Responsible AI in India Leadership Ethics & Global Impact

Session at a glance

Summary, key points, and speakers overview

Summary

The session examined how Indian corporations can move responsible AI from abstract principles to provable practice, emphasizing that trust, transparency and accountability are now foundational ( [1][34] ). Andy Parsons argued that responsible AI must become an operational discipline rather than a mere compliance slide, noting a shift toward “provable practice” ( [33][34] ). He warned of a trust crisis caused by the massive scale of generative AI and said enterprises need to prove how content was created, by which models and tools ( [38-44] ). Parsons introduced the Content Authenticity Initiative and the C2PA open standard, which embeds provenance metadata directly into media files and is backed by a cross-industry coalition including Adobe, Microsoft, BBC and others ( [55-62][66-68] ). He stressed that open, interoperable, non-proprietary standards must be implemented in working code, a point especially relevant for India’s huge digital population ( [70-74] ).


Prativa Mohapatra explained Adobe’s “ART” (accountability, responsibility, transparency) philosophy, describing how provenance checks are baked into products such as Firefly and Acrobat Assistant so that inputs are licensed and outputs can be audited ( [196-204][208-220][224-228] ). She added that coordinated legal, compliance and ethical teams are essential, and that neglecting any pillar threatens future readiness ( [235-239] ). Amol Deshpande highlighted that responsible AI must be orchestrated across all five AI layers, involve people, processes and guardrails, and cannot be a one-size-fits-all solution; instead organisations should offer a “bring-your-own-AI” framework ( [162-170][176-181][187-190] ). Vishal Anand Kanwati described NPCI’s transparent transaction-decline explanations via a language model and affirmed that governance principles such as transparency are non-negotiable for trust in payment systems ( [287-293][295-298] ). Satya Ramaswamy described Air India’s generative-AI virtual assistant, which handles millions of queries, with safety “knobs” and continuous human-in-the-loop monitoring to satisfy global aviation regulations ( [258-262][261-264] ). He argued that complying with diverse international regulations does not hinder innovation, citing the airline’s ability to launch the industry’s first AI assistant while remaining within regulatory bounds ( [341-345][350-354] ).


The panel debated whether industry-led governance can replace regulation; Amol and Vishal stressed the need for standards, awareness and industry partnerships, while both agreed that regulatory frameworks are essential to prevent AI misuse at scale ( [322-329][360-366] ). Sarika Guliani concluded that responsible AI is a commitment beyond compliance, requiring shared human values, cross-sector collaboration and alignment with the “people, planet, progress” agenda, and announced that FICCI will continue to drive the dialogue into action ( [370-376][382-383] ). The discussion underscored that responsible AI must be embedded in products, governed by open standards, and supported by both industry initiatives and regulatory oversight to realize its potential in India’s digital future.


Key points


Major discussion points


From principles to provable practice – the need for concrete standards and transparency


Andy emphasized that responsible AI must move beyond slide-deck principles to “provable practice” and that “you need standards, not just principles” [34-35][109-112]. He presented the C2PA open standard as a concrete example of an interoperable, non-proprietary framework that can embed provenance information directly into content [62-70].


Content provenance (C2PA) as a concrete case study for responsible AI


The Coalition for Content Provenance and Authenticity (C2PA) provides “content credentials” that travel with media, enabling users to see the full genealogy of an asset – what model created it, which tools were used, etc. [55-66]. The initiative rests on three pillars – transparency, accountability and inclusivity – likened to “nutrition labels” for digital content [75-86].


Enterprise-level implementation challenges and the “ART” governance model


Amol described responsible AI as an “orchestration of all layers” – technology, people, process and governance – and warned against a one-size-fits-all approach, stressing the need for guardrails and scalable templates [162-166][170-181]. Prativa echoed this with Adobe’s “ART” (Accountability, Responsibility, Transparency) framework, citing product-level examples such as Firefly’s built-in provenance and Acrobat Assistant’s safe-by-design workflow [196-210][221-228].


Regulation as both catalyst and requirement, balanced with industry-led standards


Andy framed regulation (EU AI Act, US state laws, India’s IT rules) as a “catalyst for good practices” [107-108], while Vishal highlighted the necessity of transparency in transaction decisions and referenced the RBI’s responsible-AI guidelines [286-293]. Satya explained how Air India complies with multiple global aviation regulators while still innovating with a generative-AI virtual assistant [341-354].


Ecosystem collaboration to bridge large enterprises and MSMEs


The panel repeatedly stressed that industry bodies (FICCI, C2PA, etc.) must disseminate frameworks so smaller players can adopt them. Amol called for “awareness → action → demonstration” and for industry partnerships to cascade guardrails downstream [322-336]. Prativa warned that without such shared standards, a stark divide will emerge between “big guys” and “MSMEs” [291-300].


Overall purpose / goal of the discussion


The session aimed to move the conversation on responsible AI in India from abstract principles to actionable, enterprise-level practices. By showcasing Adobe’s C2PA model, sharing governance approaches from Air India, RPG Group, and NPCI, and debating the interplay of regulation and industry standards, the participants sought to equip Indian corporates with concrete tools, frameworks, and collaborative pathways for deploying trustworthy, inclusive AI at scale.


Overall tone and its evolution


– The opening remarks were formal and aspirational, stressing the urgency of responsible AI [4-6].


– Andy’s presentation adopted an optimistic, solution-focused tone, highlighting a successful open-standard initiative [58-66].


– The panel discussion shifted to a pragmatic and candid tone, acknowledging real-world challenges (uneven adoption, cost, governance complexity) [90-101][162-181].


– As the conversation progressed, the tone became collaborative and constructive, with participants emphasizing shared responsibility, ecosystem support, and the need for balanced regulation [322-336][341-354].


– The closing remarks returned to a hopeful, call-to-action tone, urging continued dialogue and industry commitment [370-384].


Overall, the tone remained constructive throughout, moving from high-level inspiration to grounded, actionable discussion and ending with a collective commitment to advance responsible AI in India.


Speakers

Announcer – Event announcer/moderator


Vishal Anand Kanwati – Chief Technology Officer, National Payments Corporation of India (NPCI) – expertise in payments infrastructure and AI-driven fraud detection [S4][S5]


Sarika Guliani – Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI – expertise in AI policy and industry collaboration [S6][S7]


Dr. Satya Ramaswamy – Chief Digital and Technology Officer, Air India Limited – expertise in aviation AI applications and safety-critical systems [S8][S9][S10]


Shantari Malaya – Editor, Economic Times – expertise in technology journalism and AI policy coverage [S11][S12]


Prativa Mohapatra – Vice President and Managing Director, Adobe India – expertise in responsible AI product development and content authenticity [S13][S14]


Andy Parsons – Global Head for Content Authenticity, Adobe (runs the Content Authenticity Initiative) – expertise in content provenance and AI transparency [S15][S16]


Amol Deshpande – Chief Digital Officer and Head of Innovation, RPG Group – expertise in enterprise AI strategy and governance [S18][S19]


Additional speakers:


– None


Full session report

Comprehensive analysis and detailed insights

Opening & Context – Adobe, in partnership with FICCI, opened the session on “Responsible AI from Principles to Practice in Corporate India” [1-2]. The moderator emphasized that India’s current digital moment demands not only rapid AI adoption but responsible deployment, with trust, transparency and accountability described as “foundational” rather than optional [4-6].


Andy Parsons – From Principles to Proven Practice


Parsons framed the central challenge: responsible AI must move from a slide-deck concept to an auditable discipline [33-35]. He warned that 2026 will be the year responsible AI becomes both a duty and an innovation opportunity [21-22] and that organisations will soon be asked not whether they are responsible, but whether they can prove it [31-32]. He highlighted the need to consider implementation cost and day-to-day operational overhead when adopting responsible-AI practices [384-386].


The regulatory backdrop he outlined included the EU AI Act, California law, and India’s new IT rules on synthetically generated information (SGI) [387-389]. He positioned regulation as a catalyst for good practice rather than a barrier [107-108].


Parsons described the trust crisis created by the massive scale of generative AI, noting that enterprises now produce or consume AI-generated content at “extraordinary” volumes [44-45] and that the crisis is “real … happening every day to our children” [390-392]. In India, home to the “world’s largest digital population” [47-50], synthetic media and misinformation are operational risks for businesses [51-52]. Without the ability to demonstrate what was made, how, and by which models, companies cannot meet corporate responsibility obligations [53-55].


To illustrate a concrete solution, Parsons introduced the Coalition for Content Provenance and Authenticity (C2PA). This cross-industry body – including Adobe, Microsoft, BBC, Sony, Qualcomm and others – has created an open, free, non-proprietary standard that embeds “content credentials” directly into media files [396-398]. The C2PA badge is already visible on LinkedIn posts, signalling provenance to viewers [393-395]. Its three pillars – transparency, accountability and inclusivity – are likened to “nutrition labels” for digital content, providing provenance information such as the generating model, tools used and camera metadata [75-86][70-74].
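The provenance idea C2PA standardizes can be sketched in miniature. The toy Python below is only an illustration of the concept (a manifest bound to an asset hash and signed, so tampering is detectable); the real C2PA standard serializes manifests as signed JUMBF/CBOR structures using X.509 certificates and embeds them in the media file itself, and the field names here are invented for clarity.

```python
import hashlib
import hmac
import json

def make_credential(asset_bytes: bytes, model: str, tools: list[str], key: bytes) -> dict:
    """Build a toy provenance manifest for an asset.

    Illustrative only: real C2PA manifests are signed JUMBF/CBOR
    structures embedded in the media file, not JSON + HMAC.
    """
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "generator_model": model,   # which generative model produced the asset
        "tools": tools,             # editing tools in the asset's history
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    # Real C2PA uses X.509 certificate signatures; HMAC stands in here.
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(asset_bytes: bytes, manifest: dict, key: bytes) -> bool:
    """Check that the asset hash matches and the signature is intact."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest())
```

Because the manifest pins the asset hash, any edit to the bytes (or to the recorded model and tool history) invalidates verification, which is the “genealogy” property the panel refers to.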


Parsons acknowledged practical challenges: many social-media platforms strip metadata, undermining provenance [90-98]; consumer awareness of the C2PA symbol remains low [92-95]; and the business case for provenance is challenging because it does not directly generate revenue [100-101].


Panel Introduction – Shantari Malaya – The moderator introduced the panelists (Andy Parsons, Amol Deshpande, Prativa Mohapatra, Satya Ramaswamy, Vishal Anand Kanwati).


Amol Deshpande – Orchestrating Responsible AI


Deshpande argued that responsible AI must be orchestrated across all five layers of the AI stack and cannot be reduced to a single checklist [162-166]. He stressed the importance of people, processes and guardrails, describing a “bring-your-own-AI” model where each function can adopt suitable templates while the enterprise provides common guardrails [176-183][187-190]. He warned against a “one-size-fits-all” solution, insisting that scalable, sector-specific templates are needed for enterprises ranging from manufacturing to services [180-183][186-188].


Prativa Mohapatra – Adobe’s ART Framework & Product Embedding


Mohapatra explained Adobe’s internal ART (Accountability, Responsibility, Transparency) governance model, the first pillar of the company’s AI governance [196-198]. Every new Adobe product follows a rigorous, multi-step methodology that embeds provenance at the core. For example, Firefly, Adobe’s generative-AI tool, automatically attaches a “nutrition-label” style provenance tag to every output, guaranteeing that inputs are licensed and that the resulting content can be audited for compliance [208-212][214-220]. Similarly, the Acrobat Assistant inherits the trusted PDF workflow, allowing users to trace the origin of any generated document and ensuring that high-stakes outputs are traceable and legally sound [224-228]. She emphasized that legal and compliance teams must be integrated into AI governance; otherwise an organisation may fall short of future regulatory and risk requirements [235-239].


Satya Ramaswamy – Air India’s Generative-AI Virtual Assistant


Ramaswamy shared Air India’s experience with a generative-AI virtual assistant launched in May 2023, which has handled over 13.5 million queries with a 97% autonomous resolution rate [258-262]. The system employs “safety knobs” that can be dialled to balance user convenience against the risk of inappropriate responses, and customers are prompted to rate each answer’s appropriateness [260-262]. Air India also uses generative-AI models to monitor the performance of its own virtual assistant [263-264]. The airline works with partners such as Adobe to obtain indemnity against failures [263-264] and complies with multiple international aviation regulators (EU, US FAA, Indian DGCA) without letting compliance constrain Indian innovation [341-345][350-354].
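The “safety knob” idea, a tunable trade-off between autonomous answers and human escalation, can be expressed as a simple routing rule. This is a hedged sketch under invented names, not Air India’s actual implementation:

```python
def route_query(confidence: float, safety_knob: float) -> str:
    """Route a query based on model confidence and a configurable knob.

    safety_knob in [0, 1]: higher means more conservative, i.e. more
    queries escalate to a human agent (illustrative names throughout).
    """
    if not 0.0 <= safety_knob <= 1.0:
        raise ValueError("safety_knob must be between 0 and 1")
    return "autonomous" if confidence >= safety_knob else "human_agent"
```

Turning the knob up trades autonomous-resolution rate for safety, which is exactly the convenience-versus-risk dial the panel describes; the same threshold can differ per regulator or per query category.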


Vishal Anand Kanwati – NPCI’s Transparent Fraud-Detection


Kanwati illustrated how transparency can be operationalised in payments. NPCI has built a small language model that explains, in real time, why a transaction was declined, giving consumers clear, understandable reasons for fraud-related decisions [287-291]. He linked this practice to the RBI’s responsible-AI guidelines, stating that “the principles have to be adopted – there is absolutely no choice for us” [295-298]. For him, such transparency is essential to maintain trust in the nation’s digital payments ecosystem [286-293].
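A decline-reason explainer of this kind maps internal response codes to plain-language messages. NPCI’s system uses a small language model; the rule-based Python stand-in below, with hypothetical codes and wording, only illustrates the transparency pattern of always giving the consumer an understandable reason:

```python
# Hypothetical decline codes and messages, for illustration only.
DECLINE_REASONS = {
    "Z9": "Your account balance was insufficient for this amount.",
    "U30": "The beneficiary bank did not respond in time; no money left your account.",
    "RISK": "The transaction was flagged by fraud checks; please verify and retry.",
}

def explain_decline(code: str) -> str:
    """Return a consumer-readable explanation for a decline code.

    Unknown codes still get an honest fallback, so the user is never
    left without an explanation.
    """
    return DECLINE_REASONS.get(
        code, "The transaction could not be completed. Please contact your bank."
    )
```

A language model generalises this table to free-form, context-aware explanations, but the governance property is the same: every declined transaction yields a stated reason that can be audited.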


Discussion on MSMEs & Ecosystem – Both Amol and Prativa stressed that the first step for MSMEs is awareness of responsibility, followed by actionable frameworks disseminated through bodies such as FICCI [322-326][328-336]. Amol warned that large enterprises must create reusable compliance frameworks because smaller firms lack dedicated legal or AI-ethics teams [304-307]; Prativa echoed that without shared standards a stark divide will emerge between “big guys” and “MSMEs” [291-300]. The panel agreed that industry consortia should cascade templates and best-practice guidance to lower-resource organisations [328-336][316-321].


Global vs Indian Regulatory Alignment – Satya noted that complying with EU, US and Indian DGCA regulations does not stifle Indian innovation [341-345]. Vishal argued that mandatory safeguards are essential to prevent AI from “going berserk” in critical financial systems [322-327][360-366].


Regulation vs Self-Governance – A broad consensus emerged that regulation is inevitable and can act as a catalyst for good practice, but “principles alone are insufficient” – concrete, interoperable standards are required [107-108][109-112][328-336][360-366]. Tension remained between Andy’s advocacy for a universal open standard (C2PA) [62-66][70-71] and Amol’s view that industry-specific templates are necessary [180-183][328-336].


Closing – Sarika Guliani – Guliani framed responsible AI as a value-driven commitment that goes beyond a compliance checkbox, linking it to the broader “people, planet, progress” agenda [370-376]. She thanked the panelists and announced that FICCI will continue to facilitate dialogue and drive collaborative actions to translate the discussed principles into concrete industry initiatives [382-383].


Key take-aways


– Shift from abstract AI principles to provable, operational practice across people, process, technology and governance [31-35][162-170].


– Importance of open, interoperable, non-proprietary standards such as C2PA content credentials for building trust in AI-generated media [75-86][70-74][396-398][393-395].


– Adobe’s ART framework shows how accountability, responsibility and transparency can be baked into product lifecycles (Firefly, Acrobat Assistant) [196-210][221-228].


– Continuous human-in-the-loop monitoring and adjustable safety guardrails are critical for high-risk deployments (Air India, NPCI) [260-264][287-291].


– Regulation is viewed as a catalyst, not a constraint, and must be complemented by industry standards to avoid fragmented compliance [107-108][328-336][360-366].


– Ongoing challenges include metadata preservation, consumer awareness of provenance symbols, and the resource gap for MSMEs [90-98][304-307].


– Sector-specific implementations provide practical road-maps for responsible AI at scale.


Unresolved issues – Raising widespread consumer awareness of provenance symbols; providing affordable, reusable compliance toolkits for MSMEs; balancing “light-touch” regulation with mandatory safeguards; and designing detailed human-in-the-loop processes for safety-critical AI systems. The panel suggested a combined approach: baseline regulatory safeguards, open-standard adoption (e.g., C2PA), and industry-led dissemination of sector-specific templates to ensure both interoperability and flexibility [328-336][360-366][322-329].


Thought-provoking remarks – Andy’s 2026 prediction; his challenge to prove responsible AI; the description of C2PA credentials as an open, free, cross-industry standard; Amol’s “one size doesn’t fit all” reminder; Prativa’s “nutrition-label” analogy; Satya’s use of generative AI to monitor its own assistant; Vishal’s language model that explains transaction declines; and the consensus that regulation can be a catalyst rather than a hindrance [21-22][31-32][58-59][62-66][70-71][180-183][186-188][78-82][260-262][287-291][107-108][341-345].


The panel left with a shared commitment to embed open standards, sector-specific guardrails, and regulatory compliance into AI products, ensuring that responsible AI becomes a practical, measurable capability across India’s corporate ecosystem.


Session transcript

Complete transcript of the session
Announcer

Welcome to this session titled, Responsible AI from Principles to Practice in Corporate India, presented by Adobe in association with FICCI. India stands at this defining moment in its digital journey as AI becomes a powerful engine of innovation and productivity. But the real differentiator, is it about how quickly we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking about here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite our guest, Andy Parsons, Global Head for Content Authenticity at Adobe.

Andy, over to you.

Andy Parsons

Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.

I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for innovation for all of you in this room. This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.

And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week and certainly will continue to change this year, from should we be responsible with AI? I think that debate is well settled at this point. But can your systems actually prove that you have been responsible with AI, and how do you go about doing that? And for all of us, what does it cost in terms of implementation and day-to-day usage? So we want to position responsible AI today as a leadership and operating must and discipline rather than a regulatory obligation, although in 2026, I think it will be both.

So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across a cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Prativa in a moment.

But it’s made the trust problem absolutely impossible to ignore. So I think every enterprise in this room now produces or consumes AI-generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that.

Hundreds of millions of people consuming digital content every day. In this environment, synthetic content and AI-generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is: if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to provide some leadership in the C2PA, which hopefully many of you have heard about this week, which is the Coalition for Content Provenance and Authenticity.

And our sort of piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing, totally free. So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, digital experience products, but had to be baked into the tools kind of at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others.

And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have encountered a content credential, and it will provide transparent context about a piece of media, whether it’s video, audio, or image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross-industry coalition that includes all the companies I mentioned, Meta, camera manufacturers like Sony and Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others.

And the goal is for an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company. It should be standards-based. It should not be proprietary, but available to everyone. And this shared philosophy, I think, is especially important here in India. And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website. So what are the principles that the Content Authenticity Initiative and the C2PA reflect? Transparency: provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used.

Simple ideas like knowing that a photograph is actually a photograph and not generated. These are simple ideas. I’d say we’re well overdue having this ability, but we need it now more than ever. Accountability: you can trace those AI models, understand what was used, how it was used, and thereby understand the provenance if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children. No one’s going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.

Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India can apply the same kind of provenance at the same zero cost as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI transparency and responsible AI will present you, and I’m sure you’ll hear more about that with our esteemed panel. But adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here. We’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it.

Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are phones and cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I’ve often said that doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money. But it is critically important. And now we’re seeing that change as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.

What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. Responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference that we’re here to talk about today, the difference between principle and practice. I think you need cross-industry infrastructure. Techniques you use for responsible AI should be interoperable, open, and standardized. And I think you have a long track record here in India of mobilizing things like UPI payment infrastructure that not a single bank could do or a single government agency, but required massive-scale cooperation for openness, standards, and most importantly, interoperability.

And last, enterprises, like many of yours in this room, that are willing and excited to go first, that really look at transparency and AI responsibility as an opportunity, not a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again, and in 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things, and that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries because we have representatives from all of them on our excellent panel today.

So we’re fortunate to have leaders from Air India, NPCI, RPG Group, and Adobe, each of whom is navigating and translating the sort of remarks that I’ve made in different ways for different outcomes and providing, I would say, exemplary ways to find their pathway through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantari Malaya, editor at the Economic Times, who covers these infrastructure and societal sort of breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me. And let’s get on to our panel. Shantari.

Shantari Malaya

Thank you so much, Andy. That was fantastic. It set the context very rightly for the discussion coming up. So a warm welcome once again from my end. My name is Shantari Malaya. I’m editor at Economic Times. Welcoming you all to the panel, right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI at a very momentous time for India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI across industry, infrastructure and policy perspectives, it really becomes important to know what some premier leaders in the country are thinking about this. So this dialogue will examine some parts of responsible AI and how it’s really going to shape, reshape enterprise strategy.

So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanwati, Chief Technology Officer, National Payments Corporation of India, NPCI. We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe India. This is a fantastic lineup, and we shall get some very sharp insights from our panelists over the next half hour or so. So at the very outset: building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether it's fairness, accountability, transparency, privacy or inclusivity, are realistically going to be translated into enterprise strategy frameworks, and how we are going to go about it.

Right. So Amol, let me call you into the discussion; warm welcome. I'm keeping a tight watch on the timer right behind us; as we know, this is a summit of scale and we really need to help the organizers keep good time. So Amol, very quickly, as I invite you into this discussion: as part of the RPG Group, you represent enterprises that are deploying AI at scale, and many more that are just finding their footing as well. So there's a huge spectrum and diversity in the kind of organizations you represent. Two things for you, very quickly. One is: in a large multi-business group such as RPG, how are you preventing responsible AI from becoming a mere centralized compliance exercise, or, on the flip side, a fragmented, business-unit-wise checklist?

So there are two risks that can happen, right, in a group, in a conglomerate: over-centralized or fragmented. So how are you really looking at the balance here? And how do you see your role in an industry body as well? All yours.

Amol Deshpande

Thank you, Shantari. I'm very happy to be here; thank you for having me with these esteemed panelists. You ask a very pertinent question. It's a to-be-or-not-to-be kind of scenario when it comes to AI, but that "not to be" is not really a choice. Andy did mention responsible AI, and I would take a little stab at peeling back and looking at where responsible AI comes from when it comes to industries. It cuts across all five layers of AI. When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using them for, the responsibility needs to be there at every layer.

It's not one or the other; it has to be an orchestration of all the things. So far, AI in its very nascent forms had been a thing of centers of excellence, trying use cases and seeing what happens, but now it has come to scale. And when it comes to enterprises, and manufacturing enterprises, of which we have a significantly higher share as consumers of AI technologies, there is a very clear-cut view on how it is to be done. You need to provide the playground for the enterprise; it has to operate and function with agility. The other part is about people. People are a very, very important stakeholder in the whole thing.

We are moving from AI/ML and generative AI to more complex scenarios and agentic AI, so people are a very, very important aspect of it. The choice still remains with humans. Awareness is important, and enterprises like us spend a significant amount of time and effort in building those skill sets across the value chain of all the people involved. Last but not least is the process and governance part which comes with it. It's more about guiding principles which need to be given so that they create the opportunity. If one can say so, it will be more of a bring-your-own-AI kind of scenario in every function.

You cannot provide one solution; one size doesn't fit all. So when you are dealing with such scenarios, a scalable, safe environment protected with guardrails is a key thing for us. Orchestration and getting to scale, that's what matters. Those templates are being exercised and practiced within a very diverse group like RPG ourselves, and then they can be deployed at multiple levels.

Shantari Malaya

Absolutely. So as you said, one size doesn't fit all, right? And I liked your coinage of bring-your-own-AI. So let me quickly bring in Prativa here. Welcome, Prativa. You may just have to switch that on. So Adobe has consistently positioned responsible AI as a product philosophy and a trust commitment, a fairly large commitment. How do you look at all these principles manifesting or panning out in terms of operationalizing them across all your product teams, and sending it out as a strong positioning internally? And at the same time, as someone who's led a lot of industry conversations, where do you see enterprises struggling to get these things right?

So all yours here.

Prativa Mohapatra

Okay, thank you. So I think Andy set the context, and since we are here not to learn about the principles but the practices, I think everybody should go back with certain practices. The first practice of AI governance which we follow is ART: accountability, responsibility and transparency. So if every person goes back to their organization and talks about ART, which is our philosophy, that's practicing philosophy number one. And we have actually been doing this for our own products for a while now. Of course, we have been in the business of content for a very, very long time, and now that same content is becoming the currency which everybody's debating. So our principles have been there for a while; the question is how they are actualized.

And let me say how it's translated into our products. And by the way, it's in our products, it's in our methodologies. Every new product that we have goes through a very strong, secure methodology with hundreds of steps inside it, so the principles are embedded into how we create things. But a couple of examples. Firefly, which is our Gen AI tool, actually embeds what Andy described: those Content Credentials, the nutrition labels. Anything that is generated out of this product will carry that nutrition label. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law, you will not be getting into any liability issues. Because how you do it is by what you feed it.

Because AI is all about the input and the output. The input has to be something which will not land you in trouble; you cannot take somebody else's data. So here, everything is licensed. It goes into the models, and then you have to test the output which comes out. With that output, will we be accountable? Will we be responsible in showing the transparency of how it was created? So I think that loop has to be created in using any AI; Firefly is an example. Let me talk about Acrobat, which everybody has. I'm sure 100% of you have PDF files on your phones or on your machines. So Acrobat has this new feature called Acrobat Assistant.

It is agentic, and we have so many chatbots in the market. But when you come to an assistant like Acrobat Assistant, it follows the same principles with which PDF was created, so everybody is confident when using PDF. You would have read in the papers recently that the Supreme Court was very worried that certain lawyers had petitions which referred to cases that do not exist, or stated certain laws which are fictitious. So imagine somebody has created content using sources which were not authentic. Now, if you use Acrobat-type products for that, you feed the data or files from your own machine.

So you're confident that with what comes out of it, you can go back to the source. Wherever there is usage of high-stakes, enterprise-grade output, you have to look at this input-output process and follow the philosophies within it. And I think every enterprise doing that today really has to consider what Amol already talked about: people, process, technology. I'm sure every organization today has a legal team and a compliance team, but these teams have to re-orient to talk about AI compliance. Enterprises do business strategy, they have ethical strategies, and then they have regulatory compliance. For anything that you do in AI, ensure that you tick all three; if you miss any one, you might not be ready for the future.

So that’s how I see it.
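The input-licensing and output-audit loop described above can be sketched as a simplified provenance "nutrition label": hash the licensed inputs, record the model, and sign a manifest attached to the output so it can be verified later. This is an illustrative stand-in only; the field names, `SIGNING_KEY`, and HMAC signing are invented for the sketch and are not Adobe's Content Credentials or the C2PA manifest format, which uses certificate-based signatures.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate


def make_manifest(output_bytes, input_assets, model):
    """Build a simplified provenance manifest ('nutrition label')."""
    manifest = {
        "model": model,
        "inputs": [{"name": name, "sha256": hashlib.sha256(data).hexdigest()}
                   for name, data in input_assets],
        "output_sha256": hashlib.sha256(output_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest


def verify_manifest(output_bytes, manifest):
    """Check the output hash and the signature before trusting the asset."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SIGNING_KEY, payload, "sha256").hexdigest())
    ok_hash = claimed["output_sha256"] == hashlib.sha256(output_bytes).hexdigest()
    return ok_sig and ok_hash
```

Verification fails if either the output bytes or any manifest field is altered, which is the auditability property the panelists describe: the output can always be traced back to licensed inputs and a named model.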

Shantari Malaya

Absolutely. So I guess the thread of most of what AI now entails is: are we not moving fast enough, or are we moving so fast that we're not really able to own the operational consequences of what we're setting out to do? Great point on that, Prativa; I'll circle back to you, time permitting. Dr. Satya, calling you in here. Aviation: volumes, landscape, scale, you name it, it's all there. So how are you balancing AI-driven innovation against regulation, accountability and operational efficiency? At the same time, you cannot really compromise on user and customer experience.

How do these things really fall in place in terms of vision and metrics?

Dr. Satya Ramaswamy

Thank you, Shantari. Since the audience is international, a real quick introduction to Air India. Air India is India's national flag carrier. We operate about 300 aircraft, and we carry more than 100,000 customers a day. And we have a few hundred airplanes on order, so once they are delivered, we will be one of the biggest airlines in the world, on the scale of one of the three large American carriers. So we are building up an airline of scale, and it brings about the very interesting challenges we talked about. Let me illustrate the way we handle it with one of our own examples in generative AI. In May of 2023, we launched the global airline industry's very first generative AI virtual assistant out of India.

So it was a global first in the whole airline industry. Today it has handled about 13.5 million queries from customers, about 40,000 queries a day, and it operates at roughly one-hundredth the cost per query of a contact center. If you look at customer preferences over the last two and a half years that we have been operating it, facing all the challenges you mentioned: 50% of the contact volume goes to the contact center, where they want to talk to a human agent. The remaining 50% comes to AI.g, which handles 97% of those queries autonomously; only 3% are escalated further to an agent. So a pretty high success rate. And we faced this challenge from day one. We started working on it in November 2022, when the Azure OpenAI services became available, and the whole approach to responsibility and safety has evolved over time. If you dial the safety knob too far, it becomes an inconvenience to the customer: we practically cannot answer any question, because customers are always changing the way they ask things, and we have to be very flexible. Clearly, generative AI takes us a large step towards that.

At the same time, we don't want any jailbreak to happen, we don't want prompt injection to happen, we don't want any inappropriate thing to happen, so we are watching the whole performance of the virtual assistant, AI.g as we call it, all the time. We in fact use generative AI to watch the performance of the generative AI chatbot, and we have also given a voice to the customer: at the end of the day, when we send a response, we ask the customer whether it answered their question, and also allow them to give their reaction, whether it was appropriate or inappropriate. And thankfully, over the last two and a half years, it has not answered one single question in an inappropriate fashion, because we have embedded all the safety procedures deep into the way we handle it. But now, as the technologies are maturing, we have interesting technologies such as prompt firewalls, where we can centralize all these controls, and we obviously work with great partners like Adobe, who do their diligence in the way they have deployed some of these technologies, giving full indemnity to us in the event of a problem.

That gives a lot of confidence in the way that we manage the risk. So it’s about managing the risk of something that is not within the bounds happening versus the convenience of the customer and we handle it in a variety of ways like I talked about just now.
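The "generative AI watching generative AI" loop described here, a reviewer behind each chatbot turn plus customer feedback for human-in-the-loop monitoring, can be sketched in miniature. The function names, rules and escalation policy are invented for illustration; a production system would use a second model as the judge rather than string matching.

```python
# Minimal sketch of a guardrail reviewer sitting behind a chatbot.
# A rule-based stand-in flags likely prompt-injection attempts and
# escalates them to a human agent instead of sending the reply.
SUSPICIOUS_PATTERNS = ("ignore previous instructions",
                       "reveal your system prompt")


def review_turn(user_query: str, reply: str) -> dict:
    """Return an allow/escalate verdict for one chat turn."""
    flags = [p for p in SUSPICIOUS_PATTERNS if p in user_query.lower()]
    return {"verdict": "escalate" if flags else "allow", "flags": flags}


feedback_log = []


def record_feedback(query: str, reply: str, answered: bool, appropriate: bool):
    """Store the customer's did-it-answer / was-it-appropriate signal
    for later human-in-the-loop review."""
    feedback_log.append({"query": query, "reply": reply,
                         "answered": answered, "appropriate": appropriate})
```

Tightening `SUSPICIOUS_PATTERNS` is the "safety knob" trade-off: stricter rules escalate more turns and inconvenience more customers, looser rules risk letting an inappropriate answer through.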

Shantari Malaya

Excellent. And given the kind of scale you’re operating at, I think every day is a new day.

Dr. Satya Ramaswamy

Yes, it is. We face challenges; there is something brand new every day.

Shantari Malaya

Absolutely well stated. Thank you, Dr. Satya. Vishal, may I call you into the discussion? We're waiting to hear from you. NPCI runs the largest digital payments infrastructure platform, and you kind of call the shots, for want of a better coinage, in terms of how the payment systems in this country move. So two quick questions here, or rather let me phrase them as one, so that we can get a comprehensive view from you. How are you looking at AI in terms of being inclusive and ensuring fairness, when it comes to two parts?

One is: how can India play an important part in creating responsible-AI-by-design for a national digital infrastructure platform such as yours? And B: given the volume, scale and size, fraud also becomes an unfortunate part of the entire discussion. So how are you looking at AI being fair and at the same time proactive in detecting fraud? What are the aspects that you look at keenly here?

Vishal Anand Kanwati

I think we had to start slowly and accept that the accuracy could be a little lower, but the false positives, which is a genuine transaction being tagged as fraud, should not be very, very high. I think those were the first principles on which we started. But over a period of time, once we had more data, once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So these were the fundamental principles. And once we started with this success, we were able to understand the customers better, their patterns better, and that gave us a lot of insights into fine-tuning the models and taking it forward.

Absolutely. So coming to the first question that you asked: obviously the governance principles are core to it, but I would like to call out two things. One is transparency. If a customer has a transaction that has failed, they should know why it failed. Today we have built a small language model where you can go and actually chat and ask what happened to this transaction, why it was declined. Even if it was declined due to a fraudulent transaction, due to suspicious activity, we can actually tell the customer: this is what we noticed, you don't normally send this kind of transaction, or you have never scanned a QR code before and this is the first time you are doing so, and this is the reason why we have declined it. So this level of transparency, and ensuring those questions get answered, is very important. Obviously we can't have an army of people sitting and answering these questions, but building systems that answer them is very, very important. And I think we have a beautiful framework: the RBI has also given a framework for responsible AI, and the MeitY document is fairly comprehensive. So all the principles have to be adopted; there is absolutely no choice for us. And I don't see it as a challenge at all, because in our experience it has been very, very helpful in ensuring the trust in the payment system is not compromised.
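The false-positive constraint described at the start of this answer, holding down the rate at which genuine transactions are tagged as fraud even at some cost to fraud recall, can be sketched as a threshold choice over model risk scores. The function and numbers below are purely illustrative, not NPCI's actual system.

```python
def pick_threshold(scores, is_fraud, max_fpr=0.01):
    """Lowest risk-score threshold whose false-positive rate (share of
    genuine transactions flagged as fraud) stays within max_fpr.

    scores: model risk scores per transaction (higher = riskier)
    is_fraud: ground-truth labels from historical data
    """
    # Genuine-transaction scores, highest first: these are the
    # transactions we risk wrongly declining.
    genuine = sorted((s for s, fraud in zip(scores, is_fraud) if not fraud),
                     reverse=True)
    allowed = int(max_fpr * len(genuine))   # genuine txns we may flag
    if allowed >= len(genuine):
        return 0.0                          # budget permits flagging all
    return genuine[allowed] + 1e-9          # just above the cutoff score
```

Transactions scoring at or above the returned threshold get flagged; lowering `max_fpr` raises the threshold, trading fraud recall for fewer genuine declines, which matches the "accuracy can be a little lower, but false positives must stay low" starting principle.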

Shantari Malaya

Absolutely, and given the scale you're operating at, I'm also itching to ask you about the human in the loop there, but maybe I'll pick your brains offline; that's a discussion for another time. So, Prativa, curious to know: responsible AI, while it remains there in letter and spirit, do you think it risks getting relegated to something of a large-enterprise luxury? Is it able to cut across and come down the line to aspiring businesses, growing businesses, MSMEs? Is it able to cut through the noise? What's the responsibility of industry leaders and the larger enterprises in harmonizing a framework that can define responsible AI? How would you look at this?

Is it at risk?

Prativa Mohapatra

Absolutely, yes. I think we stand at a time when, among the creators of AI technology, the divide between the big guys and the small guys might just become very stark. And coming down to the users of AI, the big enterprises versus the MSMEs who are in a big rush to make a profit and do something, that divide can happen too. Hence it is the responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It's a tongue twister, but I think the responsibility is very, very big right now. Again, take Adobe's example: while the entire AI big bang started happening after November 2022, so early 2023-2024, our models were there, and our Content Authenticity Initiative dates from 2019.

So I think that large enterprises who create technologies are absolutely responsible. And those frameworks now being taken up by many more is again an act of responsibility back to the business. The creators of these technologies have to come together and keep creating these methods and methodologies for others to adopt. Now, for the users of these enterprise-grade AI technologies, it's very hard. Ten years back we had digital transformation; now we are having AI transformation. So the big companies have to quickly create a new org structure and build up legal teams, which, by the way, had just finished mulling over the digital guidelines of various continents and countries, and now have to go through the AI guidelines of countries.

So you have to infuse more people into those legal teams. Small organizations cannot do that. The people, process and technology changes required to adopt this, the big guys can maneuver: shift people, take people out from here, put them there. The MSMEs don't have that luxury. So I guess the creators have to create frameworks so the right technology is created. The users, the big guys, have to quickly share the methodology. And then the other stakeholders, like the service providers, also have to move quickly. I come from the industry where we went from custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?

Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since AI, as we have been hearing, is a civilizational change, similar to electricity and steam, it will change everything. Because of the impact at the societal level on each one of us, the governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem.

Shantari Malaya

Absolutely. Very rightly stated about the larger collective responsibility of the bigger players in defining the standards; I think that's very critical. Amol, if I may ask you: as Prativa said, there is an accountability. MSMEs are growing, and there is also policy that supports that growth at this point in time, right? So in their hurry to scale and innovate, they often forget what guardrails and consequences they have to face when it comes to their AI policies, strategies and implementations. So what's the role of the industry, industry bodies and the entire ecosystem at large in helping responsible AI move in letter and spirit?

Amol Deshpande

Shantari, I think the first step towards being responsible for anything is awareness, right? So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for it. That's the first thing. Second comes the action part of it: awareness, action, and then you demonstrate it through your products and services, or whatever you are trying to create, and generate that kind of impact. How does that percolate? I echo the sentiment which Prativa mentioned here, that big players have to come up with those frameworks. Those frameworks need to get translated, and industry partnership is a very key thing here, through the industry bodies, right?

Where the learnings have to be disseminated. Second, it's more of a demand-and-supply kind of thing: if the supply comes with the right guardrails and responsible aspects as part of the framework, then naturally the suppliers start aligning to it. For a business like ours, where we deal in everything from infrastructure to healthcare, and IT to agriculture and tyres, it is a very diverse setup and there are different kinds of templates we need. Organizations like us have the responsibility of creating a framework which will be fair to us as well as to our customers and partners, and those learnings can be shared across the value chain, where MSMEs may not have access to that kind of information, and that too domain-specific.

Mind you, this would change. It's not that one guardrails construct will work for everybody; it would vary from industry to industry, function to function, and that kind of cascading through industry bodies like FICCI and others is, I think, very, very critical.

Shantari Malaya

Thank you for that, Amol. Dr. Satya, as we know, there are now plenty of global regulations, rules and recommendations that have come in: you have the EU AI Act, you have UNESCO's recommendations, you have the OECD principles, and so on and so forth. And India is also inching towards developing its own strategies, policies and approaches at this point in time. So the real leadership question that remains is: how are we looking at marrying global best practices with the diversity, the scale, the fire in the belly that India has at this point in time? We are really gearing up to go. And besides, of course, we have a lot of domestic, industry-wise regulation as well; we have regulators, we even have the DPDP Act, we have so many things that have come in. How are we going to marry all of this and create harmony?

Dr. Satya Ramaswamy

Absolutely. Taking Air India: we are an international airline, so we operate in many countries. For example, we go to North America, the US, where the Federal Aviation Administration is the key regulator; then we go to Europe, all places in Europe; and obviously we operate in India, where the DGCA is the regulator, and they are doing a great job overseeing this industry; and likewise in other parts of the world. So by nature we are geared to looking at the regulation in all parts of the world and being in compliance.

And our customers are international, and when we operate in these international geographies, we have to comply with the appropriate regulations. And again, aviation is a very safety-critical industry, right? What we do has a direct impact on the safety of the customers we carry, and these notions are well embedded in the industry because of that, because it's highly regulated. For example, many of these planes can practically land themselves, right? I was in a simulator last week for an Airbus A320, landing the plane at San Francisco airport. As we were coming in, the plane was set up at seven miles from touchdown, and my trainer pilot gave me the controls.

So I could run the plane practically on autopilot all the way. At the same time, there is a red button on the joystick, so at any moment, if I feel the airplane is not doing the right thing, that the autopilot is not doing the right thing, I can quickly cancel and take back control, right? This concept is well embedded in the airline industry. We know that we need to obey the regulations and do the right thing that is safe for the customer, but we also let the human in the loop take control if at any moment we feel safety is at risk. So, bottom line: we comply with all the regulations, and it doesn't in any way constrain Indian innovation.

For example, like I mentioned, we launched the global airline industry's first generative AI virtual agent out of India, and we have not had any challenge with any of the regulations, because we comply with all of them and we work with partners who approach it the same way.

Shantari Malaya

So, Vishal, taking a thread from what Dr. Satya said, I'll close with you here. Is industry-led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in some related industries who said it's very difficult to answer this, but that self-regulation may be a way forward given the scale we are operating at. I'd like to know your thoughts here.

Vishal Anand Kanwati

Yeah, I think the regulations are definitely required, especially because AI can go berserk. Like the example I gave you on transactions: in theory, all the UPI transactions could get declined. That's where we have a check where we say, this is the only percentage that I can decline, even if I have to let go of the other transactions, right? So those safeguards are very much required, and when this has to hold across the ecosystem, I think regulations are mandatory. Obviously they have to be consulted on, and we have to work with everyone. But it's important.

While all of us realize it's a great opportunity and that innovation can really scale up, I think regulation is one thing we have to take as part of our initiatives, embed into our systems and then take forward; otherwise the chances of this becoming a challenge for us are really high.

Shantari Malaya

Fair enough. I think in an economy that is maturing, regulatory intervention is an inevitability that we must welcome at some level. So, great discussion; this was fantastic. I'm itching to ask you more, but I think we'll have to call this discussion to a close. Thank you so much; let's put our hands together for our esteemed panelists. This was really nice. May I now invite Ms. Sarika Guliani, who is the Senior Director, Head of AI, Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI. Thank you so much, Sarika, please.

Sarika Guliani

First of all, what an insightful session. I would say that if I start capturing the thoughts, two minutes and 36 seconds would not do justice to it. But overall: starting with Andy mentioning the initiatives Adobe has been able to take through, and how you are responsibly developing content. Prativa talking about ART, which is really interesting, covering accountability, responsibility and transparency. And of course Amol mentioning all the five layers and how the responsible development of AI needs to be done. Dr. Satya, no second thoughts to it, and the same goes for NPCI: the kind of work which the national carrier of India is doing, and which NPCI is handling, has to be a balance of responsible AI, efficiency, and the action that can be taken. We left it on the note of what regulation is required; that sentence alone would require another session, because there would be people arguing for light-touch regulation versus balanced regulation. As part of FICCI, and from the discussions we were hearing here, we feel that responsibility is no longer just a compliance check; it is a commitment to developing technology that carries shared human values. The decisions we take now, not just the words we are discussing here, are what will define our future: what we choose to create is what will define us. So the choice comes out of the whole thing. You have heard the panelists talking, from the input side to the output side; a very good example was given by taking it through the whole process. So we simply feel that, whatever the layer, it has to be developed keeping people in mind, and the theme of the summit, people, planet and progress, should be kept in mind while doing any technological innovation, keeping the principles of responsible AI in mind.

That is something which we highly feel and support, and with that I would like to thank our esteemed panelists. Thank you, Andy, of course, for joining us today; Shantari for moderating it and capturing it well in time; and of course our panelists: Dr. Satya Ramaswamy, Chief Digital and Technology Officer, Air India; Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation, RPG Group; Mr. Vishal Kanwati, CTO, NPCI; and Ms. Prativa Mohapatra, Managing Director, Adobe. And of course my lovely audience, with whom we could sail through this. As part of FICCI, we are thankful that we could have this joint session with Adobe, the team of Adobe, Nita and Nanya who worked with my team, and the people in the background who delivered it.

So thank you all for joining us. We don't end the discussion here; we end the session here. FICCI is committed to taking this dialogue further into action with the support of the players, and we look forward to your joining at that time. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (41)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“2026 will be the year responsible AI becomes both a duty and an innovation opportunity.”

The knowledge base notes that by 2026 questions of AI responsibility and trust will move from after-thoughts to central concerns, and AI is expected to reshape management and organisational design that year, confirming the report’s view of 2026 as a pivotal moment [S104] and [S105].

Confirmedhigh

“India’s new IT rules on Self‑Generated‑Content (SGI) require transparency in AI‑generated content.”

India’s Synthetic and Generated Intelligence (SGI) regulations have been announced, mandating transparency so users can distinguish AI-generated content, matching the report’s description of the SGI rules [S108].

Confirmed (medium)

“The regulatory backdrop includes the EU AI Act.”

The EU AI Act is identified in the knowledge base as a key piece of AI regulation, confirming its presence in the regulatory landscape referenced by the speaker [S109].

Additional Context (medium)

“Trust, transparency and accountability are foundational for responsible AI deployment in India.”

Other sources stress that trust infrastructure is as critical as technical infrastructure and that accountability, transparency, rule of law and explainability are essential for AI governance, providing additional context to the claim [S59] and [S102].

Additional Context (low)

“Responsible AI must move from a slide‑deck concept to an auditable discipline.”

Discussion of AI governance in 2026 highlights the need for clear accountability mechanisms and auditable practices, adding nuance to the report’s framing of responsible AI as an auditable discipline [S104].

External Sources (113)
S1
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S2
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S4
Responsible AI in India Leadership Ethics & Global Impact part1_2 — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S5
Responsible AI in India Leadership Ethics & Global Impact — -Vishal Anand Kanwati- Chief Technology Officer, National Payments Corporation of India (NPCI)
S6
Responsible AI in India Leadership Ethics & Global Impact part1_2 — The discussion concluded with Sarika Guliani from FICCI emphasising that “responsibility is not anymore a compliance che…
S7
Responsible AI in India Leadership Ethics & Global Impact — The session concluded with FICCI’s commitment to continue translating discussions into actionable frameworks. Sarika Gul…
S8
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact-part1_2 — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S9
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Dr. Satya Ramaswamy- Vishal Anand Kanvaty – Vishal Anand Kanvaty- Dr. Satya Ramaswamy Dr. Satya focuses on balancing…
S10
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S11
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact-part1_2 — So we’re fortunate to have leaders from Air India and PCI, RPCI, and the U.S. Department of Defense, and Adobe. each of …
S12
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — So we’re fortunate to have leaders from. Air India, PCI, RPG Group, and Adobe. each of whom is navigating and translatin…
S13
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me qu…
S14
Responsible AI in India Leadership Ethics & Global Impact — -Prativa Mohapatra- Vice President and Managing Director of Adobe India
S15
Driving U.S. Innovation in Artificial Intelligence — 2. Amy Cohen – Executive Director, National Association of State Election Directors 3. Andy Parsons – Senior Director of…
S16
Responsible AI in India Leadership Ethics & Global Impact — -Andy Parsons- Global Head for Content Authenticity at Adobe, runs the Content Authenticity Initiative at Adobe
S17
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — And last, enterprises. Like many of yours in this room, I’m sure you’ve all heard the phrase, that are willing and excit…
S18
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S19
Responsible AI in India Leadership Ethics & Global Impact — – Andy Parsons- Amol Deshpande – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S20
Opening of the session — This position was supported by multiple delegations (Switzerland, Australia, Canada) and created a clear divide with cou…
S21
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges – **Charlie Hal…
S22
Certifying humanity: Labeling content amid AI flood — These debates are no longer theoretical. Provenance-based initiatives such as theContent Authenticity Initiative (C2PA),…
S23
High-level AI Standards panel — Need for Enhanced Collaboration Among Standards Organizations The UK government advocates for an open, inclusive, multi…
S24
Closing the Governance Gaps: New Paradigms for a Safer DNS — Although regulation in the DNS industry is inevitable, it should aim to avoid fragmented jurisdictional approaches. If t…
S25
https://dig.watch/event/india-ai-impact-summit-2026/why-science-metters-in-global-ai-governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S26
Global Enterprises Show How to Scale Responsible AI — High level of consensus on core principles with nuanced differences in implementation approaches. This suggests a maturi…
S27
Closing remarks – Charting the path forward — ### From Principles to Practice A central theme was the need to move beyond abstract principles toward concrete impleme…
S28
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S29
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Clara Neppel:Thank you for having me here, it’s a pleasure. So as it comes to the polls, the question is of course what …
S30
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — These key comments shaped the discussion by highlighting the complexity of implementing ethical AI across different regu…
S31
AI That Empowers Safety Growth and Social Inclusion in Action — Despite sophisticated frameworks and governance structures, significant implementation challenges remain. The gap betwee…
S32
Laying the foundations for AI governance — Low to moderate disagreement level. The speakers largely agreed on problem identification but differed on solutions and …
S33
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Auke Pals: Thank you very much, Lisa. So I hope at this point, the EU AI Act could steer us in the right direction a…
S34
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — Boosting standardization process can establish a strong lay of requirements By focusing on education, industry collabor…
S35
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — – Moe Ba- Ke Wang- Li Tian- John OMO Bocar Ba emphasized the necessity of creating unified policy frameworks that work …
S36
Unlocking potential: Addressing inclusivity barriers in e-commerce trade to deliver sustainable impact in communities everywhere (United Kingdom) — To effectively support MSMEs, GSMA emphasizes the need for greater coordination between the private and public sectors. …
S37
Enabling trade inclusion for MSMEs, women and underrepresented communities through the postal network (UPU)- UPU TradePost Forum — However, women’s representation and empowerment in MSMEs are still limited. Currently, women are sole owners of only aro…
S38
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S39
Building Indias Digital and Industrial Future with AI — This comment shifted the discussion from abstract policy concepts to concrete technical and operational realities. It pr…
S40
WS #123 Responsible AI in Security Governance Risks and Innovation — These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, a…
S41
Responsible AI in India Leadership Ethics & Global Impact part1_2 — This set the foundational tone for the entire panel discussion, moving away from abstract principles to practical implem…
S42
Main Topic 3 –  Identification of AI generated content — A pervasive sentiment of distrust could potentially undermine democratic integrity by challenging its intrinsic structur…
S43
Certifying humanity: Labeling content amid AI flood — As a result, trust is no longer formed through close inspection. Few readers have the time, expertise, or tools to verif…
S44
Skilling and Education in AI — “Five second response, I think the one action that we need to take is improve the trust infrastructure and make sure tha…
S45
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — This comment shifted the discussion from high-level policy frameworks to concrete, relatable concerns. It established a …
S46
Responsible AI in India Leadership Ethics & Global Impact — Regulation should be viewed as a catalyst for good practices rather than just reactive compliance
S47
Keynote by Uday Shankar Vice Chairman_JioStar India — Policy frameworks should reflect India’s unique ambitions and avoid wholesale adoption of Western regulatory constructs,…
S48
Can we test for trust? The verification challenge in AI — Moderate to high disagreement with significant implications. The fundamental disagreement between Yampolskiy’s pessimist…
S49
Artificial intelligence (AI) – UN Security Council — Moreover, the lack of transparency can erode public trust. If people cannot see or understand how decisions affecting th…
S50
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Jibu Elias: Thank you very much, Yoichi-san. It’s an honor to be here to share my experience building a responsible AI e…
S51
Host Country Open Stage — Context-specific solutions are essential rather than one-size-fits-all approaches
S52
Building the Next Wave of AI_ Responsible Frameworks & Standards — Moderate disagreement level with significant implications for AI deployment strategies. While all speakers agreed on the…
S53
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S54
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — – Ioanna Ntinou- Mark Gachara Example of energy efficiency passes for houses in Germany and EU that are obligatory, mak…
S55
[Parliamentary session 1] Digital deceit: The societal impact of online mis- and disinformation — As users increasingly access information through AI, it’s essential to help them critically assess these tools and under…
S56
Laying the foundations for AI governance — – Industry attitudes toward regulation: whether companies genuinely want regulation or resist it due to power concentrat…
S57
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — The discussion revealed surprisingly few direct disagreements among speakers, with most conflicts being implicit rather …
S58
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Amandeep Singh Gill: Thank you so much, Jovan, and thank you to you, Diplo Foundation, and its partners for convening th…
S59
Driving Indias AI Future Growth Innovation and Impact — “Investment also includes energy infrastructure, because without energy, there is really no compute infrastructure you c…
S60
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S61
Closing remarks – Charting the path forward — ### From Principles to Practice A central theme was the need to move beyond abstract principles toward concrete impleme…
S62
High Level Session 1: Losing the Information Space? Ensuring Human Rights and Resilient Societies in the Age of Big Tech — This provides a concrete, real-world example of how radical transparency can work in practice, moving beyond theoretical…
S63
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S64
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — These key comments shaped the discussion by highlighting the complexity of implementing ethical AI across different regu…
S65
Responsible AI in India Leadership Ethics & Global Impact — “So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for …
S66
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges – **Charlie Hal…
S67
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Because how you do it is by feeding. Because AI is all about the input and output. So the input has to be something whic…
S68
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Nadja Blagojevic: Yes, very happy to. And thank you so much for having Google here. We’re very happy to be speaking with…
S69
Global Enterprises Show How to Scale Responsible AI — The implementation challenge extends beyond organisational commitment to practical tooling and automation. Gurnani empha…
S70
AI That Empowers Safety Growth and Social Inclusion in Action — Despite sophisticated frameworks and governance structures, significant implementation challenges remain. The gap betwee…
S71
Laying the foundations for AI governance — Low to moderate disagreement level. The speakers largely agreed on problem identification but differed on solutions and …
S72
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Galia :Three minutes? One minute each, three minutes together. But just to say, I think this has been a really, really r…
S73
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Auke Pals: Thank you very much, Lisa. So I hope at this point, the EU AI Act could steer us in the right direction a…
S74
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — Boosting standardization process can establish a strong lay of requirements By focusing on education, industry collabor…
S75
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — ### SME Criticality and Transformation Urgency Bocar Ba: I think I don’t have time to be controversial, but I don’t lik…
S76
Unlocking potential: Addressing inclusivity barriers in e-commerce trade to deliver sustainable impact in communities everywhere (United Kingdom) — Collaboration is emphasized as crucial for progress in Africa, specifically in facilitating cross-border payments, which…
S77
Enabling trade inclusion for MSMEs, women and underrepresented communities through the postal network (UPU)- UPU TradePost Forum — However, women’s representation and empowerment in MSMEs are still limited. Currently, women are sole owners of only aro…
S78
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S79
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S80
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S81
Building the Next Wave of AI_ Responsible Frameworks & Standards — The discussion maintained a consistently collaborative and solution-oriented tone throughout. It began with an authorita…
S82
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S83
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S84
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S85
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S86
WS #6 Bridging Digital Gaps in Agriculture & trade Transformation — The tone was largely optimistic and solution-oriented. Speakers were enthusiastic about the potential of the Internet Ba…
S87
Day 0 Event #257 Enhancing Data Governance in the Public Sector — The discussion maintained a pragmatic and collaborative tone throughout, with speakers acknowledging both opportunities …
S88
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — The discussion began with a technology-focused, optimistic tone about AI’s transformative potential but gradually shifte…
S89
Quantum Technologies: Navigating the Path from Promise to Practice — The discussion unfolded against a backdrop of significant global investment exceeding $40 billion in quantum technologie…
S90
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S91
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began with an optimistic, exploratory tone as panelists shared different models and success stories. The …
S92
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S93
WS #106 Promoting Responsible Internet Practices in Infrastructure — The discussion maintained a collaborative and constructive tone throughout, with participants showing mutual respect and…
S94
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — The discussion maintained a collaborative and optimistic tone throughout, with panelists demonstrating mutual respect an…
S95
Dynamic Coalition Collaborative Session — The discussion maintained a collaborative and constructive tone throughout, with participants showing mutual respect and…
S96
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S97
High-level SIDS Ministerial Dialogue: Key Challenges and Opportunities — Concluding the address, the speaker alluded to further information that remained unshared due to time constraints. They …
S98
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S99
Open Forum #58 Safety of journalists online — The tone of the discussion was initially somber when describing the serious threats journalists face, but became more co…
S100
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S101
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S102
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — The accountability mechanisms, transparency, rule of law, and explainability are crucial
S103
Open Forum #30 High Level Review of AI Governance Including the Discussion — Melinda Claybaugh: Thank you so much for the question, and thank you for the opportunity to be here. As you were giving …
S104
AI in 2026: Learning to live with powerful systems — Early deployments of AI were often marked by ambiguity. Who is responsible when an automated system produces an error? H…
S105
How AI in 2026 will transform management roles and organisational design — In 2026, AI will transform management structures and automate tasks as companies strive to demonstrate real value. By 20…
S106
Keynote-Alexandr Wang — “We publish model cards and evaluation benchmarks and data so you can see how they work, their intended use, and how we …
S107
Toward Collective Action_ Roundtable on Safe & Trusted AI — Professor Jonathan Shock warned against the “Silicon Valley approach of move fast and break things” when dealing with go…
S108
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — India’s regulatory approach has gained unexpected international acceptance, with the new Synthetic and Generated Intelli…
S109
Stricter rules and prohibited practices: Unveiling the EU AI Act’s regulatory framework — The AI Act, legislation aimed at regulating the use of AI and preventing its harmful effects,has received approval from …
S110
WS #203 Protecting Children From Online Sexual Exploitation Including Livestreaming Spaces Technology Policy and Prevention — ## Alarming Statistics on Self-Generated Content Key themes that emerged included the need for better age assurance mec…
S111
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — I mean, I think that it’s exacerbated according to the data. The only thing that I can tell you is that trust has been e…
S112
Generative AI presents the biggest data-risk challenge in history — Cybersecurity specialistswarnthat generative AI systems, such as large language models, are creating a data risk frontie…
S113
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Andy Parsons
5 arguments, 191 words per minute, 2021 words, 632 seconds
Argument 1
Shift to provable practice rather than abstract principles
EXPLANATION
Andy emphasizes that responsible AI must move from high‑level principles to demonstrable, operational practices that can be verified. He frames this shift as essential for enterprises to prove they are acting responsibly.
EVIDENCE
He notes that the discussion theme is “shift from principles to provable practice” and asks whether systems can actually prove responsible AI, highlighting the need for evidence rather than just policy statements [34-35][31]. He also stresses that “you need standards, not just principles” to move beyond theory [109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes that “you need standards, not just principles” and frames regulation as a catalyst for moving from high-level principles to demonstrable practice [S5]; this supports the shift to provable practice.
MAJOR DISCUSSION POINT
From principles to provable practice
AGREED WITH
Prativa Mohapatra, Sarika Guliani, Amol Deshpande
Argument 2
C2PA content credentials provide an open, interoperable standard for provenance and authenticity
EXPLANATION
Andy describes the C2PA content credentials as an open, cross‑industry standard that attaches provenance metadata to media, enabling anyone to verify authenticity across platforms.
EVIDENCE
He explains that five years of work resulted in an open standard called the C2PA content credentials, visible as a symbol on LinkedIn, which provides transparent context for videos, audio, or images and is built on a cross-industry coalition [62-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources describe C2PA as an open, free, cross-industry standard for attaching cryptographically signed provenance metadata to media [S4] and highlight its broader adoption in the ecosystem [S21][S22].
MAJOR DISCUSSION POINT
Open standard for content provenance
AGREED WITH
Prativa Mohapatra, Vishal Anand Kanwati, Amol Deshpande
DISAGREED WITH
Amol Deshpande
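The core mechanism behind content credentials, binding a signed provenance manifest to the exact bytes of a media asset so that any tampering is detectable, can be illustrated with a minimal conceptual sketch. This is a toy only: the real C2PA standard embeds X.509 certificate chains and COSE signatures in the file itself, not an HMAC over JSON, and every name and key below is hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; C2PA uses public-key certificates instead.
SIGNING_KEY = b"demo-key-not-for-production"

def make_manifest(media_bytes: bytes, tool: str, actions: list) -> dict:
    """Build a minimal provenance manifest: what produced the asset and how."""
    return {
        "asset_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": tool,
        "actions": actions,  # e.g. ["created", "ai_generated", "cropped"]
    }

def sign_manifest(manifest: dict) -> str:
    """Sign a canonical serialisation of the manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, manifest: dict, signature: str) -> bool:
    """Fail if either the media bytes or the manifest were changed."""
    if manifest["asset_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False
    return hmac.compare_digest(sign_manifest(manifest), signature)

media = b"\x89PNG...pretend image bytes"
manifest = make_manifest(media, tool="ImageEditor 1.0", actions=["created"])
sig = sign_manifest(manifest)
print(verify(media, manifest, sig))          # untouched asset verifies
print(verify(media + b"x", manifest, sig))   # edited asset does not
```

The point of the sketch is the binding: editing the pixels invalidates the hash, and editing the manifest invalidates the signature, which is what lets anyone downstream check "who made this and with what tool".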
Argument 3
Regulation acts as a catalyst; standards are needed beyond mere statements of intent
EXPLANATION
Andy argues that regulation should stimulate good practices, but merely publishing a responsible AI commitment is insufficient; concrete standards are required to achieve real impact.
EVIDENCE
He states that regulation, such as that in India, serves as a catalyst for good practices [107] and that “you need standards, not just principles” to move beyond a website commitment [109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulation is portrayed as a catalyst for good practices rather than reactive compliance, emphasizing the need for concrete standards [S5]; collaboration among standards bodies is also urged [S23].
MAJOR DISCUSSION POINT
Regulation as catalyst for standards
AGREED WITH
Amol Deshpande, Vishal Anand Kanwati, Shantari Malaya, Dr. Satya Ramaswamy
Argument 4
Metadata stripping by platforms and low consumer awareness hinder provenance adoption
EXPLANATION
Andy points out that many social media platforms remove metadata, reducing transparency, and that consumer awareness of provenance symbols is still very low, limiting adoption.
EVIDENCE
He notes that many platforms strip metadata when content is uploaded, and that consumer awareness is early, with users unfamiliar with the provenance pin and UI elements still developing [92-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Implementation challenges are highlighted, including platforms stripping metadata and low consumer awareness of provenance symbols [S5][S4].
MAJOR DISCUSSION POINT
Barriers to provenance adoption
Argument 5
Adobe’s Content Authenticity Initiative demonstrates baked‑in provenance across creative tools
EXPLANATION
Andy highlights Adobe’s approach of integrating provenance capabilities directly into core products rather than as add‑ons, creating a foundation for trusted AI content.
EVIDENCE
He recounts that Adobe decided five years ago to embed responsible AI via content transparency into tools like Photoshop and Premiere at their core, leading to the open C2PA standard now baked into products [58-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Adobe’s integration of provenance capabilities directly into core products like Photoshop and Premiere is documented, with the C2PA standard baked into these tools [S4]; Andy’s role as Global Head for Content Authenticity at Adobe is also noted [S5].
MAJOR DISCUSSION POINT
Baked‑in provenance in Adobe tools
Amol Deshpande
5 arguments, 180 words per minute, 758 words, 251 seconds
Argument 1
Responsible AI must be orchestrated across all AI layers with people, process, and governance
EXPLANATION
Amol stresses that responsible AI cannot be isolated to a single component; it must be coordinated across the five AI layers and involve people, processes, and governance structures.
EVIDENCE
He explains that responsibility spans all five AI layers and requires orchestration of technology, people, and governance, noting the need for agility, skill-building, and guardrails across the enterprise [162-177].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for orchestration of technology, people, and governance across AI layers is emphasized in multiple sources discussing enterprise-wide AI orchestration [S8][S10] and Deshpande’s framework for scalable playgrounds, people development, and governance [S4].
MAJOR DISCUSSION POINT
Holistic orchestration of responsible AI
AGREED WITH
Shantari Malaya
Argument 2
Industry bodies must disseminate and adopt such standards to avoid fragmented compliance
EXPLANATION
Amol argues that industry consortia should share and promote open standards so that compliance does not become siloed or inconsistent across sectors.
EVIDENCE
He emphasizes that industry partnership is key for disseminating frameworks, sharing learnings through bodies like FICCI, and preventing fragmented compliance, noting that templates must be adapted per industry but shared widely [328-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for enhanced collaboration among standards organisations to prevent fragmented approaches are made in the standards-collaboration report [S23] and in discussions about avoiding fragmented jurisdictional regulation [S24].
MAJOR DISCUSSION POINT
Standard dissemination via industry bodies
AGREED WITH
Andy Parsons, Vishal Anand Kanwati, Shantari Malaya, Dr. Satya Ramaswamy
DISAGREED WITH
Vishal Anand Kanwati
Argument 3
Industry consortia and bodies (e.g., FICCI, C2PA) should share frameworks and best‑practice templates
EXPLANATION
Amol calls for collaborative ecosystems where industry groups provide reusable templates and best‑practice guides, enabling smaller players to adopt responsible AI.
EVIDENCE
He describes a demand-supply model where suppliers provide guard-rails and frameworks, which are then shared across the value chain through industry bodies, ensuring diverse sectors receive appropriate guidance [330-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of multi-stakeholder ecosystems for sharing best-practice templates and frameworks is highlighted in the standards collaboration panel [S23] and in the consensus-building analysis that notes nuanced implementation across sectors [S26].
MAJOR DISCUSSION POINT
Sharing best‑practice templates
Argument 4
Need for industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles
EXPLANATION
Amol notes that a single set of guardrails cannot suit every industry; tailored templates are required, but they should all rest on shared responsible‑AI foundations.
EVIDENCE
He repeats that “one size doesn’t fit all” and stresses the need for industry-specific templates that can be cascaded through bodies like FICCI while preserving core principles [180-182][328-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence of industry-specific templates built on shared responsible-AI foundations is provided in the analysis of sector-specific needs and core principle consensus [S26] and in the description of templates that vary by industry while preserving core guardrails [S4].
MAJOR DISCUSSION POINT
Industry‑specific responsible AI templates
DISAGREED WITH
Andy Parsons
Argument 5
RPG Group emphasizes a “bring‑your‑own‑AI” model with scalable, guarded orchestration
EXPLANATION
Amol describes RPG’s approach of allowing each function to adopt its own AI solutions within a common governance framework, ensuring scalability and safety.
EVIDENCE
He uses the phrase “bring your own AI” to illustrate that no single solution fits all, and highlights the need for scalable, safe environments with guardrails that can be practiced across the diverse RPG group [178-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RPG “bring-your-own-AI” approach, allowing business units to adopt AI within a common governance framework, is described in the session summary [S5] and aligns with Deshpande’s scalable playgrounds framework [S4].
MAJOR DISCUSSION POINT
Bring‑your‑own‑AI orchestration
Prativa Mohapatra
3 arguments · 155 words per minute · 1118 words · 432 seconds
Argument 1
Embedding “ART” (Accountability, Responsibility, Transparency) into product development and lifecycle
EXPLANATION
Prativa outlines Adobe’s internal AI governance model called ART, which embeds accountability, responsibility and transparency into every product’s development process.
EVIDENCE
She states that the first practice of AI governance at Adobe is "ART" (accountability, responsibility and transparency), and that every new product follows a secure methodology with hundreds of steps embedding these principles [196-207].
MAJOR DISCUSSION POINT
ART governance framework
AGREED WITH
Andy Parsons, Sarika Guliani, Amol Deshpande
Argument 2
Adobe’s Firefly embeds provenance “nutrition labels” to guarantee lawful, traceable outputs
EXPLANATION
Prativa explains that Adobe’s generative AI tool Firefly automatically attaches provenance “nutrition labels” to generated content, ensuring legal compliance and traceability.
EVIDENCE
She notes that Firefly embeds content credentials and nutrition labels, so any output carries provenance information that helps enterprises avoid legal liability and verify the source of data and models used [208-212].
MAJOR DISCUSSION POINT
Provenance nutrition labels in Firefly
AGREED WITH
Andy Parsons, Vishal Anand Kanwati, Amol Deshpande
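The provenance "nutrition label" idea can be sketched as a small manifest that binds an asset's hash to the model and training-data policy that produced it. This is an illustrative sketch only; the field names below are assumptions for clarity, not the official C2PA manifest schema or Adobe's actual implementation.

```python
# Minimal sketch of a provenance "nutrition label": a manifest that binds
# an asset's content hash to the model and data policy that produced it.
# Field names are illustrative, NOT the official C2PA schema.
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_manifest(asset_bytes: bytes, model: str, training_data: str) -> dict:
    """Build a provenance record tying an asset hash to its generator."""
    return {
        "claim_generator": model,               # e.g. the generating tool
        "training_data_policy": training_data,  # e.g. "licensed-only"
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
    }

manifest = make_provenance_manifest(b"fake-image-bytes", "Adobe Firefly", "licensed-only")
print(json.dumps(manifest, indent=2))
```

Because the manifest carries a hash of the asset, any downstream auditor can recompute the hash and detect tampering, which is the core of the "traceable outputs" claim.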
Argument 3
Small and medium enterprises lack resources for dedicated AI compliance teams; large firms must create reusable frameworks
EXPLANATION
Prativa points out that MSMEs cannot afford dedicated AI compliance structures, so larger enterprises need to develop reusable frameworks that can be shared or adapted by smaller players.
EVIDENCE
She observes that small organizations cannot set up dedicated legal and compliance teams for AI, whereas large firms can shift resources and create frameworks that can be disseminated, highlighting the disparity between big and small players [304-307].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion highlights the disparity between large enterprises that can develop reusable compliance frameworks and smaller firms that lack dedicated AI compliance resources, emphasizing the role of industry bodies in bridging this gap [S5][S26].
MAJOR DISCUSSION POINT
SME resource constraints for AI compliance
Dr. Satya Ramaswamy
4 arguments · 187 words per minute · 1064 words · 340 seconds
Argument 1
Deploying generative AI with safety guardrails, continuous monitoring, and human‑in‑the‑loop feedback
EXPLANATION
Satya describes Air India’s generative AI virtual assistant, which operates with adjustable safety settings, real‑time monitoring, and post‑interaction human feedback to ensure safe, reliable service.
EVIDENCE
He details the virtual assistant’s launch in May 2023, handling 13.5 million queries with 97 % autonomous resolution, using safety knobs to balance convenience and risk, and employing AI-based monitoring plus customer feedback on appropriateness to prevent jailbreaks or inappropriate responses [258-262].
MAJOR DISCUSSION POINT
Safety‑guarded generative AI assistant
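The "safety knob" described above can be pictured as a tunable risk threshold that routes each scored query either to autonomous handling or to human escalation. A minimal sketch, assuming a hypothetical per-query risk score; this is not Air India's actual system.

```python
# Illustrative "safety knob" (hypothetical, not Air India's real system):
# a tunable risk threshold deciding whether the assistant answers a query
# autonomously or escalates it to a human agent.
from dataclasses import dataclass

@dataclass
class SafetyKnob:
    risk_threshold: float  # 0.0 = answer everything, 1.0 = escalate everything

    def route(self, query_risk: float) -> str:
        """Return 'autonomous' or 'human_escalation' for a scored query."""
        return "human_escalation" if query_risk >= self.risk_threshold else "autonomous"

knob = SafetyKnob(risk_threshold=0.7)
print(knob.route(0.2))  # low-risk query: handled autonomously
print(knob.route(0.9))  # high-risk query: escalated to a human
```

Raising `risk_threshold` is the "dialing the knob too high" trade-off Satya describes: more queries get escalated, which is safer but less convenient for customers.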
Argument 2
Compliance with multiple global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation
EXPLANATION
Satya explains that Air India operates across many jurisdictions, complying with each region’s AI‑related regulations while still innovating, showing that regulation need not stifle progress.
EVIDENCE
He notes that Air India flies to North America, Europe and India, complying with the FAA, DGCA and other regulators, and asserts that meeting these rules does not constrain Indian innovation, citing the successful launch of the industry-first generative AI assistant [341-354].
MAJOR DISCUSSION POINT
Global regulatory compliance and innovation
AGREED WITH
Andy Parsons, Amol Deshpande, Vishal Anand Kanwati, Shantari Malaya
Argument 3
Guardrails must be balanced with user convenience; over‑restrictive safety reduces service quality
EXPLANATION
Satya warns that tightening safety controls too much can degrade customer experience, so a balance is needed between protection and usability.
EVIDENCE
He explains that dialing the safety knob too high makes the service inconvenient, and that the system must stay flexible to evolving customer queries while still preventing jailbreaks and inappropriate content [260-262].
MAJOR DISCUSSION POINT
Balancing safety guardrails and user experience
Argument 4
Air India’s generative AI virtual assistant handles millions of queries with 97 % autonomous resolution, backed by safety monitoring
EXPLANATION
Satya provides concrete performance metrics of the virtual assistant, demonstrating its scale, effectiveness, and the safety mechanisms that underpin its operation.
EVIDENCE
He cites that since its launch the assistant has processed about 13.5 million queries, averaging 40 000 per day, with a 97 % autonomous handling rate and continuous safety monitoring to prevent misuse [258-262].
MAJOR DISCUSSION POINT
Scale and effectiveness of AI assistant
Vishal Anand Kanwati
3 arguments · 184 words per minute · 584 words · 189 seconds
Argument 1
Ensuring transparency, fairness, and explainability in AI‑driven transaction decisions
EXPLANATION
Vishal stresses that customers should receive clear explanations for declined or flagged transactions, promoting fairness and trust in AI‑based fraud detection.
EVIDENCE
He describes a small language model that can chat with users to explain why a transaction was declined, providing transparency and helping users understand the decision, while also aiming to keep false-positive rates low [287-291].
MAJOR DISCUSSION POINT
Transparent AI decisions in payments
AGREED WITH
Andy Parsons, Prativa Mohapatra, Amol Deshpande
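The chat-based decline explanation can be illustrated by the interface it implies: every decline code maps to a plain-language reason the customer can act on. NPCI reportedly uses a small language model for this; the lookup table below is a hypothetical stand-in that shows the shape of the feature, not NPCI's implementation.

```python
# Hypothetical sketch of transparent transaction-decline explanations.
# A lookup table stands in for NPCI's small language model to show the
# contract: every decline yields a user-readable, actionable reason.
DECLINE_REASONS = {
    "VELOCITY": "Too many transactions in a short window; please retry later.",
    "DEVICE_MISMATCH": "This transaction came from an unrecognised device.",
    "LIMIT_EXCEEDED": "The amount exceeds your configured transaction limit.",
}

def explain_decline(code: str) -> str:
    """Return a plain-language reason for a declined transaction."""
    return DECLINE_REASONS.get(
        code, "Declined as a safety precaution; contact support for details."
    )

print(explain_decline("VELOCITY"))
```

The design point is that the system never returns a bare "declined": even an unknown code falls back to an honest, non-alarming default, which is what keeps false positives from eroding trust.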
Argument 2
Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments
EXPLANATION
Vishal argues that regulations are necessary to keep AI systems from causing harm, especially given the high stakes of financial transactions.
EVIDENCE
He states that regulations are required because AI can “go berserk,” citing the need for safeguards to limit false positives and protect transaction integrity, and emphasizes that such rules must be embedded across the ecosystem [360-366].
MAJOR DISCUSSION POINT
Regulatory safeguards for payment AI
AGREED WITH
Andy Parsons, Amol Deshpande, Shantari Malaya, Dr. Satya Ramaswamy
DISAGREED WITH
Amol Deshpande
Argument 3
NPCI’s AI‑driven fraud detection provides transparent explanations for declined transactions while minimizing false positives
EXPLANATION
Vishal outlines NPCI’s approach of combining accuracy with explainability, ensuring that customers understand declines and that false‑positive rates remain low.
EVIDENCE
He explains that the system aims for low false-positive rates, improves accuracy over time through data and industry collaboration, and now offers a chat-based interface that tells users why a transaction was declined, reinforcing trust [280-291].
MAJOR DISCUSSION POINT
Transparent fraud detection at NPCI
Sarika Guliani
1 argument · 141 words per minute · 586 words · 249 seconds
Argument 1
Responsible AI is a value‑driven commitment, not just a compliance checkbox
EXPLANATION
Sarika asserts that responsible AI should be rooted in shared human values and ethical commitments rather than being treated merely as a regulatory formality.
EVIDENCE
She remarks that responsibility is no longer a compliance check but a technology commitment built on shared human values, emphasizing that decisions now define what we create rather than just following rules [372-376].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion stresses that responsibility is no longer a mere compliance check but a technology commitment built on shared human values [S5].
MAJOR DISCUSSION POINT
AI responsibility as value‑driven commitment
AGREED WITH
Andy Parsons, Prativa Mohapatra, Amol Deshpande
Announcer
2 arguments · 129 words per minute · 129 words · 59 seconds
Argument 1
AI is a powerful engine for innovation and productivity in India’s digital journey
EXPLANATION
The Announcer frames AI as a key driver that can accelerate India’s digital transformation and economic growth, positioning it as a defining moment for the nation.
EVIDENCE
The opening remarks state that India stands at a defining moment in its digital journey as AI becomes a powerful engine for innovation and productivity, highlighting the strategic importance of AI for the country [2].
MAJOR DISCUSSION POINT
AI as catalyst for national development
Argument 2
Responsible deployment of AI, grounded in trust, transparency, and accountability, is essential and non‑optional
EXPLANATION
The Announcer emphasizes that the real differentiator is not the speed of AI adoption but the responsibility with which it is deployed, insisting that trust, transparency and accountability must be foundational pillars.
EVIDENCE
The speaker contrasts rapid adoption with responsible deployment, stating that trust, transparency and accountability are no longer optional but foundational for AI initiatives [3-5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session highlights that trust, transparency and accountability are foundational pillars for AI, moving beyond optional compliance [S5].
MAJOR DISCUSSION POINT
Foundational pillars of responsible AI
Shantari Malaya
5 arguments · 160 words per minute · 1621 words · 605 seconds
Argument 1
Responsible AI principles must be translated into concrete enterprise strategy frameworks
EXPLANATION
Shantari argues that the value of responsible AI lies in its operationalization within companies, requiring clear strategies that embed fairness, accountability, transparency, privacy and inclusivity into business models.
EVIDENCE
She notes that building trustworthy and inclusive AI will be about how responsible AI principles are realistically translated into enterprise strategy frameworks and how organizations will go about it [144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to operationalise responsible AI within enterprise strategy frameworks is echoed in the orchestration literature and consensus-building reports on translating principles into practice [S8][S26].
MAJOR DISCUSSION POINT
Operationalizing responsible AI in enterprises
Argument 2
Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem
EXPLANATION
Shantari states that in a maturing economy, regulation will inevitably play a role and that stakeholders should view it as a positive catalyst rather than a barrier.
EVIDENCE
She remarks that regulatory intervention is an inevitability that must be welcomed at some level, indicating acceptance of regulation as part of the AI governance landscape [369-371].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulation is described as inevitable and should be viewed as a catalyst rather than a barrier, with recommendations to avoid fragmented regulatory approaches [S24][S23].
MAJOR DISCUSSION POINT
Regulation as a necessary component of AI governance
AGREED WITH
Amol Deshpande
Argument 3
One‑size‑fits‑all solutions are unsuitable; industry‑specific approaches are required
EXPLANATION
Shantari highlights that different sectors have distinct needs, praising the “bring‑your‑own‑AI” concept and emphasizing the necessity for tailored frameworks rather than uniform standards.
EVIDENCE
She acknowledges that “one size doesn’t fit all,” appreciates the “bring your own AI” coinage, and stresses the need for sector-specific solutions [186-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses stress that while core responsible-AI principles are shared, implementation must be sector-specific, reflecting nuanced differences across industries [S26][S4].
MAJOR DISCUSSION POINT
Need for sector‑specific responsible AI models
AGREED WITH
Amol Deshpande, Andy Parsons
Argument 4
The pace of AI adoption must be balanced with the ability to manage operational consequences
EXPLANATION
Shantari points out the tension between moving quickly with AI and ensuring organizations can own the operational risks and consequences of rapid deployment.
EVIDENCE
She reflects on whether the industry is moving too fast to own operational consequences, questioning the balance between speed and responsibility [241-245].
MAJOR DISCUSSION POINT
Balancing speed of AI adoption with operational responsibility
Argument 5
Industry bodies and ecosystems must help MSMEs adopt responsible AI frameworks
EXPLANATION
Shantari stresses that larger enterprises and industry consortia have a duty to create reusable, accessible frameworks so that small and medium businesses can implement responsible AI without prohibitive costs.
EVIDENCE
She asks Amol about the role of the ecosystem in helping responsible AI move from letter to spirit, highlighting the need for industry-led support for smaller players [316-321].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session notes that industry bodies have a duty to create reusable, accessible frameworks for MSMEs, reinforcing ecosystem support for smaller players [S5][S26].
MAJOR DISCUSSION POINT
Ecosystem support for MSME responsible AI adoption
Agreements
Agreement Points
Regulation is essential and can act as a catalyst for good practices, but standards are needed beyond mere principles
Speakers: Andy Parsons, Amol Deshpande, Vishal Anand Kanwati, Shantari Malaya, Dr. Satya Ramaswamy
Regulation acts as a catalyst; standards are needed beyond mere statements of intent
Industry bodies must disseminate and adopt such standards to avoid fragmented compliance
Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments
Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem
Compliance with multiple global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation
All speakers agree that regulation will inevitably shape responsible AI and should be viewed as a catalyst rather than a barrier; however, merely publishing commitments is insufficient – concrete, interoperable standards are required to translate principles into practice [107][109][328-336][360-366][369-371][341-354].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with the view that regulation should be a catalyst for good practices rather than mere compliance, as expressed in recent Indian policy discussions and OECD-style frameworks [S46][S47].
Transparency and provenance must be embedded in AI systems and clearly communicated to users
Speakers: Andy Parsons, Prativa Mohapatra, Vishal Anand Kanwati, Amol Deshpande
C2PA content credentials provide an open, interoperable standard for provenance and authenticity
Adobe’s Firefly embeds provenance “nutrition labels” to guarantee lawful, traceable outputs
Ensuring transparency, fairness, and explainability in AI‑driven transaction decisions
Responsible AI must be orchestrated across all AI layers with people, process, and governance
The panel concurs that AI-generated content and decisions need verifiable provenance and explainability; open standards like C2PA and built-in “nutrition labels” are examples, and platforms should avoid stripping metadata while providing clear reasons for AI-driven outcomes [62-66][92-98][208-212][287-291][162-177].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects calls for labeling AI-generated content and ensuring provenance to maintain public trust, echoed in UN and OECD reports on AI transparency [S42][S43][S55][S49].
One‑size‑fits‑all solutions are unsuitable; responsible AI frameworks must be industry‑specific and flexible
Speakers: Amol Deshpande, Shantari Malaya, Andy Parsons
One size doesn’t fit all; industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles
One‑size‑fits‑all solutions are unsuitable; industry‑specific approaches are required
It should not be owned by any one company; it should be standards‑based
Speakers stress that responsible AI cannot be a single monolithic solution; instead, sector-tailored templates and a “bring-your-own-AI” mindset are needed, underpinned by open, non-proprietary standards [180-183][186-188][70-71].
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with advocacy for context-specific standards rather than universal ones, highlighted in OECD and industry-led governance debates [S51][S57][S56].
Responsible AI must move from abstract principles to provable, operational practice embedded in products and processes
Speakers: Andy Parsons, Prativa Mohapatra, Sarika Guliani, Amol Deshpande
Shift to provable practice rather than abstract principles
Embedding “ART” (Accountability, Responsibility, Transparency) into product development and lifecycle
Responsible AI is a value‑driven commitment, not just a compliance checkbox
Responsible AI must be orchestrated across all AI layers with people, process, and governance
All agree that responsible AI should be concretised through baked-in product features, governance frameworks and measurable provenance, moving beyond policy statements to demonstrable practice [31-35][196-207][372-376][162-177].
POLICY CONTEXT (KNOWLEDGE BASE)
Matches the ‘principles-to-practice’ shift emphasized in multiple panels and policy roadmaps [S38][S41][S40].
Awareness and capacity building are prerequisite steps for responsible AI adoption, especially for MSMEs
Speakers: Amol Deshpande, Shantari Malaya
Responsible AI must be orchestrated across all AI layers with people, process, and governance
Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem
Both highlight that the first step is raising awareness and building skills across the ecosystem; industry bodies must help smaller firms acquire the needed capacity to implement responsible AI [322-326][241-245].
POLICY CONTEXT (KNOWLEDGE BASE)
Supported by capacity-building recommendations in AI policy roadmaps and South-South cooperation initiatives [S60][S58][S50].
Similar Viewpoints
Both assert that regulatory compliance does not hinder innovation; instead, regulation can drive the adoption of robust standards and good practices [107][109][341-354].
Speakers: Andy Parsons, Dr. Satya Ramaswamy
Regulation acts as a catalyst; standards are needed beyond mere statements of intent
Compliance with multiple global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation
Both emphasize the necessity of sector‑specific responsible AI frameworks rather than a universal solution [180-183][186-188].
Speakers: Amol Deshpande, Shantari Malaya
One size doesn’t fit all; industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles
One‑size‑fits‑all solutions are unsuitable; industry‑specific approaches are required
Both view responsible AI as a core value‑driven commitment that must be integrated into product lifecycles, not merely a compliance exercise [196-207][372-376].
Speakers: Prativa Mohapatra, Sarika Guliani
Embedding “ART” (Accountability, Responsibility, Transparency) into product development and lifecycle
Responsible AI is a value‑driven commitment, not just a compliance checkbox
Unexpected Consensus
Balancing safety guardrails with user convenience and transparency across very different domains (aviation and digital payments)
Speakers: Dr. Satya Ramaswamy, Vishal Anand Kanwati
Guardrails must be balanced with user convenience; over‑restrictive safety reduces service quality
Ensuring transparency, fairness, and explainability in AI‑driven transaction decisions
Despite operating in distinct sectors, both speakers converge on the need to calibrate AI safety controls so that they protect users without degrading experience, and to provide clear explanations for AI-driven outcomes [260-262][287-291].
POLICY CONTEXT (KNOWLEDGE BASE)
Highlights the tension between safety and usability noted in cross-sector discussions on AI governance, such as the aviation-payments analogy used in trust-infrastructure talks [S45][S54].
Overall Assessment

The discussion shows strong convergence among speakers on four pillars: (1) regulation as a catalyst paired with concrete standards; (2) transparency/provenance embedded in AI products; (3) the necessity of industry‑specific, flexible frameworks; (4) operationalising responsible AI through baked‑in product features and capacity building. These alignments cut across AI, data governance, and the enabling environment for digital development, indicating a mature consensus that can drive coordinated policy, standard‑setting and industry collaboration.

High consensus – most speakers echo each other’s positions, suggesting a unified industry stance that can facilitate rapid development of interoperable standards, supportive regulatory frameworks, and ecosystem‑wide capacity initiatives.

Differences
Different Viewpoints
Universality of open standards versus need for industry‑specific templates
Speakers: Andy Parsons, Amol Deshpande
C2PA content credentials provide an open, interoperable standard for provenance and authenticity
Need for industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles
Andy advocates a cross-industry, open and free standard (C2PA) that should be adopted universally and not owned by any single company [62-66][70-71][109]. Amol counters that a single set of guardrails cannot suit every sector, insisting that “one size doesn’t fit all” and that templates must be tailored to each industry while still resting on shared principles [180-182][328-336].
POLICY CONTEXT (KNOWLEDGE BASE)
Debate mirrors the split observed in OECD workshops where open standards were weighed against sector-tailored templates [S51][S52].
Feasibility of industry‑led governance versus necessity of mandatory regulation
Speakers: Amol Deshpande, Vishal Anand Kanwati
Industry bodies must disseminate and adopt such standards to avoid fragmented compliance
Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments
Amol emphasizes that awareness, action and industry-body partnerships can drive responsible AI without heavy regulatory imposition, proposing a demand-supply model where standards are shared through consortia [322-327][328-336]. Vishal argues that regulations are required because AI can “go berserk”, insisting that safeguards must be embedded across the ecosystem and that regulation is unavoidable [360-366].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects ongoing disagreement between industry-led and government-mandated approaches documented in recent AI governance forums [S56][S57][S46].
Perception of the trust crisis in AI‑generated content
Speakers: Andy Parsons, Dr. Satya Ramaswamy
The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves
Air India’s generative AI assistant has not produced any inappropriate response in years, showing that the trust problem can be effectively managed
Andy stresses that a trust crisis is already evident in everyday media and business contexts, citing the proliferation of synthetic content and misinformation as real operational risks [38-40][42-44]. Satya, referencing Air India’s virtual assistant, claims that in over two years the system has never given an inappropriate answer, suggesting that the crisis is not as severe as portrayed [262].
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes concerns about public distrust of AI-generated media raised in UN and OECD panels on misinformation and content labeling [S42][S43][S45].
Unexpected Differences
Severity of the AI trust crisis
Speakers: Andy Parsons, Dr. Satya Ramaswamy
The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves
Air India’s generative AI assistant has not produced any inappropriate response in years, showing that the trust problem can be effectively managed
Andy portrays a pervasive trust problem affecting media and businesses, while Satya points to his airline’s AI system that has operated without any inappropriate outputs, suggesting a much less acute crisis than Andy describes [38-40][42-44][262].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with assessments of a deepening trust crisis in AI, cited in UN Security Council remarks and academic surveys on trust erosion [S49][S45][S42].
Overall Assessment

The panel largely shares a common vision of responsible AI as essential and sees regulation, standards, and industry collaboration as necessary. However, clear points of contention arise around whether a single open standard can serve all sectors versus the need for industry‑specific templates, the extent to which regulation should drive governance versus industry‑led self‑regulation, and how serious the current trust crisis truly is.

Moderate – while there is broad consensus on goals, the differing views on implementation pathways (universal standards vs sector‑specific frameworks; industry‑led governance vs mandatory regulation; perception of trust risk) could affect the speed and coherence of policy and product roll‑outs. These divergences suggest that coordinated multi‑stakeholder dialogue will be needed to reconcile approaches before large‑scale adoption can proceed smoothly.

Partial Agreements
All three concur that regulation is required for responsible AI, but Andy frames it as a catalyst to spur standards, Vishal stresses it as a mandatory safeguard, and Shantari views it as an inevitable component that should be embraced [107][109][360-366][369-371].
Speakers: Andy Parsons, Vishal Anand Kanwati, Shantari Malaya
Regulation acts as a catalyst; standards are needed beyond mere statements of intent
Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments
Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem
All agree that smaller enterprises need support, but Amol emphasizes industry‑body distribution of templates, Prativa stresses large firms creating reusable frameworks, and Shantari calls for ecosystem‑wide assistance through bodies like FICCI [328-336][304-307][316-321].
Speakers: Amol Deshpande, Prativa Mohapatra, Shantari Malaya
Industry bodies must disseminate and adopt such standards to avoid fragmented compliance
Small and medium enterprises lack resources for dedicated AI compliance teams; large firms must create reusable frameworks
Industry bodies and ecosystems must help MSMEs adopt responsible AI frameworks
Takeaways
Key takeaways
Responsible AI must move from abstract principles to provable, operational practice across all AI layers (people, process, technology, governance).
Open, interoperable standards such as the C2PA content credentials are essential for building trust and provenance in AI‑generated content.
Embedding accountability, responsibility, and transparency (the “ART” framework) directly into product development cycles is a practical way to operationalise responsible AI.
Continuous safety monitoring, guardrails, and human‑in‑the‑loop feedback are critical for generative AI deployments, especially in high‑risk domains like aviation and payments.
Regulation is viewed as a catalyst; compliance with global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation when supported by industry standards.
Challenges include metadata stripping, low consumer awareness, and the resource gap for MSMEs to build dedicated AI compliance capabilities.
Sector‑specific implementations (Adobe’s Firefly provenance labels, RPG’s “bring‑your‑own‑AI” orchestration, Air India’s virtual assistant, NPCI’s transparent fraud‑detection) illustrate practical pathways.
Resolutions and action items
FICCI will continue to facilitate dialogue and drive collaborative actions on responsible AI among industry participants.
Panelists and their organisations committed to share frameworks, templates, and best‑practice guidance through industry bodies (e.g., C2PA, FICCI).
Adobe will promote wider adoption of C2PA credentials and embed provenance metadata in its product suite.
Air India will maintain and enhance its safety monitoring and feedback loops for the generative AI virtual assistant.
NPCI will expand its transparent AI‑driven fraud‑detection explanations and refine false‑positive rates.
RPG Group will disseminate its scalable “bring‑your‑own‑AI” governance model to other enterprises and partners.
Unresolved issues
How to effectively raise consumer awareness and UI visibility of provenance symbols at scale.
Specific mechanisms for supporting MSMEs in implementing responsible AI without the resources of large enterprises.
The precise balance between regulatory mandates and industry‑led self‑governance, especially regarding “light‑touch” versus stricter rules.
Details of human‑in‑the‑loop processes for AI systems in domains like aviation and payments were mentioned but not fully defined.
Standardisation of industry‑specific templates that satisfy diverse sector requirements while maintaining core responsible‑AI principles.
Suggested compromises
Adopt a regulatory approach that acts as a catalyst: mandatory baseline safeguards combined with flexibility for innovation (light‑touch regulation).
Combine industry‑led standards (e.g., C2PA) with regulatory requirements to avoid fragmented compliance and ensure interoperability.
Balance safety guardrails with user convenience by calibrating “safety knobs” and providing transparent fallback options (human escalation).
Leverage large enterprises to create reusable compliance frameworks that can be shared with MSMEs through industry consortia.
Thought Provoking Comments
2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for innovation.
Sets a concrete near‑term horizon, turning responsible AI from a vague aspiration into an imminent business imperative and innovation driver.
Created urgency that framed the rest of the discussion; panelists referenced the 2026 timeline when talking about upcoming regulations (EU AI Act, US California law, Indian IT rules) and the need to move from principles to practice.
Speaker: Andy Parsons
Can your systems actually prove that you have been responsible with AI, and how do you go about doing that?
Shifts the conversation from abstract ethics to measurable, auditable evidence of responsibility, introducing the notion of ‘provable practice’.
Prompted multiple speakers to discuss provenance, standards, and concrete mechanisms (C2PA credentials, product‑level metadata) that can demonstrate compliance, steering the dialogue toward technical solutions.
Speaker: Andy Parsons
We decided five years ago that responsible AI via content transparency wasn’t a feature that could be grafted onto our products… it had to be baked into the tools at their very core.
Highlights a strategic product‑development choice—embedding responsibility at the architecture level rather than as an after‑thought—offering a model for other enterprises.
Set the stage for the later discussion of the C2PA standard and inspired other panelists (e.g., Prativa) to cite how their own products embed provenance, reinforcing the theme of deep integration.
Speaker: Andy Parsons
The C2PA content credentials provide transparent context about a piece of media… an open, cross‑industry standard that anyone can adopt for free.
Introduces a tangible, industry‑wide solution that addresses the earlier call for provable responsibility and emphasizes openness and interoperability.
Led to references about adoption challenges (metadata stripping, consumer awareness) and reinforced the argument that standards—not just principles—are essential for scalable trust.
Speaker: Andy Parsons
One size doesn’t fit all. It’s a ‘bring‑your‑own‑AI’ scenario in every function.
Challenges the notion of a single, monolithic governance framework, emphasizing the need for flexible, context‑specific approaches across diverse business units.
Shifted the tone from a uniform solution to a discussion about modularity and the importance of tailoring guardrails, prompting other speakers to talk about industry‑specific templates and the role of ecosystem bodies.
Speaker: Amol Deshpande
Firefly embeds a ‘nutrition‑label’ style provenance; every output carries that label, guaranteeing compliance and accountability.
Provides a concrete product example that translates abstract principles (accountability, transparency) into a user‑facing feature, making the concept tangible.
Deepened the practical dimension of the conversation, leading to further examples (Acrobat Assistant) and reinforcing the idea that responsibility can be built into the user experience.
Speaker: Prativa Mohapatra
We use generative AI to watch the performance of our generative AI virtual assistant, balancing the safety knob with customer convenience.
Introduces the meta‑use of AI for self‑monitoring, illustrating a sophisticated, real‑world guardrail mechanism that addresses both safety and user experience.
Added a layer of technical complexity, prompting the panel to consider AI‑in‑AI oversight as part of responsible AI strategies and influencing the later discussion on regulation as a catalyst rather than a constraint.
Speaker: Dr. Satya Ramaswamy
We built a small language model that can explain to a customer why a transaction was declined, giving transparent, real‑time reasons for fraud‑related decisions.
Shows a concrete, consumer‑centric implementation of transparency in a high‑stakes domain (payments), extending the provenance concept beyond media to financial services.
Expanded the conversation to the payments ecosystem, illustrating how the same principles can be operationalized across sectors and reinforcing the need for explainability in AI decisions.
Speaker: Vishal Anand Kanvaty
Regulation is a catalyst for good practices; compliance does not constrain Indian innovation.
Reframes regulation from a restrictive force to an enabling one, addressing a common fear among enterprises and aligning with India’s rapid digital growth.
Shifted the tone of the regulatory debate, influencing later remarks (Vishal, Amol) that while regulation is inevitable, it can coexist with innovation and industry‑led standards.
Speaker: Dr. Satya Ramaswamy
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the dialogue from abstract ethics to concrete, actionable frameworks. Andy Parsons’ framing of a 2026 deadline and the demand for provable responsibility set a sense of urgency and introduced the need for measurable standards, which anchored the rest of the conversation. Subsequent comments—especially the introduction of the C2PA open standard, Amol’s ‘bring‑your‑own‑AI’ flexibility, Prativa’s product‑level provenance labels, Satya’s meta‑AI monitoring, and Vishal’s transaction‑explanation model—provided tangible examples that illustrated how principles can be embedded across industries. These insights prompted participants to explore implementation challenges, the role of regulation as an enabler, and the importance of industry collaboration. Collectively, the highlighted comments shaped the session into a forward‑looking, solution‑oriented exchange, emphasizing that responsible AI is not merely a compliance checkbox but a strategic, technically grounded capability that can be scaled across India’s diverse enterprise landscape.

Follow-up Questions
How can organizations demonstrably prove that they are using AI responsibly, and what metrics or evidence are needed to show compliance?
Establishing verifiable proof of responsible AI is essential for building trust, meeting regulatory requirements, and differentiating compliant enterprises.
Speaker: Andy Parsons
What are the implementation costs and ongoing operational expenses associated with deploying responsible AI practices?
Understanding financial implications helps businesses budget, justify investments, and assess ROI for responsible AI initiatives.
Speaker: Andy Parsons
How can consumer awareness of content provenance symbols be increased, and what UI/UX designs are most effective for displaying these indicators?
Widespread user recognition of provenance marks is critical for the success of transparency standards and for empowering end‑users to make informed choices.
Speaker: Andy Parsons
What strategies can address the uneven adoption of provenance standards, especially when platforms (e.g., social media) strip metadata?
Ensuring consistent preservation of provenance data across all distribution channels is necessary to maintain the integrity of the transparency ecosystem.
Speaker: Andy Parsons
How can a viable business case be built for provenance and transparency technologies when they do not directly generate revenue?
Demonstrating economic value or indirect benefits (e.g., risk reduction, brand trust) is needed to encourage enterprise investment in responsible AI tools.
Speaker: Andy Parsons
What best‑practice frameworks enable a “bring‑your‑own‑AI” approach that remains scalable, safe, and governed by effective guardrails across diverse functions?
Enterprises need reusable, adaptable models for integrating third‑party AI while maintaining compliance and risk controls.
Speaker: Amol Deshpande
What training and skill‑development programs are most effective for upskilling the entire value chain on responsible AI principles?
People are a critical stakeholder; systematic education ensures consistent application of responsible AI across an organization.
Speaker: Amol Deshpande
How can explainable AI be integrated into payment‑transaction systems so that users receive clear, understandable reasons for declines or fraud flags?
Transparency in financial decisions builds trust and reduces customer friction, especially in high‑volume digital payment ecosystems.
Speaker: Vishal Anand Kanvaty
What methods can balance false‑positive fraud detection with fairness, ensuring legitimate transactions are not unduly blocked while still catching fraud?
Optimizing detection accuracy is vital for user experience, financial inclusion, and regulatory compliance in payment systems.
Speaker: Vishal Anand Kanvaty
How can responsible‑AI frameworks and tooling be made affordable and accessible for MSMEs that lack large legal or compliance teams?
Ensuring smaller businesses can adopt responsible AI prevents a widening gap between large enterprises and the broader market.
Speaker: Prativa Mohapatra
What mechanisms can industry bodies use to disseminate responsible‑AI standards and templates effectively across varied sectors and company sizes?
Coordinated industry‑wide adoption accelerates standardization and reduces duplication of effort.
Speaker: Amol Deshpande
How can global AI regulatory frameworks (EU AI Act, UNESCO, OECD) be harmonized with India’s emerging policies to create a coherent compliance landscape?
Alignment reduces regulatory friction for multinational operations and ensures Indian innovations remain globally competitive.
Speaker: Dr. Satya Ramaswamy
What metrics and evaluation methods should be used to assess the effectiveness of AI governance frameworks and provenance standards?
Quantitative assessment is needed to track progress, demonstrate impact, and guide continuous improvement.
Speaker: General (multiple participants implied)
What is the optimal balance between industry‑led self‑regulation and formal regulatory intervention for AI, especially in high‑scale ecosystems like payments?
Clarifying the roles of self‑governance versus law helps shape policy that protects users while fostering innovation.
Speaker: Vishal Anand Kanvaty
How can a light‑touch regulatory approach be designed that still ensures safety and fairness without stifling AI innovation?
Finding the right regulatory intensity is crucial for encouraging rapid AI adoption while safeguarding public interest.
Speaker: Sarika Guliani
What design principles ensure effective human‑in‑the‑loop mechanisms for AI systems operating at airline‑scale, balancing safety with customer convenience?
Human oversight remains essential in safety‑critical domains; research is needed on scalable, real‑time intervention models.
Speaker: Dr. Satya Ramaswamy
How does the presence of provenance symbols affect user trust and behavior across different cultural and linguistic contexts in India?
India’s diverse user base may respond differently; studying impact informs culturally appropriate rollout strategies.
Speaker: Andy Parsons (implied)
What are the technical challenges and solutions for ensuring interoperability of provenance standards across hardware manufacturers (cameras, smartphones) and software platforms?
Cross‑industry compatibility is key for a universal trust layer; research can identify standards gaps and integration pathways.
Speaker: Andy Parsons

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Responsible AI in India Leadership Ethics & Global Impact part1_2

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with the moderator emphasizing that responsible AI, grounded in trust, transparency and accountability, is now a foundational requirement for Indian enterprises [1-6]. Andy Parsons of Adobe framed the discussion as a shift from abstract AI principles to “provable practice,” noting that 2026 will see responsible AI become both a regulatory duty and a business opportunity [33-34][20-21]. He described Adobe’s leadership in the Coalition for Content Provenance and Authenticity (C2PA), an open, cross-industry standard that embeds provenance metadata directly into media files, enabling anyone to verify how content was created [54-62]. The C2PA’s core principles of transparency, provenance, accountability and inclusivity are presented as “nutrition labels” for digital content, allowing users to trace the models, tools and data behind each asset [74-80][81-84]. Andy also warned of uneven adoption, metadata stripping by platforms, low consumer awareness and the difficulty of building a profitable business case for provenance, arguing that standards, not merely principles, are needed to move forward [90-99][108-110].


In the panel, Amol Deshpande highlighted that responsible AI must be orchestrated across all five AI layers, involve people, processes and technology, and cannot rely on a single “one-size-fits-all” solution, coining a “bring-your-own-AI” approach [162-166][177-180]. Prativa Mohapatra explained Adobe’s internal “ART” framework (accountability, responsibility, transparency) and gave concrete examples such as Firefly, which tags generated outputs with “nutrition” metadata, and Acrobat Assistant, which ensures traceable, lawful document creation [197-199][209-214][224-228]. She stressed that legal and compliance teams must redesign their workflows to embed AI governance throughout the input-output lifecycle; otherwise, enterprises risk falling short of future regulatory expectations [235-238].


Satya Ramaswamy described Air India’s generative-AI virtual assistant that has handled 13.5 million queries with a 97% autonomous success rate, while continuous safety monitoring and customer feedback loops prevent jailbreaks and inappropriate responses [257-263][264-268]. He noted that partnerships with firms like Adobe provide “prompt firewalls” and indemnities that boost confidence in managing AI risk at airline scale [269-271]. Vishal Anand Kanvaty of NPCI emphasized transparency for declined transactions, using a language model to explain reasons to users, and argued that regulatory safeguards are essential to prevent false-positive fraud decisions and maintain trust in the payments ecosystem [293-298][370-376].


Across the discussion, participants agreed that industry-led standards, cross-sector collaboration and regulatory frameworks are all necessary to translate responsible-AI principles into operational practice, especially for MSMEs that lack internal resources [332-340][379-383]. Sarika Guliani of FICCI reiterated that responsible AI is a commitment to shared human values and that the “people, planet, progress” agenda must guide future innovation, with FICCI pledging to advance the dialogue into concrete action [379-383][389-390]. Overall, the dialogue underscored that moving from principle to practice requires open standards, robust governance, and coordinated regulation to ensure trustworthy AI deployment across India’s diverse enterprise landscape [108-110].


Keypoints


Major discussion points


From principles to provable practice – The panel framed responsible AI as moving beyond abstract ethics to demonstrable compliance, driven by new regulations such as the EU AI Act, California law and India’s IT rules, and positioning it as both a leadership imperative and a regulatory requirement [30-33][105-110][108-113].


Open, cross-industry standards for transparency – Adobe highlighted the C2PA (Coalition for Content Provenance and Authenticity) as an open, free standard that embeds provenance metadata directly into media assets; this model is being baked into Adobe products (e.g., Firefly, Acrobat) to give enterprises verifiable “nutrition labels” for AI-generated content [54-66][61-70][209-219].


Implementation challenges and governance needs – Speakers noted uneven adoption, metadata stripping by platforms, low consumer awareness, and the difficulty of building a business case for provenance. They stressed the necessity of robust governance, guardrails, and a shift from “check-list compliance” to operational frameworks [90-99][105-110][158-166].


Sector-specific responsible-AI deployments – Real-world examples were shared: Air India’s generative-AI virtual assistant that balances safety knobs, continuous monitoring, and human-in-the-loop escalation [257-270]; NPCI’s transparent fraud-prevention model that explains transaction declines and leverages AI while insisting on regulatory safeguards [286-301][370-376]; and RPG’s “bring-your-own-AI” approach that stresses orchestration across data, people, process and technology layers [162-180][185-190].


Overall purpose / goal


The session aimed to translate high-level responsible-AI principles into concrete, enterprise-ready practices for Indian corporations. By showcasing standards, regulatory trends, and concrete industry pilots, the discussion sought to equip leaders with actionable frameworks and to foster a collaborative ecosystem that can scale responsible AI across sectors.


Overall tone


The conversation began with an optimistic, forward-looking tone, emphasizing opportunity and collaboration. As speakers moved into challenges-such as uneven adoption, regulatory pressure, and implementation costs-the tone became more cautionary yet remained constructive, focusing on solutions and shared responsibility. The closing remarks returned to a hopeful, commitment-driven tone, urging continued dialogue and collective action.


Speakers

Vishal Anand Kanvaty


– Role/Title: Chief Technology Officer, National Payments Corporation of India (NPCI)


– Area of Expertise: Digital payments, AI-driven fraud detection and responsible AI governance [S1]


Sarika Guliani


– Role/Title: Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI


– Area of Expertise: AI policy, industry standards, responsible AI implementation [S3]


Dr. Satya Ramaswamy


– Role/Title: Chief Digital and Technology Officer, Air India Limited


– Area of Expertise: Aviation technology, AI-enabled customer service, safety-critical AI systems [S5]


Shantheri Mallaya


– Role/Title: Editor, Economic Times (Panel Moderator)


– Area of Expertise: Journalism, technology policy, AI ethics and industry discourse [S8]


Prativa Mohapatra


– Role/Title: Vice President and Managing Director, Adobe India


– Area of Expertise: Product governance, responsible AI, content authenticity and AI-driven creative tools [S11]


Andy Parsons


– Role/Title: Global Head for Content Authenticity, Adobe (runs the Content Authenticity Initiative)


– Area of Expertise: Content provenance, AI transparency, standards development (C2PA) [S13]


Amol Deshpande


– Role/Title: Group Chief Digital Officer and Head of Innovation, RPG Group


– Area of Expertise: Digital transformation, enterprise AI strategy, responsible AI implementation [S15]


Moderator


– Role/Title: Session Moderator (unnamed)


– Area of Expertise: Event facilitation, AI discussion moderation [S19]


Additional speakers:


Nita – mentioned in closing remarks; no role or expertise specified in the transcript.


Nanya – mentioned in closing remarks; no role or expertise specified in the transcript.


Full session report: Comprehensive analysis and detailed insights

The session, presented by Adobe in association with FICCI, opened with moderator Shantheri Mallaya (Economic Times) welcoming participants to “Responsible AI from Principles to Practice in Corporate India.” She framed trust, transparency and accountability as “foundational, not optional” for India’s accelerating digital transformation [5-6].


Andy Parsons, Global Head for Content Authenticity at Adobe, set the tone by declaring 2026 the year responsible AI becomes both a regulatory duty and a strategic opportunity. He highlighted that the EU AI Act’s enforcement provisions take effect in August, that California’s first AI law is already in force, and that India’s new IT rules on SGI are being implemented, shifting the business question from “should we be responsible?” to “can you prove you are responsible?” [24-33]. Parsons introduced Adobe’s leadership in the Coalition for Content Provenance and Authenticity (C2PA) and its content credentials, an open, free, cross-industry standard that embeds provenance metadata directly into media files, enabling anyone to verify a piece of content’s origin, model and tools [55-62]. He described this “nutrition-label” approach as essential for India’s massive digital population, where synthetic content and AI-generated misinformation pose real operational risks. He also warned of challenges: social-media platforms often strip metadata [89-92], consumer awareness of provenance symbols remains low [95-99], and building a profitable business case for provenance remains challenging [108-110]. Consequently, he argued for standards-based infrastructure rather than mere principles, and likened regulation to a catalyst that pushes good practice without being punitive [105-108].
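Conceptually, a content credential binds a record of origin (the generating tool, model and edit actions) to the exact bytes of an asset, so later tampering is detectable. The sketch below illustrates that binding in Python. It is a simplified stand-in, not the real C2PA format: the actual specification defines cryptographically signed binary (JUMBF) manifests with standardized assertion labels, and every field name here is an illustrative assumption.

```python
import hashlib


def build_manifest(asset_bytes, generator, model=None, actions=None):
    """Build a simplified, C2PA-inspired provenance manifest.

    Illustrative only: the real C2PA spec defines signed binary (JUMBF)
    containers and its own assertion labels, not plain Python dicts.
    """
    manifest = {
        "claim_generator": generator,  # tool that produced the asset
        "assertions": [
            {"label": "actions", "data": {"actions": actions or []}},
        ],
        # The hash binds the manifest to the exact bytes of the asset,
        # so any later modification of the content is detectable.
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
    }
    if model:  # record the generative model, if one was involved
        manifest["assertions"].append(
            {"label": "generative_model", "data": {"name": model}})
    return manifest


def verify(asset_bytes, manifest):
    """Return True if the asset still matches its recorded hash."""
    return manifest["asset_hash"] == hashlib.sha256(asset_bytes).hexdigest()
```

A real implementation would additionally sign the manifest with the producer’s certificate so that the claim itself, not just the content, cannot be forged.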


After the opening, Mallaya positioned the panel as a deep dive into translating responsible-AI principles (fairness, accountability, transparency, privacy and inclusivity) into concrete enterprise strategies [144-150].


Amol Deshpande, Chief Digital & Innovation Officer, RPG Group, responded that responsibility must be orchestrated across the five AI layers (data, model, inference, deployment, monitoring) and cannot rely on a single solution. He advocated a “bring-your-own-AI” approach, where each function selects appropriate guardrails while the organisation supplies a scalable, safe environment and governance templates adaptable to diverse business units [162-166][177-184]. He emphasized people as the critical stakeholder, calling for extensive up-skilling to embed human judgement into increasingly complex generative and agentic AI systems [169-176].


Prativa Mohapatra, Vice-President & Managing Director, Adobe India, outlined Adobe’s internal ART (Accountability, Responsibility, Transparency) philosophy and how it is baked into product development pipelines through hundreds of validation steps. Across Adobe’s portfolio, including Firefly and the Acrobat Assistant, every AI-generated output carries a content-credential tag that confirms licensing, data compliance and model traceability, shielding enterprises from legal liability; this in turn requires legal and compliance teams to redesign workflows to embed AI governance throughout the input-output lifecycle [209-218][224-232][235-238].


Satya Ramaswamy, Chief Digital and Technology Officer, Air India, illustrated a sector-specific deployment: a generative-AI virtual assistant launched in May 2023 that has handled 13.5 million customer queries with a 97% autonomous success rate. The system balances a “safety knob” that prevents jailbreaks and inappropriate responses with a seamless user experience, using generative AI both to serve customers and to monitor its own performance. He likened the design to a safety-critical autopilot with a red-button override, emphasizing human-in-the-loop oversight and the “prompt firewalls” provided through Adobe partnerships that bolster risk management without stifling innovation [257-274][332-336].


Vishal Anand Kanvaty, CTO, National Payments Corporation of India (NPCI), described AI-driven fraud detection that maintains fairness. NPCI began with a low false-positive threshold and, through data-driven model refinement and industry collaboration, achieved higher accuracy. A small language model now explains to users why a transaction was declined, delivering transparency that builds trust in the payments ecosystem. He stressed that regulatory safeguards are indispensable to prevent AI from “going berserk” and referenced the RBI’s responsible-AI framework as a guiding standard [286-293][298-302][370-376].
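The decline-explanation idea can be pictured as a thin layer that maps opaque network decline codes to plain-language messages. The sketch below is a rule-based stand-in for the small language model described in the session; the codes, wording and function name are illustrative assumptions, not NPCI’s actual system.

```python
# Illustrative decline codes; real payment networks define their own sets.
DECLINE_REASONS = {
    "51": "insufficient funds in the account",
    "59": "the transaction was flagged by automated fraud checks",
    "91": "the issuing bank did not respond in time",
}


def explain_decline(code: str, amount: str, merchant: str) -> str:
    """Turn an opaque decline code into a plain-language message.

    A production system of the kind described in the session would route
    this through a small language model and localise the wording; here a
    simple template stands in for that step.
    """
    reason = DECLINE_REASONS.get(code, "an unspecified processing issue")
    return (f"Your payment of INR {amount} to {merchant} was declined "
            f"because of {reason}. No money has been deducted.")
```

The point of the design is the same either way: the user receives a reason, not just a failure, which is what builds trust in the payments flow.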


Points of Agreement

* All speakers endorsed the need for transparent provenance of AI-generated content – via C2PA credentials (Andy) [55-62], Adobe’s ART-driven content-credential tags (Prativa) [209-218], and NPCI’s transaction-explanation model (Vishal) [286-293].


* They concurred that open, standards-based infrastructure and reusable frameworks are essential for scaling responsible AI, with industry bodies such as FICCI, C2PA and RBI playing pivotal dissemination roles [66-70][297-304][332-340][344-347].


* Regulation was uniformly seen as a catalyst that must coexist with innovation (Andy) [105-108].


* Both Satya and Amol highlighted the critical importance of human-in-the-loop oversight and adjustable guardrails for safety-critical applications [180-182][360-362].


Points of Disagreement

1. Regulation intensity – Vishal argued that mandatory safeguards are essential to prevent harmful AI behaviour [370-376]; Sarika Guliani cautioned that regulation should be balanced and proportionate [379-382]; Andy positioned regulation as a catalyst that encourages good practice without being punitive [105-108].


2. Scope of standards – Andy promoted a single, open C2PA standard as the foundation for provenance [55-62]; Amol counter-argued that “one size does not fit all”, advocating sector-specific templates and a “bring-your-own-AI” model [168-180]; Prativa warned that without free, universally accessible frameworks the divide between large enterprises and MSMEs would widen [297-304].


3. Primary driver of adoption – Amol emphasized an awareness → action → demonstration pathway, with industry bodies disseminating frameworks [332-340]; Vishal insisted that regulation is indispensable for ecosystem safety [370-376]; Sarika stressed that responsible AI is a commitment to shared human values, not merely a compliance checkbox, and should be guided by the “people, planet, progress” agenda [383-389].


Key Take-aways

– Responsible AI must move from high-level principles to provable, operational practice.


– Transparent provenance, enabled by open standards such as C2PA, is a cornerstone for trust.


– Effective governance requires coordinated people, process, technology and industry-body layers, not a simple checklist.


– Emerging regulations (EU AI Act, India’s IT rules, state-level AI laws) act as catalysts that should coexist with innovation.


– Sector-specific pilots (Air India’s AI assistant, NPCI’s fraud-explanation service, RPG’s flexible governance, Adobe’s ART-driven products) demonstrate practical pathways.


– Without open, free frameworks, responsible AI risks becoming a luxury for large firms, leaving MSMEs behind.


Closing Remarks

Sarika Guliani (FICCI) concluded that responsible AI is a commitment to shared human values rather than a mere compliance checkbox, and that the “people, planet, progress” agenda must guide all technological innovation. FICCI pledged to continue the dialogue and translate the insights into concrete actions for the Indian ecosystem [383-389][389-390].


The moderator thanked the panelists and the audience, signalling that the conversation will move from discussion to implementation.


Session transcript: Complete transcript of the session
Moderator

I’d like to welcome you all to this session titled Responsible AI from Principles to Practice in Corporate India presented by Adobe in association with FICCI. India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and productivity. But the real differentiator, is it about how quickly do we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite Andy Parsons, our Global Head for Content Authenticity at Adobe.

Andy, over to you.

Andy Parsons

Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.

I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for every one of you in this room. I’m going to talk about that in a minute. This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.

And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week and certainly will continue to change this year, from should we be responsible with AI? I think that debate is well settled at this point. But can your systems actually prove that you have been responsible with AI, and how do you go about doing that? And for all of us, what does it cost in terms of implementation and day-to-day usage? So we want to position responsible AI today as a leadership and operating discipline rather than a regulatory obligation, although in 2026, I think it will be both.

So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across a cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Prativa in a moment.

But it’s made the trust problem absolutely impossible to address. So I think every enterprise in this room now produces or consumes AI-generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that. Hundreds of millions of people consuming digital content every day.

In this environment, synthetic content and AI-generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to provide some leadership in the C2PA, which hopefully many of you have heard about this week, which is the Coalition for Content Provenance and Authenticity. And our sort of piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing, totally free.

So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, digital experience products, but had to be baked into the tools kind of at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others. And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have encountered a content credential.

It will provide transparent context about a piece of media, whether it’s video, audio, or an image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross-industry coalition that includes all the companies I mentioned, Meta, camera manufacturers like Sony and Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others. And the goal is for an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company.

It should be standards-based. It should not be proprietary, but available to everyone. And this shared philosophy, I think, is especially important here in India. And it should be conveyed by working code, and products that leverage that working code, not theory, slides in a deck, and statements on a website. So what are the principles that the Content Authenticity Initiative and the C2PA reflect? Transparency: provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used. Simple ideas, like knowing that a photograph is actually a photograph and not generated.

These are simple ideas. I’d say we’re well overdue in having this ability, but we need it now more than ever. Accountability: you can trace those AI models, understand what was used and how it was used, and thereby understand the provenance if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food and decide if it is healthy or not healthy for your children. No one is going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.
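The "genealogy tree" idea described above can be illustrated with a small sketch. This is not the real C2PA manifest format, which is a signed binary structure embedded in the media file; the class and field names here are invented purely for explanation, showing how each edited or generated asset records the tool and model that produced it and points back to its ingredient.

```python
# Illustrative sketch only: a simplified, hypothetical model of how
# C2PA-style provenance manifests chain together. All names invented.
from dataclasses import dataclass
from typing import Optional, List


@dataclass
class Manifest:
    asset_name: str                      # e.g. "photo.jpg"
    tool: str                            # product that produced this version
    ai_model: Optional[str] = None       # generative model used, if any
    parent: Optional["Manifest"] = None  # the ingredient this was derived from


def genealogy(m: Manifest) -> List[str]:
    """Walk the provenance chain back to the original capture."""
    steps = []
    while m is not None:
        label = f"{m.asset_name} via {m.tool}"
        if m.ai_model:
            label += f" (model: {m.ai_model})"
        steps.append(label)
        m = m.parent
    return steps


# A photo captured on camera, then edited, then partially AI-generated:
photo = Manifest("capture.jpg", tool="Camera firmware")
edited = Manifest("edited.jpg", tool="Photo editor", parent=photo)
final = Manifest("final.jpg", tool="Image generator",
                 ai_model="gen-model-v1", parent=edited)

for step in genealogy(final):
    print(step)
```

Walking the chain answers exactly the questions in the talk: what models made this content, what products were used, and whether a photograph is actually a photograph.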

Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India can apply the same kind of provenance, at the same zero cost, as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption, AI transparency, and responsible AI will present you, and I’m sure you’ll hear more about that from our esteemed panel. Adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here; we’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin, because you probably haven’t seen it.

Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are phones and cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I have often said that doing something that helps preserve democracy and democratic discourse is maybe not a good way to make money. I’m not sure if that’s true. But it is critically important. And now we’re seeing that change, as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.

What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. A responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference we’re here to talk about today: the difference between principle and practice. I think you need cross-industry infrastructure. The techniques you use for responsible AI should be interoperable, open, and standardized. And you have a long track record here in India of mobilizing things like the UPI payment infrastructure, which no single bank or government agency could have built alone, but which required massive-scale cooperation for openness, standards, and, most importantly, interoperability.

And last, enterprises, like many of yours in this room, that are willing and excited to go first, that really look at transparency and AI responsibility as an opportunity, not merely a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again. In 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things. And that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries: we have representatives from all of them on our excellent panel today.

So we’re fortunate to have leaders from Air India, NPCI, the RPG Group, and Adobe, each of whom is navigating and translating the sort of remarks that I’ve made in different ways for different outcomes, and providing, I would say, exemplary ways to find their pathway through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantheri Mallaya, editor at the Economic Times, who covers these infrastructure and societal breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me, and let’s get on to our panel. Shantheri.

Shantheri Mallaya

Thank you so much, Andy. That was fantastic, and it set the context very rightly for the discussion coming up. So a warm welcome once again from my end. My name is Shantheri Mallaya. I’m editor at the Economic Times, welcoming you all to this panel, right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI at a very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI across industry, infrastructure, and policy perspectives, it becomes important to know what some premier leaders in the country are thinking about this. So this dialogue will examine responsible AI and how it’s really going to shape, and reshape, enterprise strategy.

So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanvaty, Chief Technology Officer, National Payments Corporation of India (NPCI). We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe India. This is a fantastic lineup, and we shall get some very sharp insights from our panelists over the next half hour or so. At the very outset: building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether fairness, accountability, transparency, privacy, or inclusivity, are realistically translated into enterprise strategy frameworks, and how we are going to go about it.

Right. So, Amol, let me call you into the discussion. Warm welcome. I’m keeping a tight watch on the timer right behind us; as we know, this is a summit of scale, and we really need to help the organizers clock good time. So, Amol, very quickly, as I invite you into this discussion: as part of the RPG Group, you represent enterprises that are deploying AI at scale, and many more that are just finding their footing as well. So there’s a huge spectrum and diversity in the kind of organizations you represent. Two things for you, very quickly. One is: in a large multi-business group such as the RPG Group, how are you preventing responsible AI from becoming just a mere centralized compliance exercise, or, on the flip side, a fragmented, business-unit-wise checklist?

So there are two risks that can happen in a group, in a conglomerate: over-centralized or fragmented. So how are you really looking at the balance here? And how do you see your role in an industry body as well? All yours.

Amol Deshpande

Thank you, Shantheri. I’m very happy to be here with these esteemed panelists. You ask a very pertinent question. It’s a to-be-or-not-to-be kind of scenario when it comes to AI, but that "not to be" is not really a choice. Andy mentioned responsible AI, and I would take a little stab at peeling back and looking at where responsible AI comes from when it comes to industries. It comes across all five layers of AI. When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using them for, the responsibility needs to be there at every layer.

It’s not one or the other; it has to be an orchestration of all of them. So far, AI in its very nascent forms had been a thing of centres of excellence, trying use cases and seeing what happens, but now it has come to scale. And when it comes to enterprises, and manufacturing enterprises, where we have a significantly higher share as a consumer of AI technologies, there is a very clear-cut view on how it is to be done. You need to provide the playground for the enterprise to operate and function with agility. The other part is about people. People are a very, very important stakeholder in the whole thing.

We are moving from generative AI to more complex AI/ML scenarios and agentic AI, so people are a very important aspect of it. The choice still remains with humans. Awareness is important, and enterprises like us spend a significant amount of time and effort building those skill sets across the value chain of all the people involved. Last but not least is the process and governance part. It’s more about guiding principles, which need to be given so that they create opportunity. If one can put it that way, it will be more of a bring-your-own-AI kind of scenario in every function.

You cannot provide one solution; one size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment protected with guardrails is a key thing for us, along with orchestration and getting to scale. Those templates are being exercised and practised within a very diverse group like RPG ourselves, and then they can be deployed across multiple businesses.

Shantheri Mallaya

Absolutely. So as you said, one size doesn’t fit all, and I liked your coinage of bring-your-own-AI. So let me quickly bring in Prativa here. Welcome, Prativa. You may just have to switch that on. So, Adobe has consistently positioned responsible AI as a product philosophy and a trust commitment, a fairly large commitment. How do you look at all these principles manifesting, or panning out, in terms of operationalizing them among all your product teams and sending them out as a strong positioning internally? And at the same time, as someone who’s led a lot of industry conversations, where do you see enterprises struggling to get these things right?

So all yours here.

Prativa Mohapatra

Okay, thank you. So I think Andy set the context. And since we are here not to learn about the principles but the practices, I think everybody should go back with certain practices. So the first practice of AI governance, which we practice, is ART: accountability, responsibility, and transparency. If every person goes back to their organization and talks about ART, which is our philosophy, that’s practicing philosophy number one. And we actually have been doing this for our own products for a while now. And of course, we have been in the business of content for a very, very long time, and now that same content is becoming the currency everybody’s debating. So our principles have been there for a while.

But how is it actualized? Let me say how it’s translated into our products. And by the way, it’s in our products and it’s in our methodologies. Every new product that we have goes through a very strong, secure methodology with hundreds of steps inside it. So the principles are embedded into how we create stuff. But a couple of examples. Firefly, which is our gen AI tool, actually embeds what Andy described: those content credentials, the nutrition labels. Anything that is generated out of this product will have that nutrition label. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law and you will not be getting into any liability issues.

Because how you do it is by what you feed in. AI is all about the input and the output. The input has to be something which will not land you in trouble; you cannot take somebody else’s data. So here, everything is licensed. It goes into the models, and then you have to test the output that comes out. With that output, will we be accountable? Will we be responsible in showing the transparency of how this was created? So I think that loop has to be created in using any AI. Firefly is an example. Let me talk about Acrobat, which everybody has. I’m sure 100% of you have PDF files on your phones or on your machines.

So Acrobat has this new feature called Acrobat Assistant. It is agentic, but we have so many chatbots in the market. When you come to an assistant like Acrobat Assistant, it follows the same principles with which PDF was created, so everybody is confident when using it. Today, you would have read in the papers recently that the Supreme Court was very worried that certain lawyers had filed petitions which referenced cases that do not exist, or stated certain laws which are fictitious. So imagine somebody has created certain content using sources which were not authentic. Now, if you use Acrobat-kind products for that, you feed the data, or feed files, from your own machine.

So you’re confident that with what comes out of it, you can go back to the source. Wherever there is this usage of high-stakes, enterprise-grade output, you have to look at this input-output process and follow the philosophies within it. And for every enterprise doing that today: Amol talked about people, process, technology. I’m sure every organization today has a legal team and a compliance team, but these teams have to re-equip and redesign themselves to talk about AI compliance. Enterprises do business strategy, they have ethical strategies, and then they have regulatory compliance. For anything that you do in AI, ensure that you tick all three.

If you miss any one, you might not be ready for the future. So that’s how I see it.

Shantheri Mallaya

Absolutely. So I guess the thread of most of what AI now entails is: are we not moving fast enough, or are we moving so fast that we’re not really able to own the operational consequences of what we’re setting out to do? Great point on that, Prativa. I’ll circle back to you, time permitting; let’s see how best we can get back. Dr. Satya, calling you in here. So, aviation: volumes, landscape, scale, you name it, it’s all there. How are you really looking at balancing AI-driven innovation, where you’re looking at regulation, accountability, and operational efficiency, while at the same time you cannot really compromise on user and customer experience?

How do these things really fall in place in terms of vision and metrics?

Dr. Satya Ramaswamy

Thank you, Shantheri. Since the audience is international, a real quick introduction about Air India. Air India is India’s national flag carrier. We operate about 300 aircraft, and we carry more than 100,000 customers a day. And we have a few hundred airplanes on order, so once they are delivered, we will be one of the biggest airlines in the world, on the scale of one of the three large American carriers. So we are building it up to an airline of scale, and that brings about the very interesting challenges we talked about. Let me illustrate the way we handle it with one of our own examples in generative AI. In May of 2023, we launched the global airline industry’s very first generative AI virtual assistant, out of India.

It was a global first in the whole airline industry. Today, it has handled about 13.5 million queries from customers, about 40,000 queries a day, and it operates at roughly one-hundredth the per-query cost of a contact center. If you look at customer preferences over the two and a half years we have been operating it, facing all the challenges you mentioned: 50% of the contact volume goes to the contact center, because those customers want to talk to a human agent. The remaining 50% comes to AI.g, which handles 97% of those queries autonomously; only 3% are escalated further to an agent. So it’s a pretty high success rate, and we faced this challenge from day one. We started working on it in November 2022, when the Azure OpenAI services became available, and the whole approach to responsibility and safety has evolved over time. If you dial the safety knob up too much, it is an inconvenience to the customer; we practically cannot answer any question, because customers are always changing the way they ask things, and we have to be very flexible. Clearly, generative AI takes us a large step towards that.

At the same time, we don’t want any jailbreak to happen. We don’t want prompt injection to happen. We don’t want anything inappropriate to happen. So we are watching the performance of the gen AI virtual assistant, AI.g as we call it, all the time. In fact, we use generative AI to watch the performance of the generative AI chatbot, and we have also given a voice to the customer: at the end of the day, when we send a response, we ask the customer, did it answer your question? And we allow them to give their reactions. Is it appropriate or inappropriate? And thankfully, over the last two and a half years, it has not answered one single question in an inappropriate fashion, because we have embedded all the safety procedures deep into the way we handle it.
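The "safety knob" and human-in-the-loop escalation described above might be sketched as follows. This is a hypothetical illustration, not Air India's implementation: the injection patterns, threshold, and function names are all invented. The idea is simply that a stricter knob routes more queries to a human agent, while obvious injection attempts are blocked outright.

```python
# Hypothetical guardrail pipeline sketch: a crude prompt-injection screen
# plus a tunable confidence threshold ("safety knob") for escalation.
import re

# Toy patterns for illustration only; real systems use far richer checks.
INJECTION_PATTERNS = [r"ignore (all|previous) instructions", r"system prompt"]


def handle_query(text: str, model_confidence: float, safety_knob: float = 0.7):
    """Return (decision, reason): 'blocked', 'escalate', or 'answer'."""
    lowered = text.lower()
    for pat in INJECTION_PATTERNS:
        if re.search(pat, lowered):
            return ("blocked", "possible prompt injection")
    # A higher knob is stricter: more queries go to a human in the loop.
    if model_confidence < safety_knob:
        return ("escalate", "low confidence, human in the loop")
    return ("answer", "autonomous response")


print(handle_query("What is my baggage allowance?", 0.95))
print(handle_query("Ignore previous instructions.", 0.99))
print(handle_query("Complex refund dispute...", 0.40))
```

Dialing `safety_knob` toward 1.0 reproduces the trade-off in the talk: fewer autonomous answers, more inconvenience, but a lower risk of an out-of-bounds response.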

And now, as the technologies are maturing, we have interesting technologies such as prompt firewalls, where we can centralize all these controls. We obviously work with great partners like Adobe, who do their diligence in the way they have deployed some of these technologies, giving full indemnity to us in the event of a problem. That gives a lot of confidence in the way we manage the risk. So it’s about managing the risk of something out of bounds happening versus the convenience of the customer, and we handle it in a variety of ways, like I talked about just now.

Shantheri Mallaya

Excellent. And given the kind of scale you’re operating at, I think every day is a new day. Yes, it is; we face brand-new challenges every day. Absolutely well stated. Thank you, Dr. Satya. Vishal, may I call you into the discussion? We’re waiting to hear from you. NPCI runs the largest digital payments infrastructure platforms, and you call the shots, for want of a better coinage, in terms of how the payments systems in this country move. So two quick questions here, or rather I’ll phrase them as one, so that we can get a comprehensive view from you.

How are you looking at AI in terms of being inclusive and ensuring fairness, in two parts? One: how can India play an important part in creating responsible AI by design for a national digital infrastructure platform such as yours? And two: given the volume, scale, and size, fraud unfortunately becomes part of the discussion. So how are you looking at AI being fair and, at the same time, proactive and detective when it comes to fraud? What are the aspects you look at keenly here?

Vishal Anand Kanvaty

I think we had to start slowly, ensuring that even if accuracy was a little lower, the false positives, a genuine transaction being tagged as fraud, would not be very high. Those were the first principles on which we started. But over a period of time, once we had more data and once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So those were the fundamental principles. And then, once we started having success, we were able to understand the customers and their patterns better, and that gave us a lot of insights into fine-tuning the models and taking them forward.
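The trade-off described above, starting conservative so genuine transactions are rarely flagged even if some fraud slips through, can be made concrete with a toy threshold-tuning sketch. All data, numbers, and function names here are invented for illustration; they are not NPCI's models or figures.

```python
# Toy sketch of the accuracy vs false-positive trade-off described above.
# label 1 = fraud, label 0 = genuine; higher score = more suspicious.

def false_positive_rate(scores, labels, threshold):
    """Share of genuine transactions (label 0) flagged as fraud."""
    genuine = [s for s, y in zip(scores, labels) if y == 0]
    flagged = [s for s in genuine if s >= threshold]
    return len(flagged) / len(genuine)


def pick_threshold(scores, labels, max_fpr=0.01):
    """Lowest threshold (catching the most fraud) that keeps FPR under the cap."""
    for t in sorted(set(scores)):
        if false_positive_rate(scores, labels, t) <= max_fpr:
            return t
    return 1.0


scores = [0.05, 0.10, 0.20, 0.30, 0.85, 0.90, 0.95]
labels = [0,    0,    0,    0,    1,    1,    1]

# With a strict cap, the chosen threshold sits above all genuine scores:
print(pick_threshold(scores, labels, max_fpr=0.0))  # 0.85
```

As more data arrives and the score distributions separate, the same cap permits a lower threshold, which is the "higher accuracy over time" dynamic in the answer.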

Absolutely. So coming to the first question that you asked: obviously the governance principles are core to it, but I would like to call out two things. One is transparency. If a customer has a transaction that has failed, they should know why it failed. Today we have built a small language model where you can go and actually chat and ask what happened to this transaction, why it was declined. Even if it was declined due to a fraudulent transaction or suspicious activity, we can actually tell the customer: this is what we observed; you normally don’t send this kind of transaction, or you have never scanned a QR before and this is the first time you’re doing so, and this is the reason why we declined it. This level of transparency is very important. Obviously we can’t have an army of people sitting and answering these questions, but building systems that answer them is very, very important. And I think we have a beautiful framework; the RBI has also given a framework for responsible AI, and the MeitY document is fairly comprehensive. So all the principles have to be adopted; there is absolutely no choice for us. And I don’t see it as a challenge at all, because in our experience it has been very helpful in ensuring the trust in the payment system is not compromised.
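As a hedged stand-in for the decline-explanation system described above: the real NPCI system uses a small language model, but a simple reason-code-to-explanation mapper is enough to illustrate the transparency principle. All codes and wording below are invented.

```python
# Hypothetical stand-in for the decline-explanation idea described above.
# The actual system uses a language model; this template mapper only
# illustrates the principle that every decline gets a plain-language reason.

DECLINE_REASONS = {
    "FIRST_TIME_QR": (
        "You have never scanned a QR like this before, so the "
        "transaction was held as a precaution."
    ),
    "UNUSUAL_AMOUNT": (
        "The amount is far above your usual pattern, so the "
        "transaction was declined as suspicious activity."
    ),
    "INSUFFICIENT_FUNDS": "The account did not have enough balance.",
}


def explain_decline(code: str) -> str:
    """Map a decline reason code to a customer-facing explanation."""
    return DECLINE_REASONS.get(code, "Declined; please contact your bank.")


print(explain_decline("FIRST_TIME_QR"))
```

The point is that the reason is surfaced to the customer automatically, without an army of people answering questions.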

Shantheri Mallaya

Absolutely. And also the fact that, as you said, given the scale you’re operating at, I’m itching to ask you some things about the human in the loop there, but maybe I’ll pick your brains offline; that’s a discussion for another time. So, Prativa, curious to know: while responsible AI remains there in letter and spirit, do you think it is at risk of being relegated to something of a large-enterprise luxury? Is it able to cut across and come down the line to aspiring businesses, growing businesses, MSMEs? Is it able to cut through the noise? What’s the responsibility of industry leaders and the larger enterprises in harmonizing a framework that can define responsible AI? How would you look at this?

Is it at risk?

Prativa Mohapatra

Absolutely, yes. I think we stand at a time when, among the creators of AI technology, the divide between the big guys and the small guys might just become very stark. And coming down to the users of AI, that divide can also open up between the big enterprises and the MSMEs, who are in a big rush to make a profit and do something. Hence, it is the responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It’s a tongue twister, but the responsibility is very, very big right now. Again, to take Adobe’s example: while the entire AI big bang started happening after November 2022, so early 2023 and 2024, our models, and our Content Authenticity Initiative, date from 2019.

So I think that’s a big thing. Large enterprises who create technologies are absolutely responsible, and those frameworks now being taken up by many more is again an act of responsibility back to the business. So the creators of these technologies have to come together and keep creating these methods and methodologies for others to adopt. Now, the users of these enterprise-grade AI technologies: it’s very hard. Ten years back we had digital transformation; now we are having AI transformation. So the big companies have to quickly create a new org structure and build up their legal teams, which, by the way, had just finished mulling over the digital guidelines of various continents and countries, and now have to go through the AI guidelines of countries.

So you have to infuse more people into those legal teams. Small organizations cannot do that. The people, process, and technology changes required to adopt this, the big guys can maneuver: shift people, take people out from here and put them there. The MSMEs don’t have that luxury. So I guess the creators have to create frameworks so the right technology is created; the users, the big guys, have to quickly share the methodology; and then the other stakeholders, like the service providers, also quickly have to make the shift. I come from an industry where we went from custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?

Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since we have been hearing that AI is a civilizational change, that it is similar to electricity and will change everything, because of its impact at the societal level, on each one of us, the governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem. Absolutely.

Shantheri Mallaya

Very rightly stated about the fact that there is a larger collective responsibility on the bigger players in trying to define the standards; I think it’s very critical. Amol, if I may ask you: as Prativa said, there is an accountability. MSMEs are growing, and there is also policy that supports that growth at this point of time. So in their hurry to scale and innovate, they often forget what guardrails and consequences they have to face when it comes to their AI policies, strategies, and implementations. So what’s the role of the ecosystem, the industry, industry bodies, and the entire ecosystem at large, in helping responsible AI move in letter and spirit?

Amol Deshpande

Shantheri, I think the first step towards being responsible towards anything is awareness. So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for it. That’s the first thing. Second comes the action part: awareness, action, and then you demonstrate it through your products and services, or whatever you are trying to create, and generate that kind of impact. How does that percolate? I echo the sentiment Prativa mentioned here: the big players have to come up with those frameworks. Those frameworks need to get translated, and industry partnership, through the industry bodies, is a very key thing here.

That’s where the learnings have to be disseminated. Second, it’s more of a demand-and-supply kind of thing. If the supply comes with the right guardrails and responsible aspects as part of the framework, then naturally the suppliers start aligning to it. For a business like ours, which deals in everything from infrastructure to healthcare, and IT to agriculture and tyres, it is a very diverse set, and there are different kinds of templates we need. Organizations like us have the responsibility of creating a framework which will be fair to us as well as to our customers and partners, and those learnings can be shared across the value chain, where MSMEs may not have access to that kind of information, and that too domain-specific.

Mind you, this would change. It’s not that this one guardrail construct will work for everybody; it would vary from industry to industry, function to function. And that kind of cascading through industry bodies like FICCI and others is, I think, very, very critical.

Shantheri Mallaya

Thank you for that, Amol. Dr. Satya, as we know, there are now plenty of global regulations, rules, and recommendations: the EU AI Act, UNESCO’s recommendations, the OECD principles, and so on, and India is also inching towards developing its own strategies, policies, and approaches. So the real leadership question that remains is: how are we looking at marrying global best practices with the diversity, the scale, the fire in the belly that India has at this point of time? We are really gearing up to go, but how are we looking at it? And besides, of course, we have a lot of domestic, industry-wise regulations as well; we have regulators; we have the DPDP Act; so many things have come in. How are we going to marry all of this and create harmony?

Dr. Satya Ramaswamy

Absolutely. I think, taking Air India: we are an international airline, so we operate in many countries. For example, we go to North America and the US, where the Federal Aviation Administration is the key regulator; then we go to places all over Europe; and obviously we operate in India, where the DGCA is the regulator, and they are doing a great job overseeing this industry; and likewise in other parts of the world. So by nature, we are geared to looking at the regulation in all parts of the world and being in compliance. Our customers are international, and when we operate in these international geographies, we have to comply with the appropriate regulations. And again, aviation is a very safety-critical industry, right?

So what we do has a direct impact on the safety of the customers that we carry, and these notions are well embedded in the industry because of this, because it’s highly regulated. For example, many of these planes can practically land themselves. I was in a simulator last week for an Airbus A320, landing the plane at San Francisco Airport. As we were coming in, the plane was set at seven miles from touchdown, and my trainer pilot gave me the controls, so I could run the plane practically on autopilot all the way. At the same time, there is a red button on the joystick, so at any moment, if I feel that the airplane or the autopilot is not doing the right thing, I can quickly cancel it and take back control. So this concept is well embedded in the airline industry. We know that we need to obey the regulations and do the right thing that is safe for the customer, but we also let the human in the loop take control if at any moment we feel safety is at risk. So, bottom line: we comply with all the regulations, and it doesn’t in any way constrain Indian innovation. For example, like I mentioned, we launched the global airline industry’s first generative AI virtual agent out of India, and we have not had any challenge with any of the regulations, because we comply with all of them and we work with partners, like Adobe, who approach it in the same spirit.

Absolutely.

Shantheri Mallaya

So, Vishal, taking a thread from what Dr. Satya said, I'll close with you here. Is industry-led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in some other related industries who said, you know, it's very difficult to answer this. But self-regulation may be a way forward, given the scale we are operating at. I'd like to know your thoughts.

Vishal Anand Kanvaty

Yeah, I think regulations are definitely required, especially because AI can go berserk. Like the example I gave you on transactions: today, all UPI transactions could get declined. That's where we have a check that says this is the only percentage I can decline, even if I have to let go of the other transactions, right? So those safeguards are very much required. And when this has to work across the ecosystem, I think regulations are mandatory. Obviously it has to be consultative, and we have to work with everyone, but it's important.

While all of us realize it's a great opportunity and the innovation can really scale up, I think regulation is something we have to take as part of our initiatives, embed into our systems and then take forward; otherwise the chances of this becoming a challenge for us are really high.
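The safeguard Vishal describes, a hard cap on the percentage of transactions an automated risk check is allowed to decline, can be sketched in a few lines. This is a minimal illustration, not NPCI's actual implementation; the class name, the 2% cap, the window size, and the review-routing behaviour are all assumptions for the example.

```python
from collections import deque

class DeclineRateGuard:
    """Hypothetical guardrail: caps the share of transactions that an
    automated risk model is allowed to decline within a rolling window."""

    def __init__(self, max_decline_rate: float = 0.02, window: int = 1000):
        self.max_decline_rate = max_decline_rate   # e.g. at most 2% declines
        self.recent = deque(maxlen=window)         # 1 = declined, 0 = approved

    def decide(self, model_wants_decline: bool) -> str:
        # Would declining this transaction push us over the cap?
        rate_if_declined = (sum(self.recent) + 1) / self.recent.maxlen
        if model_wants_decline and rate_if_declined <= self.max_decline_rate:
            self.recent.append(1)
            return "DECLINE"
        self.recent.append(0)
        # Cap reached: "let go of" the decline and route the transaction
        # to human review instead of blocking the customer outright.
        return "APPROVE_WITH_REVIEW" if model_wants_decline else "APPROVE"

guard = DeclineRateGuard(max_decline_rate=0.02, window=100)
print(guard.decide(True))   # "DECLINE" — the first decline fits under the cap
```

The design choice worth noting is that hitting the cap does not silently approve risky transactions; it degrades to a human-in-the-loop path, which mirrors the ecosystem-wide safeguard described above.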

Shantheri Mallaya

Fair enough. I think in an economy that is maturing, regulatory intervention is an inevitability that we must welcome at some level. Great discussion; I think this was fantastic. I'm itching to ask you more, but I think we'll have to call this discussion to a close. Thank you so much. Let's put our hands together for our esteemed panelists; this was really nice. May I now invite Ms. Sarika Guliani, Senior Director and Head of AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI. Thank you so much, Sarika, please.

Sarika Guliani

First of all, what an insightful session. I would say that if I start capturing the thoughts, two minutes and thirty-six seconds would not do justice to them. But overall: starting with Andy mentioning the initiatives Adobe has been able to take through and how you are responsibly developing content; Prativa talking about ART, which is really interesting, whether you look at accountability, responsibility or transparency; and of course Amol mentioning all the five layers and how the responsible development of AI needs to be orchestrated across them. Dr. Satya, no second thoughts on it, and the same goes for NPCI: the kind of work the national carrier of India is doing, and what NPCI is handling, has to strike a balance between responsible AI, efficiency, and the actions that can actually be taken. We left it on the note of what regulation is required, and that question alone would require another session, because there would be people arguing for light-touch regulation versus balanced regulation. As part of FICCI, and from the discussions we heard here, we feel that responsibility is no longer a compliance check; it is a commitment to developing technology that carries shared human values. The decisions we take now, not just the words we discuss here, will define our future: what we choose to create is what gets defined. That choice runs through everything you have heard the panelists talk about, from the input side to the output side, with very good examples of carrying it through the whole process. So we simply feel that, whatever the layer, it has to be developed with people in mind, and the theme of the summit, people, planet and progress, should be kept in mind in any technological innovation, keeping the principles of responsible AI in view.

That is something we strongly feel and support, and with that I would like to thank our esteemed panelists. Thank you, Andy, of course, for joining us today; Shantheri, for moderating and capturing it well in time; and of course our panelists: Dr. Satya Ramaswamy, Chief Digital and Technology Officer, Air India; Mr. Amol Deshpande, Chief Digital Officer and Head of Innovation, RPG Group; Mr. Vishal Kanwati, CTO, NPCI; and Ms. Prativa Mohapatra, Managing Director, Adobe. And of course my lovely audience, with whom we could sail through this. As part of FICCI, we are thankful that we could have this joint session with Adobe, the Adobe team, Nita and Nanya who worked with my team, and the people in the background who delivered it.

So thank you all for joining us. We don't end the discussion here; we end the session here. FICCI is committed to taking this dialogue further into action with the support of the players, and we look forward to you joining us then. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (7)
Factual Notes — Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The session was presented by Adobe in association with FICCI and titled “Responsible AI from Principles to Practice in Corporate India.””

The knowledge base explicitly states that the discussion titled “Responsible AI from Principles to Practice in Corporate India” was presented by Adobe in association with FICCI, confirming the partnership and session title [S2].

Confirmed (high)

“EU AI Act’s enforcement provisions take effect in August.”

EU AI Act enforcement begins in August, with oversight authorities appointed and penalties enforceable from 2 August (and the Act itself entered into force on 1 August 2024) [S72] and [S73].

Confirmed (high)

“Adobe leads the Content Provenance and Authenticity (C2PA) credentials, an open, free, cross‑industry standard that embeds provenance metadata directly into media files.”

C2PA is described as a technical standard that enables creators to attach cryptographically signed provenance metadata to media, and is supported by Adobe among other companies, confirming its open, cross-industry nature [S37] and [S76].

Confirmed (medium)

“Amol Deshpande advocated a “bring‑your‑own‑AI” approach for organisational governance.”

The discussion notes that the phrase “bring your own AI” was highlighted and praised during the session, confirming its use by speakers such as Amol Deshpande [S1].

Additional Context (medium)

“India’s new IT rules on SGI are being implemented, requiring platforms to label synthetic content and act on it.”

India has introduced rules that obligate social-media platforms to label AI-generated/deep-fake content and remove flagged material within three hours, providing concrete detail on the regulatory environment referenced in the report [S79].

External Sources (82)
S1
Responsible AI in India Leadership Ethics & Global Impact part1_2 — -Vishal Anand Kanvaty- Chief Technology Officer, National Payments Corporation of India (NPCI)
S2
Responsible AI in India Leadership Ethics & Global Impact — -Vishal Anand Kanwati- Chief Technology Officer, National Payments Corporation of India (NPCI)
S3
Responsible AI in India Leadership Ethics & Global Impact part1_2 — The discussion concluded with Sarika Guliani from FICCI emphasising that “responsibility is not anymore a compliance che…
S4
Responsible AI in India Leadership Ethics & Global Impact — The session concluded with FICCI’s commitment to continue translating discussions into actionable frameworks. Sarika Gul…
S5
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact-part1_2 — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S6
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Dr. Satya Ramaswamy- Vishal Anand Kanvaty – Vishal Anand Kanvaty- Dr. Satya Ramaswamy Dr. Satya focuses on balancing…
S7
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S8
Responsible AI in India Leadership Ethics & Global Impact part1_2 — -Shantheri Mallaya- Editor at Economic Times, panel moderator
S9
Responsible AI in India Leadership Ethics & Global Impact — -Shantari Malaya- Editor at Economic Times, panel moderator
S10
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me qu…
S11
Responsible AI in India Leadership Ethics & Global Impact — -Prativa Mohapatra- Vice President and Managing Director of Adobe India
S12
Driving U.S. Innovation in Artificial Intelligence — 2. Amy Cohen – Executive Director, National Association of State Election Directors 3. Andy Parsons – Senior Director of…
S13
Responsible AI in India Leadership Ethics & Global Impact — -Andy Parsons- Global Head for Content Authenticity at Adobe, runs the Content Authenticity Initiative at Adobe
S14
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — And last, enterprises. Like many of yours in this room, I’m sure you’ve all heard the phrase, that are willing and excit…
S15
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S16
Responsible AI in India Leadership Ethics & Global Impact — – Andy Parsons- Amol Deshpande – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S17
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S18
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S19
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S20
Closing remarks – Charting the path forward — Importance of moving from principles to practical implementation
S21
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Safe SAIF, secure AI framework is something we have shared outside. And it is important to understand supply chain risk….
S22
Ethics and AI | Part 6 — A significant focus of the Act is placed on transparency. It mandates that users be informed when they are interacting w…
S23
Building the Next Wave of AI_ Responsible Frameworks & Standards — This comment fundamentally shifted the discussion from viewing responsibility as a constraint on innovation to seeing it…
S24
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion concluded that India’s opportunity in AI and semiconductors is real but time-bound, requiring decisive ex…
S25
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — This comment shifted the discussion from high-level policy frameworks to concrete, relatable concerns. It established a …
S26
Toward Collective Action_ Roundtable on Safe & Trusted AI — And AI is allowing this to happen at a scale that at the moment we already see disruptions, but I think there’s real ris…
S27
AI as critical infrastructure for continuity in public services — This comment provides a concrete, measurable example of how AI exclusion occurs, moving beyond abstract discussions of i…
S28
The rise and risks of synthetic media — The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in he…
S29
AI slop’s meteoric rise and the impact of synthetic content in 2026 — In December 2025, the Macquarie Dictionary, Merriam-Webster, and the American Dialect Society named ‘slop’ as the Word o…
S30
Meta India VP highlights AI’s role in ensuring user safety against misinformation — Meta India Vice President Sandhya Devanathan said the companyuses AI to combat misinformationwhile stressing that it wil…
S31
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste. Honorable Minister Vaishnav, Your Excellency’s colleagues, let me begin by thanking our host, Prime Minister Mo…
S32
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S33
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S34
Conversational AI in low income & resource settings | IGF 2023 — Finding the right balance between regulation and innovation is crucial. By addressing these issues, AI can play a signif…
S35
Open Forum #17 AI Regulation Insights From Parliaments — Balancing Innovation and Regulation There’s a critical balance needed between regulation and innovation incentives. Cou…
S36
What is it about AI that we need to regulate? — Global AI Governance Initiatives: Directions and TrajectoriesGlobal AI governance initiatives are heading toward multipl…
S37
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges Charlie Halford…
S38
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Nadja Blagojevic: Yes, very happy to. And thank you so much for having Google here. We’re very happy to be speaking with…
S39
WSIS Action Line C7 E-environment — – **Governance and Implementation Challenges**: Common barriers across all initiatives including unclear policy framewor…
S40
Advancing Scientific AI with Safety Ethics and Responsibility — -Global South Perspectives and Adaptation: A significant focus was placed on how emerging scientific powers can shape AI…
S41
Laying the foundations for AI governance — – Industry attitudes toward regulation: whether companies genuinely want regulation or resist it due to power concentrat…
S42
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Andy Parsons positioned regulation as helping enterprises move from reactive to proactive responsible AI adoption. The u…
S43
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Agent Standards Initiative focuses on industry-driven, consensus-based voluntary standards rather than regulation There…
S44
Industry leaders partner to promote responsible AI development — Anthropic, Google, Microsoft, and OpenAI, four of the most influential AI companies,have joined to establishtheFrontier …
S45
Building the Next Wave of AI_ Responsible Frameworks & Standards — I think there is a significant role the governments, innovation hubs, academia, and startups have to play in developing …
S46
Safe and Responsible AI at Scale Practical Pathways — A sustainable data economy requires clear incentive models with guaranteed trust, value creation, and exchangeability me…
S47
AI for agriculture Scaling Intelegence for food and climate resiliance — The minister emphasizes that artificial intelligence in agriculture should rest on reliable data sources, be governed by…
S48
Opening address of the co-chairs of the AI Governance Dialogue — Infrastructure | Legal and regulatory International technical standards and their role to make sure that policy and reg…
S49
Responsible AI in India Leadership Ethics & Global Impact — Aviation industry’s safety-critical nature provides embedded concepts of human-in-the-loop control and regulatory compli…
S50
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S51
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Matilda Road:Mathilda, over to you. Thank you, Florian. Good morning everyone. It’s great to see so many of you here and…
S52
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — in the world in terms of policy and regulation. When Vision 2030 was launched by His Royal Highness the Crown Prince, we…
S53
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S54
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S55
WS #283 AI Agents: Ensuring Responsible Deployment — User control and human oversight are essential safeguards, particularly for high-impact decisions that are difficult to …
S56
Agentic AI in Focus Opportunities Risks and Governance — All panelists emphasized the critical importance of enterprise guardrails and human oversight. They stressed that while …
S57
Policy Guidelines — – ◾ Section 1: The Development of Open Access to Scientific Information and Research , gives an overview of the definiti…
S58
Is the AI bubble about to burst? Five causes and five scenarios — Historically,open systems often win in the long run– think of the internet, HTML, and Linux. They become standards, attr…
S59
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The level of consensus among the speakers was relatively high, particularly on the benefits and potential applications o…
S60
Comprehensive Report: European Approaches to AI Regulation and Governance — And how would the downstream provider offering then this final system to the border control or to the, for instance, to …
S61
Google to require disclosure of AI-generated content in political ads — Googleis implementing new rules requiring political ads on its platforms to disclose when images and audio are generated…
S62
Day 0 Event #261 Navigating Ethical Dilemmas in AI-Generated Content — Human rights | Legal and regulatory | Sociocultural Information Integrity and Human Rights Framework There must be dis…
S63
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — – Ioanna Ntinou- Mark Gachara Example of energy efficiency passes for houses in Germany and EU that are obligatory, mak…
S64
Responsible AI in India Leadership Ethics & Global Impact — And our customers are international, and when we operate in this international geographies, we have to comply with the a…
S65
Responsible AI in India Leadership Ethics & Global Impact part1_2 — This comment reframes the entire discussion from theoretical principles to practical implementation. It shifts the focus…
S66
What is it about AI that we need to regulate? — Global AI Governance Initiatives: Directions and TrajectoriesGlobal AI governance initiatives are heading toward multipl…
S67
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges Charlie Halford…
S68
From principles to practice: Governing advanced AI in action — – **Implementation Challenges Across Jurisdictions**: Participants highlighted the tension between rapid technological a…
S69
WSIS Action Line C7 E-environment — – **Governance and Implementation Challenges**: Common barriers across all initiatives including unclear policy framewor…
S70
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Examples of sectoral self-regulations are in the case of Mauritius in the perspective of increasing the capacity of exis…
S71
How AI in 2026 will transform management roles and organisational design — In 2026, AI will transform management structures and automate tasks as companies strive to demonstrate real value. By 20…
S72
EU AI Act oversight and fines begin this August — A new phase of the EU AI Acttakes effect on 2 August, requiring member states to appoint oversight authorities and enfor…
S73
EU AI Act officially comes into force — The world’s first comprehensive AI law, known as the EU AI Act, officially came intoforceon 1 August 2024, marking a sig…
S74
Keynotes — Legal and regulatory | Human rights O’Flaherty calls for the EU to maintain its commitment to enforcing the Digital Ser…
S75
EU AI Act published in Official Journal, initiating countdown to legal deadlines — The European Union has finalised its AI Act, a significant regulatory framework aimed at governing the use of AI within …
S76
Certifying humanity: Labeling content amid AI flood — These debates are no longer theoretical. Provenance-based initiatives such as theContent Authenticity Initiative (C2PA),…
S77
Day 0 Event #12 Tackling Misinformation with Information Literacy — Zoe Darma: to start with a quiz and that there are no wrong answers, but there actually are. There actually are right …
S78
Day 0 Event #265 Using Digital Platforms to Promote Info Integrity — Gisella Lomax connected online misinformation to devastating real-world consequences: “Information risks such as hate sp…
S79
India enforces a three-hour removal rule for AI-generated deepfake content — Strict new ruleshave been introducedin India for social media platforms in an effort to curb the spread of AI-generated …
S80
Harnessing Collective AI for India’s Social and Economic Development — <strong>Moderator:</strong> sci -fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S81
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S82
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Andy Parsons
7 arguments · 190 words per minute · 2010 words · 632 seconds
Argument 1
Principles‑to‑practice imperative (Andy Parsons)
EXPLANATION
Andy stresses that responsible AI must move beyond abstract principles and become a demonstrable part of corporate compliance and strategy. He frames this shift as essential for 2026, when responsibility will be both a regulatory requirement and a business opportunity.
EVIDENCE
He notes that responsible AI will stop being a slide in a deck and become part of a compliance strategy and an important opportunity, and that the panel’s theme is “the shift from principles to provable practice” [33-34]. He also points out that responsibility will become a discipline rather than a mere policy statement [32-33].
MAJOR DISCUSSION POINT
Principles‑to‑practice imperative (Andy Parsons)
Argument 2
C2PA content credentials as an open, interoperable standard (Andy Parsons)
EXPLANATION
Andy describes the C2PA content credentials as an open, cross‑industry standard that attaches provenance information to any media asset. The standard is designed to be freely adoptable and interoperable across tools and platforms.
EVIDENCE
He explains that five years of work produced the open C2PA standard, that a C2PA symbol appears on LinkedIn, and that the credentials provide transparent context for videos, audio, or images [61-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Andy’s description of C2PA matches the external mention of an open, free C2PA content credentials standard developed five years ago [S1][S2].
MAJOR DISCUSSION POINT
C2PA content credentials as an open, interoperable standard (Andy Parsons)
AGREED WITH
Prativa Mohapatra, Vishal Anand Kanvaty, Moderator, Sarika Guliani
DISAGREED WITH
Amol Deshpande, Prativa Mohapatra
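The provenance mechanism in this argument can be illustrated in miniature. The sketch below is not the real C2PA format — actual Content Credentials use certificate-based signatures over manifests embedded in the asset itself — it only shows the core idea of binding a signed record of origin and edit actions to the exact bytes of a media file, so that any alteration breaks the binding. The HMAC key, field names, and helper functions are assumptions for illustration.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # placeholder; real C2PA uses X.509 certificates

def make_manifest(asset_bytes: bytes, tool: str, actions: list) -> dict:
    """Build a toy provenance manifest tied to the asset's content hash."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "generator": tool,
        "actions": actions,  # e.g. ["created", "ai_generated"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check the signature AND that the manifest matches these exact bytes."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and \
        claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()

image = b"\x89PNG...fake image bytes"
m = make_manifest(image, "ExampleEditor 1.0", ["created", "ai_generated"])
print(verify_manifest(image, m))          # True: untampered
print(verify_manifest(image + b"x", m))   # False: any edit breaks the binding
```

The interoperability point Andy makes maps onto this sketch: because the manifest format and verification rule are open, any tool or platform can check credentials without depending on the software that produced them.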
Argument 3
Need for industry‑wide, standards‑based infrastructure (Andy Parsons)
EXPLANATION
Andy argues that a shared, standards‑based infrastructure for content trust is essential and must not be owned by any single company. He calls for an open, interoperable layer that any organization can adopt to embed transparency into AI‑generated content.
EVIDENCE
He highlights a cross-industry coalition that includes Adobe, Microsoft, BBC, OpenAI, Sony, Qualcomm and others, creating an infrastructure layer for content trust that is standards-based, non-proprietary, and available to everyone [66-70] and stresses that this philosophy is especially important for India [71-73].
MAJOR DISCUSSION POINT
Need for industry‑wide, standards‑based infrastructure (Andy Parsons)
AGREED WITH
Prativa Mohapatra, Amol Deshpande, Vishal Anand Kanvaty
DISAGREED WITH
Amol Deshpande, Prativa Mohapatra
Argument 4
EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons)
EXPLANATION
Andy points out that emerging regulatory regimes—such as the EU AI Act, California’s AI law, and India’s new IT rules—are compelling organizations to embed responsible AI practices now. He frames regulation as a catalyst for good practices rather than a purely punitive force.
EVIDENCE
He cites the EU AI Act’s enforcement provisions taking effect in August, the first U.S. state law in California, and India’s new IT rules on SGI, noting that India is actively shaping its own path [25-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The focus on the EU AI Act’s transparency requirements and balanced regulation is reflected in the EU AI Act transparency provisions [S22] and discussions on balancing regulation and innovation [S34][S35].
MAJOR DISCUSSION POINT
EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons)
AGREED WITH
Vishal Anand Kanvaty, Dr. Satya Ramaswamy, Sarika Guliani
DISAGREED WITH
Vishal Anand Kanvaty, Sarika Guliani
Argument 5
Embedding responsible AI at the core of products is essential rather than treating it as a bolt‑on feature
EXPLANATION
Andy argues that responsible AI must be baked into the core architecture of tools, not added later as an afterthought, to ensure genuine trust and provenance.
EVIDENCE
He explains that five years ago Adobe decided that responsible AI via content transparency had to be baked into the core of products like Photoshop and Premiere, not grafted on as a feature [57-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on baking transparency into tools rather than grafting it on is echoed in external notes about core integration of content transparency [S1][S5].
MAJOR DISCUSSION POINT
Core integration of responsible AI into products (Andy Parsons)
Argument 6
The AI trust crisis is real, concrete and impacts everyday users and businesses
EXPLANATION
Andy points out that the trust crisis caused by AI‑generated content is a tangible, daily problem affecting consumers, children, and enterprises across India’s diverse linguistic landscape.
EVIDENCE
He describes the trust crisis with AI as real, concrete, happening every day to children, businesses and individuals, especially across India’s cultural and linguistic diversity [37-39].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The real-world trust erosion and synthetic media risks are discussed in roundtable remarks on trust breakdown [S26] and the rise of synthetic media [S28][S29].
MAJOR DISCUSSION POINT
Real‑world AI trust crisis (Andy Parsons)
Argument 7
India’s massive digital population makes synthetic content and AI‑generated misinformation operational risks for businesses
EXPLANATION
Andy highlights that with hundreds of millions of daily digital consumers, AI‑generated misinformation is not abstract but an operational risk that enterprises must manage.
EVIDENCE
He notes that India has the world’s largest digital population, and that synthetic content and AI-generated misinformation are real operational risks for businesses [46-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s large digital user base and misinformation challenges are highlighted in the Meta India VP remarks on AI combating misinformation [S30] and the broader risks of synthetic media [S28].
MAJOR DISCUSSION POINT
Operational risks of AI‑generated misinformation in India (Andy Parsons)
Shantheri Mallaya
3 arguments · 159 words per minute · 1631 words · 611 seconds
Argument 1
Translating principles into enterprise strategy (Shantheri Mallaya)
EXPLANATION
Shantheri frames the central challenge as moving responsible‑AI principles—fairness, accountability, transparency, privacy, inclusivity—into concrete enterprise strategy frameworks. She asks panelists to explain how these values can be operationalised in real business contexts.
EVIDENCE
In her opening she asks how responsible-AI principles will be realistically translated into enterprise strategy frameworks and how organisations will go about it [144-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from principles to practice is also noted in the closing remarks [S20] and the responsible AI as an enabler discussion [S23].
MAJOR DISCUSSION POINT
Translating principles into enterprise strategy (Shantheri Mallaya)
Argument 2
India is positioning itself as a global leader in trustworthy and inclusive AI
EXPLANATION
Shantheri highlights that India is charting the course for the world in building trustworthy and inclusive AI, indicating a leadership role on the international stage.
EVIDENCE
She remarks that India is really charting the course for the world and that building trustworthy and inclusive AI is a momentous time for the country [135-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s leadership in trustworthy AI is highlighted in summit remarks on inclusive AI development [S31] and the global vision plenary noting India’s role [S32].
MAJOR DISCUSSION POINT
India as a global leader in trustworthy and inclusive AI (Shantheri Mallaya)
Argument 3
Balancing AI‑driven innovation with regulation and user experience is essential
EXPLANATION
She stresses the need to balance rapid AI innovation with regulatory compliance and maintaining a high-quality user and customer experience.
EVIDENCE
She asks how to balance AI-driven innovation, regulation, accountability, operational efficiency, and user experience within large-scale aviation operations [245-249].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for balance between regulation and innovation is discussed in the IGF session on conversational AI [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Balancing innovation, regulation and user experience (Shantheri Mallaya)
Sarika Guliani
2 arguments · 142 words per minute · 590 words · 249 seconds
Argument 1
Commitment beyond compliance, embedding human values (Sarika Guliani)
EXPLANATION
Sarika argues that responsible AI should be seen as a commitment to shared human values rather than a mere compliance checkbox. She stresses that technology choices now shape the future, and that ethical considerations must be embedded from the outset.
EVIDENCE
She states that responsibility is no longer a compliance check but a commitment of technology with shared human values, and that the choice of what to create defines our future, not just words on a slide [379-382].
MAJOR DISCUSSION POINT
Commitment beyond compliance, embedding human values (Sarika Guliani)
AGREED WITH
Andy Parsons, Prativa Mohapatra, Vishal Anand Kanvaty, Moderator
Argument 2
Regulation should be balanced, avoiding overly heavy‑handed approaches
EXPLANATION
Sarika argues that while regulation is necessary, it should be proportionate and not stifle innovation, advocating for a light‑touch regulatory approach where appropriate.
EVIDENCE
She notes that the discussion would need another session to compare light-touch versus balanced regulation, indicating a preference for proportionate regulatory frameworks [379-382].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balanced regulatory approaches are advocated in the IGF discussion on regulation vs innovation [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Need for balanced, proportionate regulation (Sarika Guliani)
Prativa Mohapatra
6 arguments · 156 words per minute · 1126 words · 432 seconds
Argument 1
Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra)
EXPLANATION
Prativa explains Adobe’s internal “ART” philosophy—Accountability, Responsibility, Transparency—and shows how it is baked into its generative AI tool Firefly and the Acrobat Assistant. This ensures that outputs are traceable, lawful, and trustworthy.
EVIDENCE
She describes how Firefly embeds a “nutrition label” that guarantees lawful, non-infringing output, and how Acrobat Assistant follows the same provenance principles, allowing users to trace the origin of content and ensure compliance [197-210] and [222-228].
MAJOR DISCUSSION POINT
Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra)
Argument 2
Product‑level governance methodology with hundreds of checks (Prativa Mohapatra)
EXPLANATION
Prativa notes that every new Adobe product undergoes a rigorous, secure development methodology that includes hundreds of validation steps, embedding responsible‑AI principles directly into the product lifecycle.
EVIDENCE
She states that each new product goes through a very strong, secure methodology with hundreds of steps, ensuring principles are embedded into creation processes [206-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The mention of a strong, secure methodology with hundreds of validation steps aligns with external commentary on product governance processes [S5].
MAJOR DISCUSSION POINT
Product‑level governance methodology with hundreds of checks (Prativa Mohapatra)
Argument 3
Risk of a divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra)
EXPLANATION
Prativa warns that a gap could emerge between large AI developers and smaller firms that lack resources, emphasizing the need for free, open frameworks that all can adopt. She cites Adobe’s early C2PA work as an example of making standards freely available.
EVIDENCE
She highlights the stark divide between big and small enterprises, the importance of free, accessible frameworks, and references Adobe’s 2019 Content Authenticity Initiative as a pioneering open effort [297-304], noting that creators must continue providing such frameworks [305-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for open, free frameworks for all enterprises echo the discussion of open standards and inclusive AI leadership [S23][S31].
MAJOR DISCUSSION POINT
Risk of a divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra)
Argument 4
Large players must create reusable, open frameworks that MSMEs can adopt (Prativa Mohapatra)
EXPLANATION
Prativa argues that large enterprises should develop reusable, open‑source frameworks that smaller businesses can leverage, ensuring responsible AI does not become a luxury only for the well‑resourced. She calls for ongoing collaboration among technology creators to extend methodologies to the broader ecosystem.
EVIDENCE
She states that large enterprises must create frameworks that MSMEs can adopt, and that creators need to keep building methods for others to use, emphasizing the need for open, reusable solutions [315-317].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation for large firms to provide reusable, open frameworks matches the emphasis on open standards and inclusive AI development [S23][S31].
MAJOR DISCUSSION POINT
Large players must create reusable, open frameworks that MSMEs can adopt (Prativa Mohapatra)
Argument 5
Legal, compliance and ethics teams must redesign processes to embed AI governance
EXPLANATION
Prativa emphasizes that enterprises need to revisit and redesign their legal, compliance and ethical processes to incorporate AI governance throughout the organization.
EVIDENCE
She states that every organization has legal and compliance teams whose processes must be revisited and redesigned to address AI compliance, ensuring all three pillars (legal, compliance and ethics) are covered [234-237].
MAJOR DISCUSSION POINT
Re‑designing legal and compliance processes for AI governance (Prativa Mohapatra)
Argument 6
AI governance requires integration of people, process and technology, reflecting the ART philosophy
EXPLANATION
She outlines that responsible AI must combine accountability, responsibility, and transparency across people, processes, and technology, mirroring the ART framework used at Adobe.
EVIDENCE
She notes that enterprises need legal, compliance, and ethical strategies together, and that AI governance must cover all three dimensions (people, process, technology) to be ready for the future [233-236].
MAJOR DISCUSSION POINT
Holistic integration of people, process and technology in AI governance (Prativa Mohapatra)
Amol Deshpande
5 arguments · 181 words per minute · 759 words · 251 seconds
Argument 1
Orchestration across all AI layers; people‑centric governance (Amol Deshpande)
EXPLANATION
Amol stresses that responsible AI must be orchestrated across every layer of the AI stack and that people are a critical stakeholder. He advocates a “bring‑your‑own‑AI” approach with guardrails, rather than a one‑size‑fits‑all solution.
EVIDENCE
He explains that responsibility must exist at every AI layer, that people are a very important stakeholder, and that a scalable, safe environment with guardrails is essential, describing a “bring your own AI” scenario and the need for templates [162-166] and [169-176] and [177-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guardrails across the AI stack and people-centric governance are highlighted in the generative AI guardrails discussion [S33].
MAJOR DISCUSSION POINT
Orchestration across all AI layers; people‑centric governance (Amol Deshpande)
Argument 2
Awareness → action → demonstration cycle; role of industry bodies in disseminating frameworks (Amol Deshpande)
EXPLANATION
Amol outlines a three‑step process—awareness, action, demonstration—to embed responsible AI, and highlights the pivotal role of industry bodies in spreading best‑practice frameworks across sectors.
EVIDENCE
He states that the first step is awareness, followed by action, then demonstration, and that industry bodies (e.g., FICCI) are crucial for disseminating learnings and templates across the value chain [332-340] and [341-347].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The three-step cycle and role of industry bodies are reflected in the IGF roundtable on safe AI and the open forum on regulation insights [S34][S35].
MAJOR DISCUSSION POINT
Awareness → action → demonstration cycle; role of industry bodies in disseminating frameworks (Amol Deshpande)
DISAGREED WITH
Vishal Anand Kanvaty, Sarika Guliani
Argument 3
RPG Group’s need for flexible, scalable AI governance across diverse business units (Amol Deshpande)
EXPLANATION
Amol describes the RPG Group’s challenge of governing AI across a heterogeneous conglomerate, emphasizing that a single solution cannot fit all units and that flexible, scalable guardrails are required.
EVIDENCE
He notes the need for a scalable, safe environment with guardrails, that one size doesn’t fit all, and that templates are being exercised within the enterprise across diverse business units [168-180] and [181-184].
MAJOR DISCUSSION POINT
RPG Group’s need for flexible, scalable AI governance across diverse business units (Amol Deshpande)
Argument 4
Industry bodies help cascade standards and templates to MSMEs lacking resources (Amol Deshpande)
EXPLANATION
Amol argues that industry associations can bridge the resource gap for MSMEs by sharing standards, templates, and best practices, enabling smaller firms to adopt responsible AI without building frameworks from scratch.
EVIDENCE
He mentions that organizations like FICCI can help cascade frameworks, that MSMEs lack access to such information, and that industry bodies are critical for sharing learnings across sectors [344-347].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry associations bridging resource gaps for MSMEs are discussed in the IGF session on regulation and innovation [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Industry bodies help cascade standards and templates to MSMEs lacking resources (Amol Deshpande)
Argument 5
Enterprises need a scalable, safe AI environment with built‑in guardrails
EXPLANATION
Amol stresses that large organisations must provide a scalable environment where AI operates safely, with guardrails that protect against misuse while allowing flexibility.
EVIDENCE
He describes the need for a scalable, safe environment protected with guardrails as a key requirement for the enterprise [180-182].
MAJOR DISCUSSION POINT
Scalable safe AI environment with guardrails (Amol Deshpande)
Dr. Satya Ramaswamy
4 arguments · 183 words per minute · 1035 words · 338 seconds
Argument 1
Global regulatory compliance coexists with innovation; safety‑critical aviation example (Dr. Satya Ramaswamy)
EXPLANATION
Satya explains that Air India must comply with a patchwork of international regulations (US, EU, India) while still innovating with AI. He stresses that safety‑critical aviation standards drive rigorous compliance without stifling innovation.
EVIDENCE
He notes Air India’s operations across multiple jurisdictions, the need to obey DGCA, FAA, and EU regulators, and that compliance does not constrain Indian innovation, citing the partnership with Adobe and the launch of a global AI virtual assistant [351-364].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing global regulatory compliance with innovation is discussed in the IGF session on regulation and innovation [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Global regulatory compliance coexists with innovation; safety‑critical aviation example (Dr. Satya Ramaswamy)
Argument 2
Air India’s generative‑AI virtual assistant with safety guardrails and continuous monitoring (Dr. Satya Ramaswamy)
EXPLANATION
Satya details Air India’s AI‑driven virtual assistant that handles millions of customer queries, operates with a 97 % autonomous success rate, and incorporates multiple safety guardrails, continuous monitoring, and user feedback loops to prevent misuse.
EVIDENCE
He describes the launch in May 2023, handling 13.5 million queries, 97 % autonomous handling, safety knobs, jailbreak prevention, real-time monitoring, and the use of generative AI to watch its own performance, with Adobe providing indemnity [257-270] and [261-268].
MAJOR DISCUSSION POINT
Air India’s generative‑AI virtual assistant with safety guardrails and continuous monitoring (Dr. Satya Ramaswamy)
Argument 3
Safety‑critical aviation demands continuous human‑in‑the‑loop oversight of AI systems
EXPLANATION
Satya explains that because aviation is safety‑critical, AI systems must always allow a human operator to intervene instantly, ensuring safety overrides automated decisions.
EVIDENCE
He describes the red button on the joystick that lets a pilot take control at any moment if the autopilot behaves incorrectly, illustrating the human-in-the-loop safety mechanism [360-362].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop oversight for safety‑critical AI (Dr. Satya Ramaswamy)
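The “red button” idea above can be sketched in code. The following is a minimal, hypothetical illustration (not Air India’s actual system): automation proceeds only while no human has claimed control, and every automated step defers the moment the override is set.

```python
import threading


class HumanOverride:
    """Illustrative human-in-the-loop kill switch: automated actions run
    only while a human has not taken control (the 'red button' idea)."""

    def __init__(self):
        self._human_in_control = threading.Event()

    def press_red_button(self):
        # Human claims control; automation must stand down immediately.
        self._human_in_control.set()

    def release(self):
        # Human hands control back to the automated system.
        self._human_in_control.clear()

    def automated_step(self, action):
        """Run the automated action unless a human has taken over."""
        if self._human_in_control.is_set():
            return "deferred to human"
        return action()


override = HumanOverride()
print(override.automated_step(lambda: "autopilot adjusts heading"))  # runs
override.press_red_button()
print(override.automated_step(lambda: "autopilot adjusts heading"))  # deferred
```

The design choice to check the override before every step, rather than once at startup, is what makes the intervention instantaneous, mirroring the point that safety must override automated decisions at any moment.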
Argument 4
Partnerships with technology providers like Adobe provide indemnity and confidence in AI deployments
EXPLANATION
Satya highlights that collaborations with firms such as Adobe, which offer indemnity, give Air India confidence to adopt AI while managing risk.
EVIDENCE
He notes that Adobe provides full indemnity in case of problems, which gives a lot of confidence in managing AI risk [269-270].
MAJOR DISCUSSION POINT
Strategic tech partnerships to mitigate AI risk (Dr. Satya Ramaswamy)
Vishal Anand Kanvaty
2 arguments · 184 words per minute · 582 words · 189 seconds
Argument 1
Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty)
EXPLANATION
Vishal argues that regulatory frameworks are necessary because unchecked AI can produce harmful outcomes; safeguards embedded in law protect the ecosystem and maintain trust.
EVIDENCE
He states that regulations are required because AI can go berserk, that safeguards are mandatory to prevent such behavior, and that regulations must be embedded into systems and consulted with stakeholders [370-376].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of regulation to prevent uncontrolled AI behavior is highlighted in the IGF discussion on regulation balance [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty)
DISAGREED WITH
Amol Deshpande, Sarika Guliani
Argument 2
NPCI’s AI for fraud detection prioritising low false‑positives and transparent explanations (Vishal Anand Kanvaty)
EXPLANATION
Vishal explains NPCI’s AI‑driven fraud detection system, which aims to keep false‑positive rates low while providing transparent, user‑facing explanations for declined transactions, thereby building trust in the payment ecosystem.
EVIDENCE
He notes the priority of minimizing false positives, the development of a language model that can explain why a transaction was declined, and that this transparency aligns with RBI’s responsible-AI framework, helping maintain trust in the payment system [286-294] and [295-301].
MAJOR DISCUSSION POINT
NPCI’s AI for fraud detection prioritising low false‑positives and transparent explanations (Vishal Anand Kanvaty)
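The transparency pattern described above, replacing a bare “declined” with a reason a user can act on, can be sketched as a reason-code lookup. The codes and wording below are purely illustrative, not NPCI’s actual taxonomy or system.

```python
# Hypothetical sketch of transparent decline explanations: map internal
# risk-engine reason codes to plain-language messages for the user.
DECLINE_REASONS = {
    "VELOCITY_LIMIT": "This transaction exceeded the number of transfers "
                      "allowed in a short window.",
    "DEVICE_MISMATCH": "The request came from a device not previously "
                       "linked to this account.",
    "AMOUNT_ANOMALY": "The amount is unusually large compared with this "
                      "account's history.",
}


def explain_decline(reason_code: str) -> str:
    """Return a user-facing explanation instead of a bare 'declined'."""
    return DECLINE_REASONS.get(
        reason_code,
        "The transaction was declined by a risk check; "
        "contact support for details.",
    )


print(explain_decline("DEVICE_MISMATCH"))
```

A production system would generate such explanations from model features (the session mentions a language model for this), but the governance principle is the same: every automated decline carries an auditable, user-readable reason.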
Moderator
3 arguments · 132 words per minute · 132 words · 59 seconds
Argument 1
Responsible deployment outweighs speed of AI adoption
EXPLANATION
The moderator stresses that while AI can accelerate innovation, the priority must be on deploying it responsibly rather than merely adopting it quickly. Speed without responsibility could undermine trust and safety.
EVIDENCE
He notes that the real differentiator is not how quickly AI is adopted but how responsibly it is deployed [3-4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on responsible deployment over speed mirrors the closing remarks on moving from principles to practice [S20].
MAJOR DISCUSSION POINT
Responsible deployment outweighs speed of AI adoption (Moderator)
Argument 2
Trust, transparency and accountability are foundational for AI in corporate India
EXPLANATION
The moderator frames trust, transparency, and accountability as non‑optional, foundational elements that must underpin AI initiatives in Indian enterprises.
EVIDENCE
He declares that trust, transparency and accountability are no longer optional and are foundational for the discussion on responsible AI [5-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Foundational importance of trust, transparency and accountability is reflected in the EU AI Act transparency focus [S22] and roundtable concerns about trust breakdown [S26].
MAJOR DISCUSSION POINT
Foundational role of trust, transparency and accountability (Moderator)
Argument 3
The session aims to advance safe and trusted AI in the corporate landscape
EXPLANATION
The moderator sets the purpose of the session as focusing on advancing safe, trusted AI practices within corporations.
EVIDENCE
He states that the conversation will center on advancing safe and trusted AI in the corporate landscape [7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session’s goal aligns with the overall theme of advancing safe, trusted AI in the responsible AI discussions [S20][S23].
MAJOR DISCUSSION POINT
Advancing safe and trusted AI in corporate sector (Moderator)
Agreements
Agreement Points
Transparency and provenance of AI‑generated content must be embedded in products and made openly verifiable.
Speakers: Andy Parsons, Prativa Mohapatra, Vishal Anand Kanvaty, Moderator, Sarika Guliani
C2PA content credentials as an open, interoperable standard (Andy Parsons) Adobe’s “ART” philosophy with nutrition‑label style provenance in Firefly (Prativa Mohapatra) NPCI’s transparent explanations for declined transactions (Vishal Anand Kanvaty) Trust, transparency and accountability are foundational (Moderator) Commitment beyond compliance, embedding human values (Sarika Guliani)
All speakers stress that responsible AI requires concrete, transparent provenance mechanisms, whether via open standards like C2PA, Adobe’s built-in nutrition labels, or transaction-level explanations, so that users can see how content or decisions are generated and trust the system [5-6][61-66][74-76][209-210][293-294][379-382].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy trends emphasize mandatory disclosure of AI-generated media, as seen in Google’s upcoming political-ad rules requiring clear labeling of synthetic content [S61] and broader calls for algorithmic transparency in public-interest frameworks [S62]; NPCI’s own transparency-by-design approach for its language models reinforces this direction [S49].
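The provenance idea shared across these positions, binding a record of how content was made to the content itself, can be illustrated with a simplified manifest. This is a sketch of the concept only, not the actual C2PA specification (which defines cryptographically signed manifests embedded in media files); the field names are assumptions.

```python
import hashlib


def make_manifest(content: bytes, generator: str, model: str) -> dict:
    """Build a simplified provenance manifest: a record of the tool and
    model used, bound to a hash of the content itself."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,   # illustrative field names, not C2PA's
        "model": model,
    }


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Re-hash the content and check it still matches the manifest."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]


image = b"\x89PNG...synthetic pixels..."
manifest = make_manifest(image, generator="Firefly", model="image-v1")
print(verify_manifest(image, manifest))          # content untouched
print(verify_manifest(image + b"x", manifest))   # content was altered
```

Real content credentials add digital signatures and a chain of edits so the manifest itself cannot be forged, but the hash binding above is the core mechanism that lets anyone check whether content and its stated provenance still match.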
Open, standards‑based infrastructure and reusable frameworks are essential for scaling responsible AI across industries.
Speakers: Andy Parsons, Prativa Mohapatra, Amol Deshpande, Vishal Anand Kanvaty
Need for industry‑wide, standards‑based infrastructure (Andy Parsons) Risk of divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra) Industry bodies help cascade standards and templates to MSMEs (Amol Deshpande) RBI framework and transparent AI models as a reusable foundation (Vishal Anand Kanvaty)
The panel concurs that responsible AI cannot rely on proprietary solutions; it must be built on open, cross-industry standards and reusable frameworks that can be adopted by both large firms and MSMEs, with industry bodies playing a key dissemination role [66-70][297-304][332-340][344-347][292-301].
POLICY CONTEXT (KNOWLEDGE BASE)
International bodies promote voluntary, consensus-driven standards (e.g., the Agent Standards Initiative) to foster interoperable, responsible AI ecosystems [S43]; the AI Standards Hub and multistakeholder dialogues stress the need for open technical standards that remain adaptable to regulatory needs [S48][S51].
Regulatory frameworks are a catalyst and necessary safeguard for responsible AI, but should be balanced and proportionate.
Speakers: Andy Parsons, Vishal Anand Kanvaty, Dr. Satya Ramaswamy, Sarika Guliani
EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons) Regulation is essential to prevent AI “berserk” behaviour (Vishal Anand Kanvaty) Global regulatory compliance coexists with innovation (Dr. Satya Ramaswamy) Regulation should be balanced, avoiding overly heavy‑handed approaches (Sarika Guliani)
All agree that regulation is indispensable (acting as a catalyst, ensuring safety, and providing a level playing field) while emphasizing the need for proportionate rules that do not stifle innovation [25-27][370-376][351-364][379-382].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry perspectives acknowledge that well-designed regulation can shift firms from reactive to proactive AI governance, providing clarity and urgency for responsible practices [S42]; however, scholars warn against over-regulation and advocate proportionate, context-sensitive rules that complement existing laws [S53][S45].
Human‑in‑the‑loop oversight and guardrails are critical, especially for safety‑critical applications.
Speakers: Dr. Satya Ramaswamy, Amol Deshpande
Human‑in‑the‑loop oversight for safety‑critical aviation AI (Dr. Satya Ramaswamy) Enterprises need scalable, safe AI environments with built‑in guardrails (Amol Deshpande)
Both speakers highlight that AI systems must include real-time human oversight and robust guardrails to ensure safety, whether in aviation or broader enterprise contexts [360-362][180-182].
POLICY CONTEXT (KNOWLEDGE BASE)
Aviation safety standards embed human-in-the-loop controls and regulatory compliance, illustrating the necessity of oversight for high-risk AI systems [S49]; similar principles are echoed in broader AI governance discussions emphasizing user control and human accountability [S55][S56][S54].
Balancing rapid AI innovation with regulatory compliance and user experience is essential.
Speakers: Shantheri Mallaya, Dr. Satya Ramaswamy, Vishal Anand Kanvaty
Balancing AI‑driven innovation with regulation and user experience (Shantheri Mallaya) Global regulatory compliance coexists with innovation; safety does not constrain Indian innovation (Dr. Satya Ramaswamy) Low false‑positive rates and transparent explanations balance fraud detection with user trust (Vishal Anand Kanvaty)
The moderator and panelists agree that AI deployment must simultaneously pursue speed, compliance, and a high-quality user experience, using mechanisms such as transparent explanations and safety guardrails [245-249][351-364][286-294].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses stress the need to align fast-moving AI development with compliance mechanisms that do not hinder user experience, advocating incentive models that build trust while preserving innovation speed [S45][S46][S53].
Similar Viewpoints
Both stress that the priority is responsible AI deployment rather than merely rapid adoption, framing responsibility as a strategic imperative [3-4][33-34][5-6].
Speakers: Andy Parsons, Moderator
Responsible deployment outweighs speed of AI adoption (Moderator) Principles‑to‑practice imperative (Andy Parsons)
Both highlight the danger that responsible AI becomes a luxury for large firms and argue that industry bodies must provide open frameworks to enable MSMEs [297-304][332-340][344-347].
Speakers: Prativa Mohapatra, Amol Deshpande
Risk of divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra) Industry bodies help cascade standards and templates to MSMEs (Amol Deshpande)
Both see regulation as indispensable for safety and trust, even in highly regulated sectors like aviation and payments [351-364][370-376].
Speakers: Vishal Anand Kanvaty, Dr. Satya Ramaswamy
Regulation is essential to prevent AI “berserk” behaviour (Vishal Anand Kanvaty) Global regulatory compliance coexists with innovation; safety‑critical aviation demands compliance (Dr. Satya Ramaswamy)
Unexpected Consensus
Both a payment‑system leader (NPCI) and an airline (Air India) emphasize that AI safety must be achieved without compromising user experience, using transparent explanations and human oversight.
Speakers: Vishal Anand Kanvaty, Dr. Satya Ramaswamy
NPCI’s AI for fraud detection prioritising low false‑positives and transparent explanations (Vishal Anand Kanvaty) Air India’s generative‑AI virtual assistant with safety guardrails, continuous monitoring and human feedback (Dr. Satya Ramaswamy)
Despite operating in very different domains, both speakers converge on a model where AI safety, transparency, and user-centric design are jointly pursued, an alignment not explicitly anticipated at the start of the session [257-270][286-294].
POLICY CONTEXT (KNOWLEDGE BASE)
NPCI’s implementation of transparent small language models and Air India’s adherence to safety-critical, human-in-the-loop standards exemplify sector-specific applications of responsible AI that prioritize user experience alongside safety [S49].
An engineer (Andy Parsons) and a senior policy‑focused moderator both frame responsible AI as a strategic business opportunity rather than a compliance burden.
Speakers: Andy Parsons, Moderator
Embedding responsible AI as a leadership and operating discipline and opportunity (Andy Parsons) Trust, transparency and accountability are foundational for corporate AI (Moderator)
It is notable that a technical leader and the session moderator share a business-oriented view of responsible AI, treating it as a growth driver rather than a mere regulatory checkbox [32-33][5-6].
POLICY CONTEXT (KNOWLEDGE BASE)
Andy Parsons highlighted how emerging regulations can act as catalysts for proactive AI adoption, turning compliance into a competitive advantage, a view echoed by industry leaders who see responsible AI as a market differentiator [S42][S41].
Overall Assessment

The panel exhibits strong consensus on four core pillars: (1) embedding transparent provenance through open standards; (2) building open, reusable frameworks with industry‑body support; (3) viewing regulation as a necessary, balanced catalyst; and (4) ensuring human‑in‑the‑loop safety guardrails while balancing innovation and user experience.

High consensus across technical, business, and policy perspectives, indicating a unified direction for responsible AI implementation in India’s corporate sector. This alignment suggests that forthcoming initiatives are likely to prioritize open standards, collaborative governance, and proportionate regulation, facilitating scalable and trustworthy AI adoption.

Differences
Different Viewpoints
Extent and nature of regulation for AI
Speakers: Vishal Anand Kanvaty, Sarika Guliani, Andy Parsons
Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty) Regulation should be balanced, avoiding overly heavy‑handed approaches (Sarika Guliani) EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons)
Vishal argues that mandatory regulation is required to embed safeguards and prevent harmful AI outcomes [370-376]. Sarika counters that regulation must be proportionate and avoid stifling innovation, advocating a light-touch or balanced approach [379-382]. Andy frames regulation as a catalyst that pushes good practices rather than a punitive burden, citing the EU AI Act, California law and India’s IT rules as drivers for responsible AI [25-27][106-108]. These positions reveal a clear disagreement on how strong and prescriptive AI regulation should be.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI regulation range from calls for comprehensive safeguards to arguments for limited, sector-specific rules, reflecting divergent industry attitudes toward regulatory scope and the need for balanced policy design [S41][S42][S53].
Universal open standards versus industry‑specific, flexible frameworks
Speakers: Andy Parsons, Amol Deshpande, Prativa Mohapatra
C2PA content credentials as an open, interoperable standard (Andy Parsons) Need for industry‑wide, standards‑based infrastructure (Andy Parsons) One size doesn’t fit all… need templates per industry (Amol Deshpande) Risk of a divide… need free, accessible frameworks (Prativa Mohapatra)
Andy promotes a single, open, cross-industry standard (C2PA) that any organization can adopt, emphasizing non-proprietary, interoperable infrastructure [61-66][66-70]. Amol stresses that a “one size fits all” model is unrealistic and that each sector requires its own templates and guardrails, advocating a “bring-your-own-AI” approach [168-180]. Prativa warns that without free, open frameworks large enterprises will outpace MSMEs, underscoring the need for accessible standards to avoid a divide [297-304]. The speakers therefore disagree on whether a universal open standard can serve all sectors or whether tailored, industry-specific solutions are necessary.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between universal, open standards and adaptable, industry-specific frameworks is a recurring theme in AI governance, with initiatives like the Agent Standards Initiative advocating open, consensus-based standards while acknowledging the need for flexibility in implementation [S43][S48][S58].
Primary driver for responsible AI adoption – industry bodies versus regulatory mandates
Speakers: Amol Deshpande, Vishal Anand Kanvaty, Sarika Guliani
Awareness → action → demonstration cycle; role of industry bodies in disseminating frameworks (Amol Deshpande) Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty) Regulation should be balanced, avoiding overly heavy‑handed approaches (Sarika Guliani)
Amol emphasizes that the ecosystem should first become aware, then act, and finally demonstrate responsible AI, with industry associations (e.g., FICCI) playing a key role in cascading standards and templates to the broader market [332-340]. Vishal argues that regulation is indispensable to keep AI from behaving dangerously and must be embedded in systems [370-376]. Sarika, while acknowledging the need for regulation, calls for a proportionate, balanced approach that does not over-regulate, suggesting that industry bodies can complement but not replace regulation [379-382]. The tension lies in whether industry-led self-governance or statutory regulation should be the main engine for responsible AI.
Unexpected Differences
Open‑standard advocacy versus internal proprietary governance approaches
Speakers: Andy Parsons, Prativa Mohapatra
C2PA content credentials as an open, interoperable standard (Andy Parsons) Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra)
Andy strongly advocates for an industry-wide, free, open standard (C2PA) that any organization can adopt, emphasizing cross-industry interoperability [61-66][66-70]. Prativa, while supporting responsible AI, focuses on Adobe’s internal ART framework embedded within its own products, without explicitly championing an external open standard. This subtle divergence (external open standards versus internal proprietary governance) was not anticipated given the overall consensus on the need for transparency.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate pits open-standard advocates, who promote interoperable, community-driven specifications, against firms favoring proprietary governance models; this mirrors broader discussions on open ecosystems versus closed incumbents in technology history [S43][S44][S58].
Overall Assessment

The panelists uniformly agree that responsible AI, transparency, and accountability are essential for India’s digital future. However, they diverge on three main fronts: (1) how prescriptive regulation should be, ranging from mandatory safeguards to balanced, light‑touch frameworks; (2) whether a single open standard can satisfy all sectors or whether industry‑specific, flexible solutions are required; (3) the relative weight of industry bodies versus statutory regulation in driving adoption. These disagreements are moderate rather than polarising, reflecting differing strategic preferences rather than fundamental opposition.

Moderate disagreement – the differing views on regulatory intensity, standardisation strategy, and governance mechanisms could lead to fragmented implementation unless a coordinated consensus is reached. The implications are that policy makers and industry leaders must negotiate a hybrid model that blends baseline regulatory requirements with adaptable standards and strong industry‑body participation to avoid silos and ensure inclusive, trustworthy AI deployment.

Partial Agreements
All four speakers share the goal of achieving transparency and accountability in AI systems. Andy pushes for a global open standard (C2PA) that tags content with provenance [61-66]. Prativa describes internal product‑level governance (the ART philosophy) that embeds traceability directly into Adobe tools [197-210]. Satya highlights the necessity of a human‑in‑the‑loop safety mechanism in aviation AI [360-362]. Vishal focuses on transaction‑level transparency, providing users with explanations for AI‑driven decisions [292-294]. While the end‑goal of trustworthy AI is common, the speakers diverge on the mechanisms—global standards, internal product design, operational human oversight, or user‑facing explanations.
Speakers: Andy Parsons, Prativa Mohapatra, Satya Ramaswamy, Vishal Anand Kanvaty
C2PA content credentials as an open, interoperable standard (Andy Parsons)
Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra)
Human‑in‑the‑loop oversight for safety‑critical AI (Satya Ramaswamy)
Transparent explanations for declined transactions (Vishal Anand Kanvaty)
Takeaways
Key takeaways
Responsible AI must move from high‑level principles to provable, operational practice within enterprises.
Transparency and provenance of AI‑generated content are essential; open, interoperable standards such as C2PA enable this at scale.
Effective AI governance requires coordinated people, process, technology, and industry‑body layers – not a single checklist.
Regulatory developments (EU AI Act, India IT rules, state‑level AI laws) are viewed as catalysts that should coexist with innovation.
Sector‑specific implementations illustrate practical approaches: Air India’s guarded generative‑AI assistant, NPCI’s fraud‑detection model with transparent explanations, RPG Group’s flexible, scalable governance across diverse units.
There is a risk of a divide between large enterprises and MSMEs; open, free frameworks and industry‑wide dissemination are needed to ensure inclusive adoption.
Resolutions and action items
FICCI pledged to continue the dialogue and translate insights into concrete actions for the Indian ecosystem.
Adobe highlighted its ART (Accountability, Responsibility, Transparency) methodology and will continue embedding it in product pipelines such as Firefly and Acrobat.
Air India committed to maintain continuous monitoring and safety guardrails for its generative‑AI virtual assistant, leveraging partner technologies for risk mitigation.
NPCI will expand its transparent AI‑driven fraud‑explanation service and align it with emerging regulatory frameworks.
Industry bodies (e.g., C2PA, FICCI, sector associations) agreed to promote open standards and share governance templates to help MSMEs adopt responsible AI.
Unresolved issues
How to harmonise global AI regulations (EU AI Act, OECD, UNESCO) with India’s emerging policies and the diverse needs of different sectors.
The precise balance between industry‑led self‑regulation and mandatory regulatory intervention remains unsettled.
Effective mechanisms for consumer awareness of provenance symbols and UI design for transparency are still under development.
Specific approaches for integrating human‑in‑the‑loop oversight in high‑volume payment fraud detection were mentioned but not detailed.
Scalable, low‑cost governance frameworks that MSMEs can realistically implement without extensive legal teams were not fully resolved.
Suggested compromises
Adopt a hybrid model where open, industry‑driven standards provide the baseline, complemented by proportionate regulatory requirements to ensure safety without stifling innovation.
Implement safety guardrails that are adjustable – tighter for high‑risk contexts (aviation) and lighter for consumer‑facing services, balancing risk and user convenience.
Encourage large enterprises to create reusable, open‑source governance templates that can be cascaded to smaller firms via industry bodies.
Regulators should act as catalysts, offering guidance and frameworks while allowing flexibility for companies to innovate within those boundaries.
Thought Provoking Comments
2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity.
Frames the timeline as a decisive turning point, moving responsible AI from a nice‑to‑have to a business imperative, which sets a forward‑looking urgency for the whole panel.
Established the central theme of the session and prompted other speakers to discuss concrete ways to meet that 2026 deadline, leading to deeper talks on standards, compliance and operationalisation.
Speaker: Andy Parsons
The question is no longer ‘should we be responsible with AI?’ but ‘can your systems actually prove that you have been responsible with AI?’
Shifts the debate from philosophical agreement to measurable proof, introducing the concept of ‘provable practice’ that challenges participants to think about auditability and evidence.
Triggered a focus on provenance, metadata and standards (C2PA) and caused panelists like Prativa and Amol to reference how their organisations embed traceability into products.
Speaker: Andy Parsons
We built an open, cross‑industry standard – the C2PA content credentials – that embeds provenance directly into media files, so anyone can verify who made it, with what model, and when.
Introduces a concrete, industry‑wide solution that is non‑proprietary, highlighting collaboration over competition and providing a tangible tool for accountability.
Guided the discussion toward the importance of open standards, with later speakers (e.g., Amol and Prativa) echoing the need for interoperable frameworks and citing the C2PA as a model for other sectors.
Speaker: Andy Parsons
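The provenance idea Parsons describes can be sketched conceptually. The toy Python example below is an illustration only, not the real C2PA/JUMBF format: actual Content Credentials use the open-source C2PA SDKs with certificate-based (X.509) signatures, whereas this sketch uses a hypothetical HMAC demo key and invented field names simply to show the core concept of binding a signed "who/what" manifest to a content hash.

```python
# Toy illustration of the provenance concept behind C2PA content credentials.
# NOT the real C2PA format: field names and the HMAC demo key are hypothetical.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # placeholder; real C2PA uses certificate-based signing

def make_manifest(content: bytes, creator: str, tool: str) -> dict:
    """Create a provenance manifest cryptographically bound to the content."""
    claim = {
        "creator": creator,
        "tool": tool,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature, then check the content is unmodified."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest itself was tampered with
    return manifest["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()

image = b"\x89PNG...fake image bytes"
m = make_manifest(image, creator="Studio A", tool="Firefly")
assert verify_manifest(image, m)             # untouched content verifies
assert not verify_manifest(image + b"x", m)  # any edit breaks verification
```

The design point this mirrors is that provenance travels with the file and any modification, to the media or to the manifest, is detectable by anyone holding the verification key, which is what makes the standard useful for "anyone can verify" claims.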
One size doesn’t fit all – we need a ‘bring your own AI’ approach, with orchestration across all AI layers and people as the most critical stakeholder.
Challenges the notion of a single, monolithic AI governance model, emphasizing flexibility, modularity, and the human factor in responsible AI deployment.
Shifted the conversation from generic principles to practical implementation strategies, prompting Prativa to discuss product‑specific safeguards and Satya to illustrate how Air India balances flexibility with safety.
Speaker: Amol Deshpande
Our AI governance philosophy is ART – Accountability, Responsibility, Transparency – and we embed it into every product through hundreds of validation steps.
Provides a memorable framework (ART) that simplifies complex governance concepts and demonstrates how Adobe operationalises them, making the abstract tangible.
Reinforced Andy’s provable practice theme, gave the panel a concrete example (Firefly’s nutrition labels), and encouraged other speakers to share analogous mechanisms in their domains.
Speaker: Prativa Mohapatra
Our generative AI virtual assistant has handled 13.5 million queries with a 97 % autonomous success rate, and we even use generative AI to monitor its own performance for safety.
Offers a real‑world, high‑scale case study that illustrates both the benefits and the safety challenges of AI, and introduces the novel idea of AI‑in‑the‑loop monitoring.
Moved the discussion from theory to operational reality, prompting follow‑up questions about risk management, prompting Vishal to discuss transparency in payments, and reinforcing the need for robust guardrails.
Speaker: Dr. Satya Ramaswamy
We built a small language model that can explain why a transaction was declined, giving customers transparent reasons while keeping false‑positive rates low.
Shows how transparency can be delivered at massive scale in a critical financial context, linking technical design (explainability) with consumer trust.
Introduced the payments perspective, expanding the conversation beyond media to financial services, and highlighted the practical trade‑offs between accuracy and user experience.
Speaker: Vishal Anand Kanvaty
Responsibility is no longer a compliance checklist; it is a commitment to shared human values – we choose what we create, not just what we can create.
Elevates the discussion to a philosophical level, reminding participants that ethical intent underpins technical measures, and framing responsible AI as a value‑driven choice.
Served as a concluding synthesis, reinforcing earlier points about standards, governance, and human‑centric design, and set the tone for future collaborative actions beyond the session.
Speaker: Sarika Guliani
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that moved the dialogue from abstract principles to concrete, measurable practices. Andy Parsons’ framing of 2026 as the deadline for provable responsible AI and his introduction of the C2PA standard set the agenda, prompting panelists to showcase how their organisations translate those ideas into product‑level safeguards (Prativa’s ART framework, Satya’s airline AI assistant, Vishal’s transparent payment explanations). Amol’s ‘bring your own AI’ and emphasis on people added nuance, steering the conversation toward flexible, human‑centric governance. Each of these insights sparked new sub‑topics—standards, auditability, scalability, and the balance between regulation and innovation—thereby deepening the analysis and shaping a cohesive narrative that blended technical solutions with ethical imperatives.

Follow-up Questions
What are the implementation costs and day‑to‑day operational expenses of adopting responsible AI practices?
Understanding financial implications is crucial for enterprises to plan and justify responsible AI investments.
Speaker: Andy Parsons
How can organizations demonstrably prove that their AI systems are responsible and compliant?
A measurable, auditable proof of responsibility is needed to move from principles to provable practice.
Speaker: Andy Parsons
How can consumer awareness of content‑provenance symbols (e.g., C2PA badge) be increased, and what UI designs are most effective?
Early consumer awareness is limited; effective UI can drive trust and adoption of provenance standards.
Speaker: Andy Parsons
What business case can be built for content provenance to make it financially compelling for enterprises?
Enterprises need clear ROI or value‑proposition arguments to invest in provenance infrastructure.
Speaker: Andy Parsons
How can standards adoption be improved given that many social‑media platforms strip metadata and provenance information?
Metadata stripping undermines transparency; research is needed on platform policies and technical solutions.
Speaker: Andy Parsons
What approaches allow embedding safety controls (the “safety knob”) in generative AI without degrading user experience?
Balancing safety with convenience is critical for customer‑facing AI services like virtual assistants.
Speaker: Dr. Satya Ramaswamy
How can prompt‑firewall and centralized control mechanisms be standardized across industries?
Standardized prompt controls could help prevent jailbreaks and misuse, but industry‑wide norms are lacking.
Speaker: Dr. Satya Ramaswamy
How can responsible‑AI frameworks be made accessible and affordable for MSMEs?
SMEs lack resources for extensive governance; scalable, low‑cost frameworks are needed to avoid a divide.
Speaker: Prativa Mohapatra
What role should industry bodies play in disseminating responsible‑AI templates and best practices to diverse sectors?
Industry bodies can cascade standards, but mechanisms for effective knowledge transfer require study.
Speaker: Amol Deshpande
How can global best practices (EU AI Act, UNESCO, OECD, etc.) be harmonized with India’s emerging regulatory landscape (DPDP Act, IT rules, etc.)?
Alignment is needed to avoid conflicting obligations and to create a coherent national AI governance model.
Speaker: Shantheri Mallaya
Is industry‑led governance realistically possible for AI at scale, or is regulatory intervention inevitable?
Determining the balance between self‑regulation and mandatory rules is essential for sustainable AI ecosystems.
Speaker: Vishal Anand Kanvaty
What metrics and governance models ensure fairness, accountability, and transparency in AI‑driven fraud detection for payment systems?
Payments require precise, unbiased AI; research is needed on appropriate performance and fairness metrics.
Speaker: Vishal Anand Kanvaty
How can AI transparency be integrated into legacy systems across sectors such as aviation, payments, and creative tools?
Legacy environments pose technical challenges for embedding provenance and auditability.
Speaker: Multiple (Andy Parsons, Dr. Satya Ramaswamy, Prativa Mohapatra)
What impact does the lack of consumer‑facing provenance symbols have on trust, and how can this impact be measured?
Empirical evidence is needed to justify investments in visible provenance cues.
Speaker: Andy Parsons
What barriers exist to global adoption of open standards like C2PA, and how can they be overcome?
Understanding technical, legal, and market obstacles is key to widespread standard uptake.
Speaker: Andy Parsons
How can AI governance frameworks be tailored for sector‑specific needs while maintaining interoperability?
Sector diversity requires flexible yet compatible governance models.
Speaker: Amol Deshpande
What are the implications of AI‑generated misinformation in a multilingual, culturally diverse market like India?
Misinformation risk is amplified by language and cultural variety; targeted research is needed.
Speaker: Andy Parsons
How can legal and compliance teams be upskilled efficiently to handle AI governance responsibilities?
Rapid skill development is essential for enterprises to meet emerging AI regulations.
Speaker: Prativa Mohapatra
What is the optimal balance between AI automation and human‑in‑the‑loop oversight for safety‑critical domains?
Ensuring safety while leveraging AI efficiency requires clear guidelines for human intervention.
Speaker: Dr. Satya Ramaswamy
How can the effectiveness of AI transparency measures be evaluated empirically across different industries?
Metrics and studies are needed to assess whether transparency initiatives actually build trust and reduce risk.
Speaker: General (multiple participants)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Who Watches the Watchers Building Trust in AI Governance


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel, introduced by Gregory C. Allen, featured Stephen Clare, co-lead author of the International AI Safety Report, Hiroki Hibuka, a Japanese AI policy expert, and Shana Mansbach of the think-tank Fathom, which convenes AI governance discussions [1-3][4-8][9-10]. Clare explained that the report, originating from the 2023 Bletchley Safety Summit, is meant to be an IPCC-style evidence base for AI governance and is backed by more than 30 countries and intergovernmental bodies [19-22].


He noted that many risks have moved from theoretical to observable, with a billion users worldwide and incidents such as deepfakes and AI-enabled cyber attacks driving growing demand for effective risk-management techniques [25-31]. Clare highlighted that model jailbreaks have become substantially harder, citing the UK AI Security Institute’s shift from minutes to several hours to find universal jailbreaks for the latest models [42-45]. Nevertheless, he warned that safeguards remain vulnerable to skilled actors, that implementation is uneven across companies, and that ensuring broad compliance is now a pressing governance challenge [51-57][58].


Hiroki contrasted hard-law and soft-law strategies, arguing that most jurisdictions already have sector-specific regulations (privacy, copyright, finance, etc.) and the key question is how to adapt them rather than create entirely new AI statutes [82-86]. He described the EU’s AI Act versus Japan’s and the US’s more sector-specific approaches, contrasting ex-ante rules with ex-post enforcement, and noting Japan’s preference for pre-emptive rules and the need for more agile, multi-stakeholder soft-law mechanisms [87-95][96-100]. He emphasized the difficulty of evaluating values such as privacy or fairness and the lack of benchmark standards worldwide [98-100].


Mansbach argued that the rapid rise in AI capabilities has created a systemic trust deficit for the public, deployers, regulators and developers, which traditional command-and-control governance cannot address because of speed and technical-capacity gaps [105-113][114-118]. Fathom proposes a government-authorized marketplace of independent verification organizations (IVOs) that would assess outcomes such as child safety, data privacy, controllability and interpretability, providing a rebuttable presumption of a heightened standard of care [116-122][124-128][173-179]. She identified liability clarity, insurance eligibility and market advantage as three incentives for entities to seek verification, likening the model to UL (Underwriters Laboratories) certification [221-230][231-239].


Gregory highlighted that without insurance or liability frameworks AI adoption could be stifled, and that analogies such as AS9100 in aerospace or the NHTSA’s star-rating system illustrate how third-party standards can drive safety [207-214][330-334]. The panel agreed that current evaluation tools are narrow and quickly become outdated, underscoring the urgency of developing flexible, outcome-based standards and independent audits to keep pace with evolving AI systems [258-266][270-276][292-298]. Overall, they concluded that a layered, outcomes-focused verification ecosystem, supported by legal, insurance and market incentives, is essential to bridge the trust gap and enable effective AI governance [171-179][221-230][292-298].


Keypoints


Major discussion points


The International AI Safety Report as the new baseline for AI governance – The panel repeatedly cites the report as the “foundation” for current conversations, noting that AI risks have moved from theoretical to observable real-world impacts (e.g., deep-fakes, cyber-attacks) and that technical safeguards are becoming harder to bypass, yet still have vulnerabilities that raise urgent governance questions. [2-4][24-31][33-41][50-58]


Divergent global regulatory approaches – Participants compare the EU’s hard-law AI Act with Japan’s sector-specific, pre-emptive soft-law model and the United States’ high-level, principle-based regime, emphasizing that the real issue is how existing laws (privacy, copyright, sector regulations) are updated or supplemented rather than whether new AI-specific statutes are needed. [80-88][89-96]


The “trust problem” and the proposal of independent verification organizations (IVOs) – A central theme is the lack of trust for the public, deployers, regulators, and developers. The panel proposes a government-authorized marketplace of IVOs that issue outcomes-based certifications, which can clarify standards of care, unlock insurance, and create market incentives (e.g., “seal of approval” similar to UL). [106-112][117-124][125-130][171-178][221-230][231-239]


Practical challenges of auditing and evaluation – Audits are costly, lack clear economic incentives, and suffer from an “evaluation gap” because existing benchmarks are narrow and quickly become outdated. The discussion highlights the need for adaptable, incentive-aligned testing frameworks and more transparent, third-party evaluation capacity. [187-192][197-199][255-268][270-284]


Layered responsibility across the AI ecosystem – Rather than assigning safety to a single actor, the speakers argue for a “defense-in-depth” model that distributes duties among developers, downstream deployers, ecosystem monitors, and end users, mirroring analogies to automotive and aerospace safety standards. [155-162][158-166][161-168]


Overall purpose / goal of the discussion


The panel’s aim was to take stock of where AI governance stands in 2026, using the International AI Safety Report as a common reference point, to compare how different jurisdictions are handling regulation, and to explore innovative governance mechanisms, particularly independent, outcomes-based verification, that can bridge the trust gap, align incentives, and support effective, scalable oversight of rapidly advancing AI systems.


Overall tone


The conversation began with a celebratory, appreciative tone toward the report and the progress made since the Bletchley Summit. As the dialogue progressed, the tone shifted to a more urgent and problem-focused stance, highlighting gaps in technical safeguards, regulatory inconsistencies, and incentive misalignments. By the end, the tone became constructive and forward-looking, emphasizing collaborative solutions (IVOs, market incentives, analogies to other safety regimes) while maintaining a realistic acknowledgment of the challenges ahead.


Speakers

Gregory C. Allen


Area of expertise: AI governance, policy discussion moderation


Role/Title: Moderator/Host of the panel discussion [S4]


Stephen Clare


Area of expertise: AI safety, technical risk management, AI governance


Role/Title: Co-lead author and writer of the International AI Safety Report [S3]


Hiroki Hibuka


Area of expertise: AI policy, law, and governance, especially in Japan


Role/Title: Research Professor, Kyoto University Graduate School of Law; former Japanese government policymaker; non-resident senior associate at CSIS [S1]


Shana Mansbach


Area of expertise: AI governance, independent verification, policy innovation


Role/Title: Vice President of Strategy and Communications, Fathom [S5]


Additional speakers:


Karina Prunkle – Co-lead writer of the International AI Safety Report (mentioned in the discussion).


Full session report: Comprehensive analysis and detailed insights

Gregory C. Allen opened the session by introducing the three panelists and describing Stephen Clare’s International AI Safety Report as the “foundation” for AI-governance discussions in the coming year [1-4]. He also highlighted Hiroki Hibuka’s expertise on Japanese AI policy [5-8] and mentioned Shana Mansbach’s role at the young think-tank Fathom, a leading convenor of the ASHFE conference series [9-10].


Stephen Clare then outlined the origins and purpose of the International AI Safety Report. Commissioned at the 2023 Bletchley Safety Summit as a shared evidence base and modelled on IPCC reports, the document is backed by more than thirty countries and intergovernmental organisations [18-22]. Its 2026 message is that “the rubber is really hitting the road”: risks once theoretical are now observable at scale, with a billion users worldwide and concrete harms such as deep-fake proliferation and AI-enabled cyber-attacks [24-31]. Clare reported that technical safeguards have improved markedly: modern models now require seven to ten hours for a universal jailbreak, compared with minutes for earlier systems [42-45], and twelve leading AI developers publish frontier safety frameworks, indicating greater transparency [48-49]. He cautioned, however, that safeguards remain vulnerable to skilled actors, implementation is uneven, and the key governance challenge is ensuring broad compliance and addressing non-adoption [51-58].


Hiroki Hibuka provided a comparative overview of global regulatory approaches. He emphasized that all jurisdictions already contain a mix of hard-law and soft-law instruments (privacy, copyright, sector-specific rules) [80-86] and argued that the policy task is to update these existing rules rather than create brand-new AI statutes. He contrasted the EU’s AI Act (hard-law, high-risk-focused) with Japan’s pre-emptive, sector-specific soft-law approach and the United States’ ex-post, principle-based regime that relies on high-level guidelines and post-hoc litigation [87-96]. Hibuka noted the difficulty of evaluating abstract values such as privacy, transparency and fairness, pointing to the current lack of benchmark standards worldwide [98-100]. He further observed that democratic debate is needed to decide acceptable safety levels (e.g., how many deaths are tolerable for autonomous vehicles) and that test-measure design, such as comparing accident rates on a straight highway versus a complex city, is itself a policy question [300-310]. Hibuka also highlighted public procurement as a powerful market pull: governments could require verified AI in contracts, creating a strong incentive for firms to seek certification [300-310].


Gregory then asked Shana Mansbach to explain Fathom’s perspective on the emerging “trust problem”. She described how the surge in model capabilities has generated uncertainty for the public, deployers, regulators and developers, producing a systemic lack of confidence that AI systems work safely, securely and as advertised [105-108]. She argued that traditional command-and-control governance cannot keep pace with AI’s speed or the scarcity of technical expertise outside frontier labs [111-114].


Mansbach proposed an outcomes-based marketplace of government-authorised independent verification organisations (IVOs). Regulators would define desired outcomes, such as child safety, data privacy, controllability and interpretability, and IVOs would conduct up-to-date testing to certify that AI systems meet those outcomes [117-122]. She discussed the concept of a “standard of care” that verification could establish, providing a rebuttable presumption of heightened care and clarifying liability before any harm occurs [173-179]. Mansbach identified three primary incentives for organisations to seek verification: (i) liability clarity, (ii) eligibility for insurance coverage (insurers are currently refusing to underwrite AI-enabled products), and (iii) a market advantage akin to UL (Underwriters Laboratories) seals, which could become decisive for buyers such as school superintendents [221-230][231-239]. She qualified these analogues as partial rather than perfect matches to existing safety-certification models [231-239].


Gregory linked these ideas to existing safety-standard mechanisms, noting that in aerospace the AS9100 certification is required for insurance and that insurers’ refusal to cover AI-driven activities could act as a de-facto regulatory lever [207-214][240-250]. He also drew an analogy to the U.S. National Highway Traffic Safety Administration’s star-rating system for vehicles, suggesting a similar rating could guide AI-system adoption [330-334].


Stephen elaborated on a “layered, defence-in-depth” responsibility model. He argued that no single actor can bear full responsibility: developers should embed training techniques to reduce dangerous outputs, downstream deployers should implement monitoring and classification systems, and ecosystem-wide monitors should track AI-generated content across borders. He stressed the need for societal-level resilience, hardening digital infrastructure against AI-enhanced cyber-attacks, rather than attempting to prevent every harmful use [155-168].


The panel then examined incentives for independent audits. Hibuka argued that without clear economic benefits corporate executives are unlikely to pursue verification, citing autonomous-vehicle certification as a strong market driver [187-192]. He reiterated that public procurement could provide a powerful pull if governments required verified AI for contracts [300-310], and noted that insurance could serve as another carrot, though the current lack of AI-specific coverage limits this lever [197-199][318-328]. Stephen highlighted a significant “evaluation gap”: existing benchmarks are narrow, quickly become outdated, and fail to capture the breadth of real-world use cases, as many evaluations consist of static question sets that do not reflect the stochastic, multi-turn nature of modern models [255-267]. Shana agreed, adding that testing is intrinsically hard because model outputs vary across runs and downstream impacts can differ dramatically between users (e.g., a harmful suggestion that may be benign for most but catastrophic for a vulnerable individual) [270-277]. She argued that a competitive IVO marketplace would incentivise continual improvement of testing tools, creating a “race to the top” similar to how UL certification drives product safety in other sectors [285-290].
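The statistical point behind the “evaluation gap” can be illustrated with a minimal sketch. The Python snippet below uses a stand-in stochastic “model” (an invented function, not any real system) to show why a single-run benchmark score is uninformative for systems whose outputs vary across runs, and why repeated sampling is needed to estimate a true pass rate.

```python
# Toy illustration of the "evaluation gap" for stochastic models:
# a single-run benchmark score can mislead when outputs vary across runs.
import random

def stochastic_model(prompt: str, rng: random.Random) -> str:
    # Stand-in for a real model: responds safely only 70% of the time.
    return "safe" if rng.random() < 0.7 else "unsafe"

def evaluate(n_runs: int, seed: int = 0) -> float:
    """Fraction of runs that produce a safe response."""
    rng = random.Random(seed)
    passes = sum(stochastic_model("test prompt", rng) == "safe"
                 for _ in range(n_runs))
    return passes / n_runs

single_run_score = evaluate(1)     # one run: score is either 0.0 or 1.0
many_run_score = evaluate(5000)    # many runs: converges near the true 0.7 rate
print(single_run_score, many_run_score)
```

A static, single-pass question set corresponds to `evaluate(1)` per item: it collapses a 70%-safe behaviour into an all-or-nothing verdict, which is one reason the panelists called for repeated, adaptive testing regimes.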


Gregory asked how consensus on risks could be turned into formal standards. Stephen responded that while the report provides a state-of-the-science baseline, there is still a lack of agreed-upon best practices, and any standards would need to evolve rapidly to keep pace with model capabilities [292-298][255-267].


Across the panel, the participants repeatedly referred to the International AI Safety Report as a foundational baseline for current AI-governance discussions [2-3][19-23]. They agreed that technical safeguards have improved yet remain vulnerable and unevenly applied [35-40][51-57]; organisational safety frameworks are inconsistent, creating a need for outcomes-based verification [48-57][111-130]; and insurance can serve as a powerful lever to drive adoption of verification standards [221-231][244-250]. Disagreements centred on the primary economic incentive for audits (public procurement versus insurance versus market pressure) [187-195][318-328][221-238] and on whether existing hard- and soft-law regimes are sufficient or new governance mechanisms are required [80-86][48-57][62-65].


Key take-aways


1. The International AI Safety Report is a foundational baseline confirming that AI risks are now material.


2. Technical safeguards are stronger but remain vulnerable and unevenly applied.


3. Global regulatory approaches differ, yet all must adapt existing hard- and soft-law rules to cover AI.


4. A trust deficit exists across stakeholders; an outcomes-based IVO marketplace could mitigate it by providing liability clarity, insurance eligibility, and market advantage.


5. Safety responsibility must be layered across developers, deployers and societal monitors.


6. Incentives such as insurance underwriting, public procurement and consumer-facing seals are essential to motivate audits.


7. Current evaluation benchmarks are narrow and outdated, necessitating dynamic, multi-turn testing tools.


8. Lessons from aerospace (AS9100), automotive safety ratings and UL certification can inform AI-safety standards.


Proposed actions


a. Establish a government-authorised IVO marketplace.


b. Encourage regulators and insurers to tie compliance with IVO verification to liability standards, insurance premiums and procurement contracts.


c. Develop sector-specific safety standards that combine hard law, soft law and voluntary frameworks.


d. Increase transparency from AI labs to reduce information asymmetry.


Unresolved issues include designing economically viable incentives, defining a universal standard of care, creating up-to-date evaluation methodologies that capture stochastic, multi-turn risks, and ensuring third-party auditors retain expertise as technology evolves. The panel suggested a hybrid approach that blends layered responsibility, flexible outcomes-based standards and market-driven incentives to achieve scalable, trustworthy AI governance.


Session transcript: Complete transcript of the session
Gregory C. Allen

Again, to my immediate right, we have Stephen Clare, who wrote the International AI Safety Report as the co-lead author, if I’m not mistaken. And he earned that applause, because that report is a remarkable document that I do think is the foundation upon which all conversations about AI governance now must rest for the next year. It’s the sort of minimum amount of knowledge that you must have to participate in the conversation, which I think is really a tribute to him. Then we have Hiroki Hibuka, who is currently a research professor at the Kyoto University Graduate School of Law, and was also deeply involved in drafting Japan’s first set of soft law regulations, and is an expert on all things AI, but also especially astute at what’s going on in Japan.

We also have a privilege of collaborating with him at CSIS, where he’s a non-resident senior associate. And I must say, he is probably the best person writing about Japanese AI policy in Japanese, but he is definitely the best person writing about it in English. And so I often tell Hiroki that, like, if he doesn’t write about it, nobody in Washington, D.C. knows about it. So it’s important, his work. And then finally, we have Shana Mansbach, who’s the vice president of strategy and communications at Fathom, which is a young think tank, started only two years ago, but has already succeeded as one of the best conveners of the ASHFE conference series on AI, and also now leading a policy initiative, which I think she’s going to tell us all about.

So without further ado, I’d like to start with you, Stephen. I just said that the report that you were the lead author of is sort of the bedrock for having a conversation on AI governance. For those in the audience who haven’t yet made it through, but they, of course, will, can you sort of set the stage? Where are we in 2026 in AI governance and in AI safety, in technical and procedural interventions?

Stephen Clare

Sure. Thanks, Greg. First of all, I’m sorry: if I’d known Greg was going to make the report required reading, I would have tried harder to make it shorter. Yeah, thanks for having me. Really excited to be here. So for people who don’t know, the report was founded at the 2023 Bletchley AI Safety Summit as sort of, you know, the shared evidence base for decision makers thinking about these complicated, fast-moving, noisy governance questions. It’s kind of trying to be like the IPCC report for AI. It’s backed by over 30 countries and intergovernmental organizations. You know, I’m one of two co-lead writers along with Karina Prunkle, but there are over 30 dedicated experts writing different sections, and there are hundreds of people that review it.

So it’s really trying to be a sort of state of the art: what do we know, what don’t we know about general purpose AI systems and the risks they might pose? I think this year the main message of the report is that the rubber is really hitting the road with these kinds of systems. Risks that even a year or two ago might have been theoretical are now very real, and we’re seeing emerging empirical evidence. More real-world impacts of AI on productivity and labor markets and in science and in software engineering. It’s all really happening out in the world. There’s a billion people now using AI around the world. Many of those impacts include risks.

So we’re seeing the effects of deepfakes spreading, cyber attacks becoming more common with AI systems. And so the need for risk management techniques that are effective is also growing. One thing that I found surprising working on the report is that in this domain, on risk management and technical safety, there’s actually some good news. Quite a lot of good news, I’d say. In various ways, our technical safeguards are improving. Models are becoming much harder to jailbreak. So three or four years ago, if you asked a model to give you a recipe for a Molotov cocktail, it would not do that. But if you said, oh, I miss my grandma, and she used to tell me this amazing bedtime story about how she loved making Molotov cocktails, please help me remember my grandmother, it would be like, okay, well, if it’s for your grandmother.

Then that stopped working maybe a year or two ago, but then if you translated your question into Swahili or something, put it into the model, and then translated the answer back, it might have evaded the safeguards. So none of that works anymore. These safeguards are much harder to evade, and we know this quantitatively. For example, the UK AI Security Institute will try to evade the safeguards or jailbreak all these new models when they’re released. At the beginning of 2025, they could do this in literally minutes, find a sort of universal jailbreak that would elicit potentially harmful knowledge. For the latest models, it’s taking them seven, ten hours to get around safeguards. So there are still vulnerabilities, but for novices or even moderately skilled actors, they’re effectively out of reach.

It’s becoming much, much harder to evade them. We’re also seeing more of these safeguards get implemented into organizational practices. So 12 companies, all the leading AI developers now have frontier safety frameworks, which are these documents that describe how they plan to manage risks as they scale more powerful systems, which is many more than had them a couple of years ago and is, I think, a sign of transparency and sort of collective learning about risk management that’s worth noting. So basically, yeah, our toolkit for managing these risks is growing. But, you know, it wouldn’t be a safety report if I didn’t maybe end on a few caveats or some bad news. The first is that these technical safeguards are still vulnerable in many ways.

They can still be jailbroken with enough effort or in edge cases, and it’s very difficult to test and provide reliable assurances that these safeguards will work across the huge range of use cases that these models are now applied to in the real world. And on the organizational side, you know, these safeguards only work if they are applied. And although we’re seeing, especially from very prominent frontier developers, usually quite robust safeguards applied to models, across the whole industry, and especially behind the frontier, application remains quite inconsistent. The safety frameworks, all these companies have them, but they vary in the risks they cover, they vary in the practices that they recommend. And so, across the landscape as a whole, these tools only work if they are applied.

And we still see some vulnerabilities across the landscape, which I think turns this technical challenge into a governance challenge: how do we ensure broader adoption, how do we ensure compliance, what do we do when there’s a lack of compliance? We’re facing these questions now, and because these risks and impacts are no longer something we can push down the road, I think the governance questions are becoming a lot more urgent for the years ahead.

Gregory C. Allen

Terrific. And if I could contrast what you said with what we might have said if we were having this conversation back at the Bletchley Park AI Summit: there, it was almost like the only good news on AI safety, AI security, and AI governance was, well, at least we’re all here talking about it. And now, three years later, the good news is we’ve done a lot about it. We have techniques that can provide demonstrable increases in safety. We don’t know everything that we need to know, but we know a lot of stuff that does work. And really, a lot of the challenge, I think, as the report says, is now in the hands of policymakers to make sure that these safeguards get implemented robustly and broadly.

So with that, I now want to turn to Hiroki, who I hope can give us a state of where we are in the story of AI governance around the world. If the next steps are really in the hands of policymakers, where are we globally?

Hiroki Hibuka

Thank you, Greg. And again, congratulations, Stephen, on the publication of this great report. I think, first of all, I feel very glad that the discussion on AI governance is now so much more advanced compared to three years ago. I’m a lawyer and a former policymaker. I worked for the Japanese government for four years, designing Japanese AI policies, mainly in terms of regulation and governance. And as a lawyer and policymaker, the question after reading the report is: where is the end? To what extent do stakeholders have to manage the risks? Because in the end, you can’t remove all the risks. AI is a black box and the technology advances so fast. And even though there is advance and progress on guardrails, the next day you may find another risk.

So there is no end to the story of how regulators should design the regulations. That is the main question all countries are facing, and different nations and regions take different approaches. Maybe the most famous regulation is the EU AI Act. And in that context, a lot of people say, hey, the EU takes a hard-law regulatory approach on AI while Japan or the UK or the United States takes a soft-law approach. But I think that’s a completely wrong understanding of the regulatory framework because, as you know, there are already lots of regulations that can be applied to AI systems. Privacy protection laws, copyright laws, or sector-specific laws such as finance, automotive or healthcare. We already have a lot of regulations out there.

So the real question is not whether or not to regulate AI; the real question is how to update our existing regulations and whether or not we need additional regulations targeting AI systems, in addition to the existing regulatory framework. So in that sense, all countries take the hard-law approach, and all countries also have soft laws. In the European Union, there are a lot of technical standards to implement the EU AI Act that are now under discussion, but anyway, all countries have both hard laws and soft laws. That is the start of the discussion. And then, when we compare the EU approach and the Japan approach, the clear difference is whether to regulate AI holistically or sector-specific. And when I compare the Japanese policy and the US policy, we are in the same position as to taking sector-specific regulation. The main difference, I understand, is whether you prioritize the ex ante approach or the ex post approach. The US takes a more ex post approach: you can do whatever you want to do, and the regulation is usually very high-level, the principles are very high-level. But once you have a problem, if you damage others’ property or lives, then you go to the court and you fight in the court.

Japanese society is not like that. In Japan, actually, the number of lawsuits is very low. People prefer to set the rules in advance. Japanese companies are very, very good at complying with the given rules. But they are not very good at creating their own governance mechanisms or explaining to stakeholders why they are doing what they are doing. And now Japanese stakeholders are starting to realize that it doesn’t work. So we need to have a more agile and multi-stakeholder approach. So we are trying to leverage the power of soft laws, negotiating among different stakeholders, and giving standards and guidance. But in the end, again, if you violate the existing hard laws, of course you will be sanctioned. So those are the main differences between the American and Japanese approaches.

And in the end, all countries are facing the difficult question of how to deal with these cutting-edge technologies that are black boxes and carry unlimited risk scenarios. And sometimes we don’t know how to evaluate values such as privacy or transparency or fairness. There have been no clear benchmark standards so far in society. So how to design those benchmarks and regulation methods is the challenge all countries are facing.

Gregory C. Allen

Terrific, Hiroki. And Shana, I know you have a unique perspective on this because your organization is now proposing sort of additional models of AI governance that are not really reflected in existing law, whether in the United States or Europe or Japan or India. So walk us through what you see as the important work you’re doing now.

Shana Mansbach

Sure. My panelists have set me up very well to say this. So I think as the International AI Safety Report shows, the capabilities around these models are surging. And as the capabilities surge, so too does the uncertainty around the risks, by which I mean, do these systems work safely, securely, and as advertised? That uncertainty creates a trust problem, a trust problem for the public, which doesn’t have a way of figuring out what is actually safe, a trust problem for deployers, by which I mean hospital systems, retail, banks, who want to and indeed need to use these systems, but have no idea what they can actually trust. So there’s a trust problem for the regulators, too.

They don’t know how to confer not just trust, but earned trust. And I would say there’s a trust problem for the developers also, because if trust starts to decline, you’re going to see adoption decline as well, so this is something that developers should be focused on too. The current approach to tech governance is just not equipped to handle this trust problem very well. Traditional command-and-control governance says here are the rules, here are all the things you have to do, here are the procedures, here’s what compliance actually looks like.

There are a bunch of problems with this approach in the context of AI, but I’ll focus on two. First is the speed problem: AI moves really, really quickly, and even well-intentioned regulations are going to become outdated very, very quickly. And then there’s the technical capacity problem. Even with the rise of the AI safety institutes, which are doing amazing work, the talent, the expertise for understanding these systems and their risks is largely concentrated in the frontier labs, which of course leads some people to say, well, let’s just go to the frontier labs, they can regulate themselves. I don’t think I have to spend too much time explaining why there are problems with that approach, but it’s simple incentives. I think all of us know people in the labs who are doing amazing, amazing work; because of them, I sleep better at night. But the incentives are just not there. There are always going to be trade-offs between investing in safety testing and tooling and investing in development. So we’re going to have problems with self-regulation in terms of addressing that trust gap. So where does that leave us?

At Fathom, my organization, we’re very focused on coming up with new models that can solve this trust gap. So we’re very focused on independent verification, specifically a marketplace of independent verification organizations, by which I mean a government-authorized and overseen marketplace of independent verifiers which would be charged with creating testing and tooling to determine whether these AI systems are actually safe. The difference here is that this is an outcomes-based approach. Instead of, as I said, having procedures (here are the rules, here are all the things you need to do, here are all the boxes you must check to be certified as being good), you have an outcomes-based approach where you have a government saying, here are the things that we care about.

We care about children’s safety. We care about data privacy and protection. We care about controllability and interpretability. And then you have independent verifiers that can actually go out, do the testing, have updated testing constantly to make sure that those outcomes are being met. We think that independent verification solves for a couple of these deficits in the trust context. First, they are independent. The labs are not grading their own homework. Second, democratic accountability. You have governments that are creating outcomes instead of the industry doing it itself. Third, flexibility. Under this system, the IVOs, independent verification organizations, are constantly updating their testing and criteria to make sure that they’re keeping up with the pace of technology and the pace of risks as well.

And I think the fourth thing, which is pretty interesting, is it creates a race to the top here. Right now, the only people working on safety testing and tooling are in the labs. What we’re envisioning is a marketplace that incentivizes ever better testing and tooling here. I could talk about IVOs for days and days, but let me just end on one point. I was talking to Greg about this earlier, and Greg asked, are there analogous systems or industries or sectors that we could talk about? And I said, yeah, sort of. I mean, in America, we have Underwriters Lab. There’s LEED certification. There are some analogies. But the honest answer is there’s not a perfect analogy.

We have had the same regulatory system for the last century. And I think that with the rise of AI, we’re seeing that system is no longer fit for purpose. And when we try to use old systems, hard law, soft law, any of these things, we’re really struggling to make it work. So what I’m trying to do, what I’d encourage all of us to do, is to say, you know, we do need to think a little bit differently, because this is what this technology in this time calls for.

Gregory C. Allen

Well, that’s great. So there are a few points I want to pull together there. The first is, you know, as Hiroki pointed out, in the U.S. system, liability law looms extremely large, right? The lawsuits at the end of this story when things go wrong. And when you have, as, for example, ChatGPT does, 800 million weekly average users, something’s going to go wrong every week, right? And the question is: how is that going to intersect with our existing body of regulation? How is that going to intersect with liability law? The second thing is, because we’re talking about these general purpose technologies, this is going to be adopted in so many different sectors of the economy.

And right now, as Shana pointed out, the number of people who have, you know, Stephen’s expertise on what it takes to really make AI systems safe and well-governed and perform reliably as intended across the whole range of potential applications, there are not a lot of humans on planet Earth who are good at that stuff. And because these AI models are going to be deployed in just about every sector of the economy, we need some level of those capabilities in every sector of the economy. And so the question is, you know, if I am a financier, if I am a finance company, if I am a health care company, how am I going to know, and how are my consumers going to know,

that when they use AI-related capabilities, it’s going to work reliably as intended over the full range of acceptable use cases. And so, Stephen, I want to come to you and ask: when it comes to governance, when it comes to oversight and verification, how do you see the balance of responsibilities? What responsibilities need to fall upon the model developers, what responsibilities need to fall upon the users, and what responsibilities need to fall on independent third parties, whether that’s the government, whether that’s auditors, whether that’s this marketplace of verification that Shana is talking about? So what do you see as the balance of responsibilities, and how might this go wrong, how might this go right?

In 30 seconds or less.

Stephen Clare

I mean, I’m sure it’s kind of the boring but true answer: it depends, and it’ll vary a lot across use cases and sectors. I think it’s probably not fair or helpful or true to allocate responsibility to one actor or another; instead we need this layered approach of many different policies and practices at different parts of the stack. Because none of our approaches are foolproof, they all have vulnerabilities, and so, instead of safety by design, we have this safety-by-degree situation where we want defense in depth. So for developers, there will be training techniques that they can implement to make models less likely to divulge dangerous knowledge in the first place.

If there are people building on top of those models and then deploying them, there will be monitoring systems they can put in place and classifiers that identify dangerous queries and stop models from answering them. And then, probably for ecosystem monitoring bodies, which could be deployers but could also be other institutions in the world, there can be tracking of how AI content is spreading across borders and around the world. And then I think there’s this other aspect: we’re focusing a lot on sort of model or developer safety, but as we are moving into this world where many people around the world have access to powerful, helpful, intelligent technologies, we also just need to adapt to that reality and think about resilience at the societal level too. How do we adapt to the beneficial use cases and the various use cases that these models will be used for? So, thinking about hardening digital systems against increased cyber attacks, just sort of admitting the reality of the situation in many ways and adapting to it rather than trying to prevent all harmful uses in the first place. I think we need a variety of approaches across all these different actors.

Gregory C. Allen

Yeah. And just to use an analogy for how broad the group of stakeholders is, if you think about a ride hailing service, a taxi service like Uber, you have the automobile manufacturers who have to make sure that this is a solid car design that was manufactured safely and appropriately to specification. Then you have Uber, where in some countries Uber owns the car, and so they’re responsible for ensuring that it gets maintenance appropriately. And then you have the driver who’s responsible for ensuring that they are actually following the law and driving the car safely. And if you apply that analogy to AI, you have the model developer, then you might have the sort of business use case deployer, which could be a bank, a medical device company.

A financial institution, whoever. And then you finally have the end customer who’s receiving those services and making sure that they’re using them appropriately. And so if you think about that sort of different body of use cases, as I said before, the capabilities are not symmetric across all of those, but there are still obligations. And so, Shana, I want to come back to you and ask: this model that you’re proposing, what exactly does it mean for the different stakeholders in the ecosystem? How does their life change if we adopt the system that you’re in favor of?

Shana Mansbach

Yeah, I mean, the overarching answer is we create trust throughout the system, which is the missing piece here. I think there are a couple of pieces that I would pull out. You had mentioned liability earlier, and let me talk about that a little bit. What this system does not do is assign liability. It doesn’t say, you know, deployers, developer, it’s you, it’s you, it’s you. We’re seeing, at least in America, court cases move their way through the court system, and we’ll see where that ends up. But what is really missing is a standard of care, and this is, I think, one of the real advantages that this system has. So right now, at least how it works in our current tort system, is that if a Waymo kills someone, someone can sue, and then a judge and a jury have to figure out, and again, we’re not answering who should be sued, but let’s say that the family of someone who got hurt or killed is suing Waymo. What happens is that the jury has to decide whether the person who was sued did the right thing. And if you are not technical, that is the hardest thing; even if you are technical, and maybe even Waymo doesn’t know. So what this system would do is confer, if you are verified, a rebuttable presumption of having met a heightened standard of care.

So what we’re doing is clarifying and defining up front, before an actual harm happens, what a deployer or whoever is sued is actually supposed to do, instead of having this very, very messy system where someone after the fact has to figure out what went wrong and who’s responsible for that. I can talk about other layers of this, but I think the liability piece is really key. I mean, we just see this. I think it’s a reflection of the trust problem here: when you’re a deployer, I mean, God, everyone that I talk to, you know, again, hospital systems, retail, banks, anyone who needs to be consumer facing, is really worried about this problem.

I mean, when I get sued, what do I do? And maybe there’ll be a populist backlash and everyone will hate everyone who’s using AI systems. And it’s much better, ahead of something like that happening, to have that standard of care defined up front and have that seal of approval conferred.

Gregory C. Allen

And Hiroki, as you think about the different stakeholders in the system, and especially the idea of auditors, where now there are a number of organizations being founded, it seems like almost every day, who are proposing to provide external evaluation services that can help companies understand, as Shana has said, that this product or this service or this company meets the seal of approval and we vouch for it as an independent entity. What kind of momentum do you see for this independent assessment part of the story across regulatory frameworks?

Hiroki Hibuka

Independent evaluation is essential, given that we are all using AI systems in all different situations, from language models to healthcare systems to car driving. But it would not be easy to persuade corporate executives to use independent audits without clear economic incentives. For example, if you get the certification for autonomous driving, then you can sell the car to the big market; then, of course, you pay for the audit. But if you take this audit for this language model, then you can prove that this language model is relatively safer than the other models, but that doesn’t necessarily create enough incentive for model developers to conduct the audit or independent evaluation, because there are no clear financial incentives.

Gregory C. Allen

Actually, could I ask you to elaborate on that? So where might these financial incentives come from? You mentioned one, which is the regulators force you to do it. That’s one. Maybe insurance is another. Like, where might these incentives come from?

Hiroki Hibuka

I think it should start from the regulated areas such as cars, healthcare systems, finance systems or infrastructure, because everybody requires strong trust in those systems; if it doesn’t work well, then somebody might be killed, and that’s a big problem. And maybe you could say, hey, but in the end, if you are killed, you can be compensated. But it’s not the end of the story. Whereas if the damage could be compensated by money by the company, and stakeholders are okay with that, maybe companies would like to just run the system and compensate the victims. For example, if the language model says something discriminatory, the company can just say, hey, we’re very sorry, we’ll introduce better guardrails, and we’ll pay for that if you want compensation.

Gregory C. Allen

in terms of what is possible, what interventions work, what the risks are. But I want to ask about how we go from that degree of consensus to something that might be more of a standard around procedural implementation. You know, Shana’s term of art is standard of care, which matters a lot in the American legal system; I’m sure it matters a lot in other legal systems too, I’m just ignorant about how and where. And so I’m curious, what do you see as the gap? If these independent evaluators, these independent auditing organizations are emerging, how do they go from “we think we’re good at this” to “no, this is the accepted best practice”?

You know, we have accepted consensus on the risks and the interventions, but how do you turn that into a procedure? Just to give an example to the folks in the audience: I used to work at a rocket company, and the safety standard in the American aerospace industry is AS9100. And in the history of our company, there’s kind of a before-AS9100 moment, and then there’s an after-AS9100 moment. And everything changed for our company, you know, after we got that third-party audit evaluation. A lot of our customers just said, we do not sign checks for companies that are not AS9100 certified. So, you know, you are deeply steeped in where we are today on the consensus, but how far are we from converting that into standards and procedures for third-party evaluation?

Stephen Clare

Yeah. I’ll also say one follow-up to Hiroki’s point, too, about auditing. Not only is there sort of a lack of incentives to conduct audits voluntarily now, but there might even be disincentives. One is that it’s costly, and it slows you down, and there are very intense competitive pressures to release faster. And there’s also potentially, like, information or security risks to sharing. You spent hundreds of millions, maybe billions of dollars developing a model, and then you have to share it with an external party before deployment. Like, serious risks, or perceived risks, at least, to having that information leak. So I think, yeah, there are some serious challenges there.

Gregory C. Allen

I guess there’s one other potential part of the story, which is sometimes you see companies want to be willfully blind, right? If they have a report that says my product is not safe, well, now they know they’re going to lose the lawsuit. Whereas if they never commission the report, maybe they’ll win the lawsuit. So, Shana, what do you see as meaningful interventions that can help address this problem, both the cost side that Stephen mentioned and the other parts of the incentive structure?

Shana Mansbach

Yeah, let me make a couple of points. I mean, we’re talking about the cost of audits, and this is a big issue that we think about a lot. This system will not work if there’s a flat fee and everyone is paying a ton. There are many ways that a system can look unsuccessful, and one of those ways is if it is just protecting incumbents. We envision the system as something that works broadly: you could verify a general purpose LLM, you could also have narrow AI, you could have a tiny little tool, a little chatbot that is used in schools.

Those three different products should not be audited, not only at the same cost, but in the same way. I mean, compliance isn’t just the check that you’re writing; it is how much of a pain in the butt is it? How many lawyers do you need? How long will this take? So the great thing about this being a marketplace is that the system is right-sized to risk type and to the size of these products, instead of having just a one-size-fits-all, this-is-what-you-have-to-do-to-comply approach, because I think that is a real issue. Really quickly, I just want to go back to the question that you asked Hiroki about incentives. I mean, you can imagine a system where this is mandatory, and maybe in some areas you can imagine that, but I think that there are three real carrots for wanting to get verified. We talked a little bit about liability, so obviously the liability clarity is a big carrot. I think the insurance piece is real. Right now we are seeing the big insurers saying, we’re not going to touch this, we’re not going to insure any AI products, because we have no idea what’s inside of them. At least in America, the way that life insurance works is, if you want insurance, you have to jump on a scale and tell someone how healthy you are and what the things are that you do, and the insurer decides, okay, are you worthy of being insured, and at what premium. I think that’s actually a pretty direct analog for what we’re trying to do here, where the books are opened and an insurer doesn’t have to do the testing themselves; they can look at whether the system has been verified and say, okay, we will actually insure you, or we will insure you at a more affordable premium.

I think the third thing is just straight -up market competitive advantage. If I’m a school superintendent and I am choosing between two learning chatbots to put in my schools, I’m not going to choose the one that has not been verified. I want the one that has been verified, that is safest. Yes, because I’m worried about getting sued, but because I want my kids to be safe. And you can imagine a situation much like Underwriters Lab in the United States where basically all consumer products like light bulbs, toothbrushes, basic things that you buy in a store like Walmart, all have the UL seal of approval, and those are the ones that get sold in stores. They have a huge market advantage.

They pay a little bit, but not very much. And in exchange for doing that, they go to market, or compete in the market, in a way that the ones that don’t go through verification do not. I’m so sorry, Greg, you asked me an actual question and I just answered everyone else’s question and probably not my own.

Gregory C. Allen

It’s okay. You get a get-out-of-jail-free card because you mentioned insurance, which is something I’m deeply interested in right now. In the orbital launch vehicle example that I just mentioned, you can’t get insurance for space launches of satellites until you’re AS9100 certified, and 10% of the cost of getting a satellite into space is just the insurance on the rocket. So basically, companies that can’t get insurance can’t compete in the market. And as Shana mentioned, and I think this is a super undercovered story, many of the major insurers, in the United States at least, are now saying, for your enterprise risk policy, AI is not included. So if you are a major bank and you are doing big, important financial transactions, as soon as you start using AI, you’ve lost all your insurance.

And I think the Trump administration in the United States has a very light-touch regulatory approach. My concern there is that just because the government is not doing anything big and bold on regulation doesn’t mean there will be no regulation. The insurers will step in. And if the insurers exit the market, then, maybe not in legal terms, but in economic-outcome terms, that could be very similar to draconian regulation. So, Shana, you’re mentioning Underwriters Laboratories, an organization that writes standards relied upon by underwriters, the people who are issuing insurance. This is a huge part of the regulatory and governance ecosystem that I think is really important. And so now I’m hoping, Stephen, that you’re going to tell me that you’ve been reached out to by a bunch of insurance companies, and they’re all reading your report eagerly and thinking about this.

But maybe, maybe not. What’s the case?

Stephen Clare

Not yet, but it’s a really long report. 312 pages, but it goes like that. Maybe I can come back to the best-practices point a little bit, because I think we’re talking about auditing here, and there are a lot of steps involved, I’m sure, but at least at the technical level, the main tool we have right now to audit the capabilities of these AI models is evaluations. And although in my opening I sort of talked about how it’s great that we have this toolkit that’s emerging and strengthening, and that is true, on evaluations in particular the question is: let’s say we have auditors that are looking at these companies and these models, what are they actually looking at to audit or evaluate the models?

I think we actually have a big gap here, a big evaluation gap, in terms of how we are actually assessing. So if we’re moving towards best practices, not only do I think we don’t have a sense of the best practices right now, but if we did, they’d be different in a year, because the capabilities are moving too quickly for these technical tools to stay up to date for very long. For example, these evaluations often look like a set of questions related to a certain topic that you ask the model, so you have a bunch of questions about biosecurity or a bunch of questions about cybersecurity. And if it scores high enough on the test, you say, whoa, this is a dangerous capability, and we need to implement more safeguards or something.

And as far as what’s best practice or safe risk management for a company, we evaluate in terms of, well, do the safeguards seem to apply proportionately to the risk that you’ve assessed? But I think in many cases, the evaluations we’re using are already not super informative about real-world risk, because they’re too narrow. You have to build a set of questions that gives you some information about the vast range of use cases in the real world, and as models have become more capable and general and adopted more widely, this has become much more difficult. And I don’t think there are very many actors out there that are constantly thinking about new ways to evaluate the capabilities.

And so I think this is an important gap in our toolkit that is, again, quite urgent, because these models have been released and we’re using our current evaluations, which are already, in many cases, out of date and not super informative about real-world risk. Shana, do you want to jump in here?
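The threshold-style evaluation described above, a fixed question set per risk topic, a score, and a cutoff that triggers a "dangerous capability" flag, can be sketched in a few lines. This is purely an illustrative sketch: the function names, the toy question set, and the 0.5 cutoff are all invented here, not drawn from any real evaluation suite.

```python
# Purely illustrative sketch of the threshold-style capability evaluation
# described in the discussion. The function names, the toy question set,
# and the 0.5 cutoff are all invented for illustration.

def score_model(model_answers: dict, answer_key: dict) -> float:
    """Fraction of eval questions the model answered correctly, i.e. how much
    of the probed capability it demonstrated."""
    correct = sum(1 for qid, expected in answer_key.items()
                  if model_answers.get(qid) == expected)
    return correct / len(answer_key)

def flag_dangerous_capability(model_answers: dict, answer_key: dict,
                              threshold: float = 0.5) -> bool:
    """The crude decision rule the panel describes: demonstrate enough of the
    capability and the evaluator flags the model for extra safeguards."""
    return score_model(model_answers, answer_key) >= threshold

# Toy "biosecurity" question set: four questions, the model gets three right.
key = {"bio-1": "A", "bio-2": "C", "bio-3": "B", "bio-4": "D"}
answers = {"bio-1": "A", "bio-2": "C", "bio-3": "B", "bio-4": "A"}
print(score_model(answers, key))                # 0.75
print(flag_dangerous_capability(answers, key))  # True
```

The narrowness criticised in the discussion is visible even in this sketch: the flag depends entirely on which handful of questions made it into the answer key.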

Shana Mansbach

Yeah. Stephen, I agree with you so much. I mean, all of us are obsessed with benchmarks because that’s kind of all we have, and they’re just so narrow. I spend a lot of time with organizations that we think will become these IVOs, and testing is so, so hard. I mean, think about this. We have a fundamentally stochastic system, so I can ask the system something 10 times, and I’m going to get 10 different answers. So what does that mean in a safety context? Another problem that we have: what a model outputs is not the same thing as what someone does with it. Think about it in the context of mental health. Maybe the model says to 10 different people different versions of, I think you should kill yourself.

Nine times, maybe for nine of those users, that’s fine, they will laugh it off. But for one of those users, there’s going to be a real problem here. And there’s also the multi-turn nature of AI. I mean, you build relationships with these systems and you ask long queries, and the stuff just gets really complicated really quickly, as technical minds could explain far better than I could. So what we’re trying to do here is incentivize better testing, because right now the only people creating evals are eval organizations, who are doing God’s work, doing awesome stuff, but what does it mean to be the best evaluator out there? I mean, there’s not an incentive to go from good to the best.

And the other actors working here, of course, are the labs. And I think many of the labs are actually attempting to be responsible actors here, but again, there’s an incentive gap. I think the only way you’re going to solve this is to have an ecosystem where all of the actors are competing to have the best services, the best evaluations, the best feedback. And we hope one day one of these IVOs says, I’ve developed a new type of testing that figures out this kid-safety thing that no one has ever thought about. And then the next day someone says, well, we have to be better, because then everyone will want to be verified by that organization.

So you are incentivizing ever-better testing. And as Stephen says, given how quickly and dramatically the capabilities and the risks of these systems are increasing, we need really good testing and tooling that can keep up with that. And the only way to do that is to incentivize it.
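The stochasticity problem raised above, where the same prompt sampled ten times yields ten different answers so a safety evaluation must choose an aggregation rule, can be illustrated with a toy simulation. The 10% failure rate and every name below are invented for illustration; a real harness would replace `sample_response` with actual model calls.

```python
import random

# Toy simulation of the stochasticity problem: the same prompt, sampled
# repeatedly, yields different answers, so the evaluator must choose an
# aggregation rule. The 10% failure rate is invented for illustration;
# a real harness would replace sample_response with model calls.

def sample_response(rng: random.Random) -> str:
    """Stand-in for a stochastic model that is unsafe on ~10% of samples."""
    return "unsafe" if rng.random() < 0.10 else "safe"

def evaluate(n_samples: int, seed: int = 0) -> tuple[float, bool]:
    rng = random.Random(seed)
    outputs = [sample_response(rng) for _ in range(n_samples)]
    avg_unsafe = outputs.count("unsafe") / n_samples  # average-case view
    any_unsafe = "unsafe" in outputs                  # worst-case view
    return avg_unsafe, any_unsafe

avg, worst = evaluate(1000)
# Average-case aggregation makes the model look roughly 90% safe; worst-case
# aggregation still flags it, which matters when one harmful answer in ten
# is the one that reaches a vulnerable user.
print(f"average unsafe rate: {avg:.2f}, any unsafe sample: {worst}")
```

The design point is the contrast between the two aggregations: the mean hides exactly the tail cases the mental-health example is about.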

Gregory C. Allen

So, Stephen, if I could come to you about what Shana just said. You pointed out how the state of the art in evaluations and assessment is constantly shifting as the capabilities are shifting. I sometimes hear the frontier labs say, yes, and that’s why we’re the only ones who can do the testing, because we’re the ones out there on the frontier. But Shana is making this point about misaligned incentives, which I think we saw, in a conversation you and I had a couple weeks ago, in the xAI Grok undressing-children example: there are perverse incentives sometimes at work here when the companies evaluate themselves. So how do you reconcile that gap? The frontier AI labs often do have a unique perspective and a unique understanding, but it’s also really hard to see how we could ever be comfortable with them being the only ones assessing themselves.

Stephen Clare

Well, I can talk about it a bit in the context of the report, where we tried to work with everybody to get the state of the science across the whole landscape. And there, I think it is true that there’s this big information asymmetry between the people in the labs, who have both the most technical capacity and the most access to leading models, all of the information about testing and development, and all of the information about the technology that’s being used in the lab, and everyone else. If you don’t draw on that knowledge, you’re not going to be able to understand what’s actually going on in the AI world. But then I think we brought in a lot of perspectives from academia and society and government feedback to get a full perspective of the landscape. As far as what to do going forward to deal with this, I think it probably looks something like that: partnerships that aim to draw on that knowledge, but then aim for transparency and information sharing that gives third parties and external actors a better understanding of what’s actually going on. Because it’s true, even writing the report, we were reliant on these papers that labs will occasionally publish and drop, with very useful data on how people are using the models or adoption rates. We’re kind of reliant on these ad hoc publications, and that leaves a lot of gaps across the landscape and different risks. And so we constantly had the words uncertainty or unknowns in the report, because we lack that data outside of the labs.

Gregory C. Allen

And do you think that that’s likely to remain the case, or do you think that could change over time? We’ve seen, literally, the safety staff of some of these labs quit and start their own auditing companies. So are they likely to have their skills atrophy as they get farther from the development process, or do you think it’s credible that these third-party organizations can build, the word that comes to mind is economies of scale, that are relevant to be able to continue advancing the state of the art of safety and governance, even as the technology keeps evolving?

Stephen Clare

I’m not sure, but I think what we can do is look at the trend, and the trend is towards, I think, a stronger ecosystem around AI labs. As these problems of lack of data and lack of independent verification are identified more, there are more people working on them. And then I think we’ve seen some movement towards greater transparency with AI labs as well. Frontier safety frameworks are now a governance mechanism in the EU AI Act’s code of practice, and they’ve become institutionalized. It started as a voluntary thing: Anthropic simply published a responsible scaling policy. And so you see these movements towards sharing more information in more structured ways.

I think also, yesterday, there were the new commitments from the companies at the summit, which were related to sharing data about usage. So I think, as a broader set of actors in society pays attention to AI, because, again, we’re feeling the effects more clearly and it’s becoming more of an economic priority, we’ll see more demand from outside the labs to share this information, and maybe that will lead to some changes.

Gregory C. Allen

Hiroki, you’ve written a ton about AI, but in your capacity as a lawyer, you also have a lot of understanding of many different industries. Are there any lessons learned from other industries that have solved this sort of problem, where the technical expertise exists in one place but the need for independence sits elsewhere? What kind of precedents do you see that we can learn from?

Hiroki Hibuka

Okay, so before that, let me add one more incentive, which is public procurement. If the government says, we recognize this is a very important issue, and we will only procure an LLM or model that has been verified as safe against this standard, then it will be a big incentive for developers. So that is one thing. And when I try to answer your question, I think democratic debate is necessary as to what kind of risk level is acceptable and also what kind of test measures are good, because there is no single specific answer as to what is an acceptable level of risk.

For example, in Japan every year more than 2,000 people are killed by human-driven cars, and the question is, what kind of safety would we require for autonomous vehicles? Is it okay if the number killed is less than 2,000, or would we like to require more safety than human drivers? If so, what would be the level? There is no single answer to that kind of question, so we need to debate, in a democratic manner, what our acceptable goal is. And also the test measures. For example, we can simply compare accident rates per kilometer, but if you test on a very safe, straight highway, of course it’s easier to look safe.

Whereas if you try to drive in a pretty complex city, it’s going to be very difficult. So how to define the test method is another question. I won’t go into the details, but this kind of discussion has been done in a lot of industries: the car industry, the finance industry, the aerospace industry. We can certainly draw a lot of lessons learned from the existing frameworks.

Gregory C. Allen

Yeah, one analogy that you jogged my memory of as you were talking is the National Highway Traffic Safety Administration in the United States, which industry actually begged to have created in the 60s and 70s, because they said, look, all of us are going to claim that we have safe cars, but only some of us are making big investments in becoming safe, and we want the people making big safety investments to be rewarded for that good behavior. And so they created this new organization, which would give cars a safety rating from one to five stars. And so now the companies can only get a five-star rating if they’re actually doing what it takes to be safe.

And consumers, you know, they’re not always qualified to rip open their car’s engine and see what it looks like under the hood, what’s safe, but they can interpret that five-star rating. And so my idea was to ask you, Shana, to elaborate on this in the context of your model, but I’m now scared of the beeper, which is quite loud and scary. So please join me in thanking our terrific panel. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (12)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“The International AI Safety Report was drafted as the shared evidence base for the 2023 Bletchley Safety Summit and is backed by more than thirty countries and intergovernmental organisations.”

The AI Safety Summit was held at Bletchley Park with participation from China, the United States, the European Union and over 25 other nations, demonstrating broad multi-country backing, and the summit produced the Bletchley Declaration establishing a shared understanding of AI risks [S76] and [S77].

Confirmed (high)

“Risks once theoretical are now observable at scale, with concrete harms such as deep‑fake proliferation and AI‑enabled cyber‑attacks.”

Discussions of AI risk management explicitly cite the spread of deepfakes and the rise of AI-enabled cyber attacks as emerging threats [S1].

Additional Context (medium)

“The International AI Safety Report is modelled on IPCC reports.”

The IPCC is referenced as a successful example of an international, evidence-based report that creates a shared factual base for policy, providing context for why the AI Safety Report would adopt a similar structure [S75].

External Sources (86)
S1
Who Watches the Watchers Building Trust in AI Governance — – Hiroki Hibuka- Shana Mansbach – Stephen Clare- Hiroki Hibuka
S2
Lights, Camera, Deception? Sides of Generative AI | IGF 2023 WS #57 — Hiroki Habuka, Civil Society, Asia-Pacific Group
S3
Who Watches the Watchers Building Trust in AI Governance — – Stephen Clare- Hiroki Hibuka- Shana Mansbach – Stephen Clare- Shana Mansbach
S4
Who Watches the Watchers Building Trust in AI Governance — -Gregory C. Allen: Moderator/Host of the panel discussion
S5
Who Watches the Watchers Building Trust in AI Governance — – Hiroki Hibuka- Shana Mansbach – Shana Mansbach- Gregory C. Allen
S6
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S7
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S8
Global telecommunication and AI standards development for all — Bilel Jamoussi:Thank you, thank you LJ and good afternoon everyone. I’d like to invite a list of colleagues for a big an…
S9
Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477 — Policymakers need to support collaboration between different sectors The analysis underscores the critical need for cyb…
S10
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — Lastly, inclusive involvement of the technical community in the policy-making process is advocated. The technical commun…
S11
AI Safety at the Global Level Insights from Digital Ministers Of — So I think there’s that. I do think that it needs to be obviously multi -sector. It’s a fairly obvious point. How do you…
S12
Artificial intelligence (AI) – UN Security Council — During the9821st meetingof the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S13
Why science metters in global AI governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S14
AI Meets Cybersecurity Trust Governance &amp; Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S15
https://dig.watch/event/india-ai-impact-summit-2026/who-watches-the-watchers-building-trust-in-ai-governance — I mean, I’m sure it’s kind of the boring but true answer. It’s the boring part of it. depends and it’ll vary a lot acros…
S16
AI That Empowers Safety Growth and Social Inclusion in Action — Despite sophisticated frameworks and governance structures, significant implementation challenges remain. The gap betwee…
S17
Advancing Scientific AI with Safety Ethics and Responsibility — And also create more awareness about the main fundamental thing is that they will be expected to document whatever testi…
S18
Networking Session #232 Bringing Safety Communities Together a Fishbowl Style Event — Legal frameworks exist but enforcement remains problematic due to lack of understanding within judiciary and law enforce…
S19
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Current syllabuses are outdated and policy makers face pressure to update educational frameworks rapidly
S20
Main Session 2: The governance of artificial intelligence — Different sectors (financial services, agriculture, healthcare) require different regulatory approaches, but there’s a n…
S21
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived ina…
S22
Consumer data rights from Japan to the world | PART 1 | IGF 2023 — Minako Morita-Jaeger:Thank you, Javier. My PowerPoint. Yes, lovely. And then you can, yeah, and you can change it when I…
S23
Japan favours softer AI regulations — Japan is paving the way for a more lenient approach to AI regulation, as an official familiar with the deliberationsreve…
S24
AI as critical infrastructure for continuity in public services — “We shouldn’t be fixing things after the fact, but we should go on an input before the deployment.”[115]. “The second on…
S25
Open Forum #26 High-level review of AI governance from Inter-governmental P — Yoichi Iida: Thank you very much, Ambassador. You talked about a lot of various risks and challenges. In particular, y…
S26
WS #123 Responsible AI in Security Governance Risks and Innovation — These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, a…
S27
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Kyoko Yoshinaga:Thank you, Michael. Welcome to Japan. I’m Kyoko in Kyoto. Okay. So let me, first of all, give you a brie…
S28
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S29
Global AI Governance: Reimagining IGF’s Role &amp; Impact — Legal and regulatory | Privacy and data protection | Cybersecurity Regulatory approach – existing laws vs new framework…
S30
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Context-specific deployment focusing on appropriate use cases can unlock both productivity and trust simultaneously
S31
In brief — The scientific literature on the evaluation of humanitarian assistance is extensive. Approaches include the scient…
S32
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Kamesh Shekar:Thank you so much, Luca. And so, yeah, so I guess we have very few time to rush through the paper. But our…
S33
Opening — As new technologies emerge, there is a need to assess whether existing governance frameworks are sufficient or if new on…
S34
WS #134 Data governance for children: EdTech, NeuroTech and FinTech — Current laws and regulations may not provide sufficient coverage for emerging technologies like neurotechnology. Some co…
S35
New Technologies and the Impact on Human Rights — **Implementation Over Innovation**: There was consensus that established international frameworks provide adequate found…
S36
Global AI Policy Framework: International Cooperation and Historical Perspectives — Despite coming from different backgrounds (diplomatic/legal vs academic), both speakers advocate for patience and carefu…
S37
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — “But just thinking about closing the AI insurance divide, we released this paper, and in it we talk about around six cha…
S38
Who Watches the Watchers Building Trust in AI Governance — Actually, could I? I can ask you to elaborate on that. So where might these financial incentives come from? You mentione…
S39
Japan’s move toward active cyber defence: a strategic shift in national security — On 10 September, the Liberal Democratic Party (LDP)proposeda groundbreaking system of ‘active cyber defence’ (Nōdō-teki …
S40
The Protection of Children Online — National policies vary greatly in their reliance on technical measures. Countries such as Australia and Ja…
S41
Japan passes landmark cyber defence bill — Japan has passed theActive Cyber Defence Bill, which permits the country’s military and law enforcement agencies to unde…
S42
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe Metzger:Thank you, Bilel. Maybe to be as succinct as possible, just would like to mention four areas, which I t…
S43
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The discussion highlighted the importance of policy interoperability rather than uniform global governance, recognizing …
S44
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S45
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S46
Driving Social Good with AI_ Evaluation and Open Source at Scale — This highlights the complexity of contextual safety requirements and the need for flexible evaluation frameworks
S47
Procuring modern security standards by governments&amp;industry | IGF 2023 Open Forum #57 — An important observation highlighted by the coalition is the lack of recognition of open internet standards by governmen…
S48
Policies and platforms in support of learning: towards more coherence, coordination and convergence — – (a) Common standards for needs assessment and evaluation of learning programmes; – (b) Coordination and possibly int…
S49
AI Safety at the Global Level Insights from Digital Ministers Of — There’s a need to develop an independent evaluation ecosystem similar to accounting auditors, but the optimal structure …
S50
https://dig.watch/event/india-ai-impact-summit-2026/who-watches-the-watchers-building-trust-in-ai-governance — Independent evaluation. Independent evaluation is essential given that we are all using AI systems for all different sit…
S51
The Overlooked Peril: Cyber failures amidst AI hype — Developing and enforcing legal and policy instruments, such as the 11 UN cyber norms, is imperative. These norms provide…
S52
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — Lastly, inclusive involvement of the technical community in the policy-making process is advocated. The technical commun…
S53
Leveraging AI4All_ Pathways to Inclusion — The report identified three interconnected pillars essential for inclusive AI: design, access, and investment. The desig…
S54
Table of Contents — Closely linked and o/ften a consequence of government internal policies is public procurement. Where standards and commo…
S55
https://dig.watch/event/india-ai-impact-summit-2026/leveraging-ai4all_-pathways-to-inclusion — This is where you have to make sure AI is usable in real world conditions. I know we’re in the AI Impact Summit, but som…
S56
Please cite this document as: — 8. Members should take greater account of environmental criteria in public procurement of ICT goods and services and inc…
S57
WS #123 Responsible AI in Security Governance Risks and Innovation — These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, a…
S58
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Galia, one of the speakers, emphasizes the mapping exercises conducted with the OECD regarding risk assessment. This sug…
S59
Who Watches the Watchers Building Trust in AI Governance — This comment exposes a fundamental technical limitation in current AI safety approaches: the evaluation methods themselv…
S60
How to make AI governance fit for purpose? — Shan emphasized international collaboration through the ITU and global standards development, expressing concern about p…
S61
From Technical Safety to Societal Impact Rethinking AI Governanc — This comment set the entire tone and direction of the discussion, establishing the framework that all subsequent panelis…
S62
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Kyoko Yoshinaga:Thank you, Michael. Welcome to Japan. I’m Kyoko in Kyoto. Okay. So let me, first of all, give you a brie…
S63
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S64
Online trust: between competences and intentions — Trust (or the lack thereof) is a frequent theme in public debates. It is often seen as a monolithic concept. However, we…
S65
https://dig.watch/event/india-ai-impact-summit-2026/who-watches-the-watchers-building-trust-in-ai-governance — Sure. My panelists have set me up very well to say this. So I think as the International AI Safety Report shows, the cap…
S66
OECD DIGITAL ECONOMY PAPERS — These gaps result from misaligned incentives, a lack of awareness, externalities, a misperception of risks and informati…
S67
Advancing Scientific AI with Safety Ethics and Responsibility — Oversight should be distributed across multiple entities rather than relying on a single central authority, creating che…
S68
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Kamesh Shekar:Thank you so much, Luca. And so, yeah, so I guess we have very few time to rush through the paper. But our…
S69
Networking Session #60 Risk &amp; impact assessment of AI on human rights &amp; democracy — – Clara Neppel: Senior Director at IEEE David Leslie: Can everyone hear me? Samara, can you hear me? Hello? Hello? …
S70
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Moderator – Massimo Marioni:AI’s role in securing the future. Dr. Helmut Reisinger, Chief Executive Officer, EMEA and LA…
S71
Opening address of the co-chairs of the AI Governance Dialogue — ## Themes from Previous Year 3. Establishing international technical standards that allow policy and regulation to rema…
S72
Laying the foundations for AI governance — – **Lan Xue**: Dean (Dean Xue Lan), expertise in governance and policy Artemis Seaford: That is a great question. So th…
S73
Multi-stakeholder Discussion on issues about Generative AI — Furthermore, Andrade highlights the significance of dialogue and cooperation in the global AI landscape. He particularly…
S74
Report outlines security threats from malicious use of AI — The Universities of Cambridge and Oxford, the Future of Humanity Institute, Open AI, the Electronic Frontier Foundation …
S75
Will science diplomacy survive? — Science in diplomacyis about using scientific evidence and advice for foreign policy decision-making. In these cases, so…
S76
China, the US, EU, and 25+ countries have joined forces to manage the risks of AI — At the AI Safety Summit hosted at Bletchley Park in England, representatives from China, the United States, the European…
S77
AI Safety Summit adopts Bletchley Declaration — On the first day of theUK AI Safety Summit, the government of the UK introduced the ‘Bletchley Declaration’ on AI safety…
S78
Knowledge Café: WSIS+20 Consultation: Strenghtening Multistakeholderism — Anita Gurumurthy: Thanks. So suggestions at the international level, as well as national to local. So I will start with …
S79
A Global Compact for Digital Justice: Southern perspectives | IGF 2023 — Anita Gurumurthy:But of course, like everything that is political and has an opportunity in the horizon, we will deal wi…
S80
Global Risks 2025 / Davos 2025 — Gillian R. Tett: Well, as somebody who obviously benefited in the past as being part of the media from vertical trust,…
S81
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — Audience:I hope someone can hear me, I still can’t have my video on. We can hear you clearly. Thank you. So this has bee…
S82
Setting the Scene  — This insight is particularly thought-provoking because it identifies that while technology and protection methods are im…
S83
UK report quantifies rapid advances in frontier AI capabilities — For the first time, the UK has published adetailed, evidence-based assessmentof frontier AI capabilities. The Frontier A…
S84
Climate change and Technology implementation | IGF 2023 WS #570 — This would enable broader adoption of these solutions, fostering real progress in addressing climate change and achievin…
S85
Non-regulatory approaches to the digital public debate | IGF 2023 Open Forum #139 — The lack of compliance of private tech companies and states with human rights obligations online propels effects of onli…
S86
Global Enterprises Show How to Scale Responsible AI — -Regulatory Approaches and Global Alignment: The panel debated whether global regulatory alignment is necessary or feasi…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
G
Gregory C. Allen
3 arguments · 167 words per minute · 2623 words · 939 seconds
Argument 1
The International AI Safety Report is the essential foundation for any AI governance discussion.
EXPLANATION
Allen emphasizes that the report represents the minimum body of knowledge required to meaningfully participate in AI governance debates, positioning it as the bedrock for future policy work.
EVIDENCE
He praises Stephen’s report as a remarkable document that forms the foundation for all conversations about AI governance and describes it as the minimum amount of knowledge needed to join the discussion [2-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report is described as an IPCC-style evidence base for AI governance and a shared scientific foundation for policy work [S12][S13].
MAJOR DISCUSSION POINT
Report as foundational knowledge for AI governance
AGREED WITH
Stephen Clare
Argument 2
Policymakers must translate existing technical safeguards into robust, diverse implementation across sectors.
EXPLANATION
While technical safety tools have improved, the remaining challenge lies in ensuring that policymakers adopt and enforce these safeguards consistently and at scale.
EVIDENCE
Allen notes that good news includes many techniques that demonstrably increase safety, but the real challenge now is that “the good news is we’ve done a lot about it… the challenges are now in the hands of policymakers to make sure that these safeguards get implemented robustly and diversely” [62-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for multi-sector collaboration and agile, sector-spanning regulation highlight the need for policymakers to operationalise technical safeguards [S9][S21].
MAJOR DISCUSSION POINT
Policy implementation of AI safety techniques
AGREED WITH
Stephen Clare, Shana Mansbach
Argument 3
The insurance market can serve as a powerful lever to drive AI safety standards, and the current lack of coverage creates market pressure.
EXPLANATION
Allen points out that major insurers are refusing to cover AI‑related risks, which could compel companies to adopt verification and safety standards in order to obtain necessary insurance and remain competitive.
EVIDENCE
He explains that many U.S. insurers are not including AI in enterprise risk policies, meaning banks and other firms lose insurance coverage when they use AI, and that this insurance gap can act as a lever for safety standards [244-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of insurance as a lever for AI safety, including parallels with other high-risk domains where coverage depends on safety certification, supports this claim [S1].
MAJOR DISCUSSION POINT
Insurance as lever for AI safety compliance
AGREED WITH
Shana Mansbach
S
Stephen Clare
4 arguments · 189 words per minute · 2162 words · 685 seconds
Argument 1
The International AI Safety Report functions as an IPCC‑like evidence base for AI governance.
EXPLANATION
Clare describes the report as a shared, state‑of‑the‑art evidence base that helps decision‑makers understand what is known and unknown about general‑purpose AI risks.
EVIDENCE
He explains that the report was founded at the 2023 Bletchley Safety Summit, is backed by over 30 countries and intergovernmental organisations, and aims to be the “IPCC report for AI,” summarising what we know and don’t know about AI risks [19-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report is positioned as an IPCC-like, state-of-the-art evidence base for AI risk assessment in global governance debates [S12][S13].
MAJOR DISCUSSION POINT
Report as evidence base for AI governance
AGREED WITH
Gregory C. Allen
Argument 2
Technical safeguards have markedly improved, making model jailbreaks significantly harder.
EXPLANATION
Clare highlights that recent models resist jailbreak attempts far better than earlier versions, with the time required for successful evasion increasing from minutes to many hours.
EVIDENCE
He cites the UK Security Institute’s attempts, noting that at the start of 2025 a jailbreak could be achieved in minutes, whereas for the latest models it now takes seven to ten hours, demonstrating that “it’s becoming much, much harder to evade them” [42-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Observations that evading modern models now takes many hours, making jailbreaks substantially more difficult, are noted in recent safety assessments [S1].
MAJOR DISCUSSION POINT
Improvement of technical safeguards
AGREED WITH
Gregory C. Allen
Argument 3
Organisational safety frameworks are expanding but remain inconsistent, creating a governance challenge around compliance.
EXPLANATION
Clare observes that while more AI developers now publish frontier safety frameworks, the scope and rigor of these frameworks vary, and many companies still apply them unevenly.
EVIDENCE
He notes that twelve leading AI developers now have frontier safety frameworks, but the risks covered and recommended practices differ across firms, leading to inconsistent application and a need for broader compliance mechanisms [48-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of gaps between adopted safety principles and their practical implementation underline the inconsistency of organisational frameworks [S16].
MAJOR DISCUSSION POINT
Need for consistent application of safety frameworks
AGREED WITH
Gregory C. Allen, Shana Mansbach
Argument 4
Current AI evaluation methods are narrow and quickly become outdated, necessitating new dynamic assessment tools.
EXPLANATION
Clare argues that existing evaluations rely on limited question sets that fail to capture real‑world risk, and because AI capabilities evolve rapidly, these tools lose relevance fast.
EVIDENCE
He describes evaluations as “a set of questions related to a certain topic,” which are often too narrow to be informative about real-world risk, and stresses that the rapid evolution of models makes many of these evaluations obsolete within a short time frame [258-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Critiques of existing benchmarks as quickly becoming obsolete and insufficient for real-world risk assessment are highlighted in recent governance reviews [S1].
MAJOR DISCUSSION POINT
Gap in AI evaluation tools
AGREED WITH
Hiroki Hibuka, Shana Mansbach, Gregory C. Allen
H
Hiroki Hibuka
3 arguments · 149 words per minute · 1274 words · 509 seconds
Argument 1
All countries already possess both hard and soft AI regulations; the key difference lies in sector‑specific versus holistic regulatory approaches.
EXPLANATION
Hibuka contends that the debate should focus on how existing legal regimes are adapted for AI, noting that the EU, Japan, the UK and the US all employ a mix of hard and soft law, but differ in whether they regulate AI holistically or by sector.
EVIDENCE
He references the EU AI Act as a prominent regulation, then explains that privacy, copyright and sector-specific laws already apply to AI, asserting that “all countries have both hard laws and soft laws” and that the main distinction is between holistic and sector-specific regulation [80-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions of sector-specific versus holistic AI regulatory models across jurisdictions illustrate this distinction [S20].
MAJOR DISCUSSION POINT
Existing legal frameworks and regulatory approaches
Argument 2
Japan’s pre‑emptive, soft‑law‑focused regulatory culture needs a more agile, multi‑stakeholder governance model.
EXPLANATION
Hibuka points out that Japanese companies excel at complying with prescribed rules but struggle to create their own governance mechanisms, suggesting that a flexible, stakeholder‑driven approach is required to keep pace with AI advances.
EVIDENCE
He notes that Japan experiences very low loss numbers, prefers setting rules in advance, and that Japanese firms are good at following given rules but not at creating their own governance, leading to a call for a more agile, multi-stakeholder approach [88-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on Japan’s preference for softer AI regulation and calls for more agile, multi-stakeholder governance support this view [S23].
MAJOR DISCUSSION POINT
Need for agile, multi‑stakeholder AI governance in Japan
Argument 3
Independent AI audits are essential but lack clear economic incentives; public procurement can provide a strong motivator.
EXPLANATION
Hibuka argues that without financial incentives, corporations are reluctant to undergo independent evaluation, and that government procurement policies could create market demand for verified, safe AI systems.
EVIDENCE
He explains that executives need clear economic incentives to adopt audits, cites the difficulty of persuading them without such incentives, and proposes that government procurement of verified models would create a powerful incentive for developers [187-195][318-328].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses note the absence of clear financial incentives for independent audits and suggest government procurement as a potential driver [S1].
MAJOR DISCUSSION POINT
Incentivising independent AI audits
AGREED WITH
Stephen Clare, Shana Mansbach, Gregory C. Allen
S
Shana Mansbach
4 arguments · 175 words per minute · 2464 words · 843 seconds
Argument 1
The rapid surge in AI capabilities has created a pervasive trust deficit among the public, deployers, regulators and developers.
EXPLANATION
Mansbach describes how the accelerating performance of AI systems leaves every stakeholder uncertain about safety, security and reliability, undermining confidence in AI deployments.
EVIDENCE
She outlines trust problems for the public, for deployers such as hospitals and banks, for regulators, and for developers, noting that “the capabilities are surging, and so too does the uncertainty around the risks” and that this creates a “trust problem” across all groups [106-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust challenges arising from fast-moving AI capabilities are highlighted in discussions of AI governance and security trust deficits [S14].
MAJOR DISCUSSION POINT
Trust problem caused by AI capability surge
Argument 2
An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance.
EXPLANATION
Mansbach proposes a government‑authorized marketplace where independent verifiers test AI systems against outcome goals (e.g., child safety, privacy), providing flexibility and continuous updates that match the rapid evolution of AI.
EVIDENCE
She describes IVOs as “government-authorized and overseen marketplace of independent verifiers” that assess outcomes such as children’s safety, data privacy, controllability, and interpretability, emphasizing independence, democratic accountability, flexibility, and a “race to the top” for better testing [111-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proposals for government-authorized verification marketplaces and independent oversight mechanisms align with this outcome-based model [S1][S24].
MAJOR DISCUSSION POINT
Outcomes‑based independent verification model
AGREED WITH
Hiroki Hibuka, Stephen Clare, Gregory C. Allen
Argument 3
Verification can establish a pre‑emptive standard of care, clarifying liability and reducing post‑incident legal uncertainty.
EXPLANATION
Mansbach argues that a verification seal would confer a rebuttable presumption that a developer or deployer has met a heightened standard of care, simplifying court decisions and providing clearer liability expectations before any harm occurs.
EVIDENCE
She explains that verification would “confer a rebuttable presumption of having met a heightened standard of care,” shifting the legal analysis from post-harm fact-finding to an upfront definition of required practices, thereby easing the burden on juries and courts [174-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of a verification seal that creates a rebuttable presumption of heightened care is discussed in recent governance literature [S1].
MAJOR DISCUSSION POINT
Verification as pre‑emptive liability standard
Argument 4
Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach.
EXPLANATION
Mansbach stresses that different AI products (LLMs, narrow AI, school chatbots) require tailored audit scopes and fees, and that a marketplace structure can align costs with the specific risk profile of each product.
EVIDENCE
She notes that “the system is right-sized to risk type, to size of these products,” and that a single uniform audit would be inappropriate for diverse AI offerings, emphasizing the need for proportionality in compliance costs [224-230].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for layered, risk-based regulatory approaches that avoid uniform treatment across AI products support this proportionality argument [S15].
MAJOR DISCUSSION POINT
Proportionate, risk‑based audit scaling
AGREED WITH
Hiroki Hibuka
Agreements
Agreement Points
The International AI Safety Report is the essential foundation/evidence base for AI governance discussions.
Speakers: Gregory C. Allen, Stephen Clare
The International AI Safety Report is the essential foundation for any AI governance discussion. The International AI Safety Report functions as an IPCC‑like evidence base for AI governance.
Both speakers emphasize that the Report provides the minimum knowledge required to participate in AI governance and serves as an IPCC-style shared evidence base for policymakers. [2-3][19-23]
POLICY CONTEXT (KNOWLEDGE BASE)
High-level AI governance forums have repeatedly emphasized the need for a shared evidence base, noting that reports such as the International AI Safety Report provide the empirical foundation for policy work (e.g., IGF 2025 high-level review) [S44] and echo calls for evidence-based AI policy across multiple stakeholder groups [S45].
Technical safeguards have improved, but policymakers must ensure their robust, widespread implementation.
Speakers: Gregory C. Allen, Stephen Clare
Policymakers must translate existing technical safeguards into robust, diverse implementation across sectors. Technical safeguards have markedly improved, making model jailbreaks significantly harder.
Both note that recent models are much harder to jailbreak, yet the remaining challenge is for policymakers to translate these technical gains into consistent, sector-wide safeguards. [62-65][35-40][51-57]
POLICY CONTEXT (KNOWLEDGE BASE)
Several jurisdictions, notably Japan and Australia, have pioneered the adoption of technical safeguards for online safety, illustrating the policy relevance of robust implementation [S40]; broader discussions stress that the main challenge lies in implementing existing safeguards rather than inventing new ones [S35].
Current organisational safety frameworks are inconsistent; a systematic, outcomes‑based verification mechanism is needed for consistent compliance.
Speakers: Gregory C. Allen, Stephen Clare, Shana Mansbach
Policymakers must translate existing technical safeguards into robust, diverse implementation across sectors. Organisational safety frameworks are expanding but remain inconsistent, creating a governance challenge around compliance. An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance.
All three agree that while many firms now publish safety frameworks, their scope and rigor vary, creating a compliance gap that could be closed by a government-authorized outcomes-based verification marketplace. [62-65][48-57][111-130]
POLICY CONTEXT (KNOWLEDGE BASE)
The need for an independent evaluation ecosystem comparable to accounting auditors has been highlighted as a priority for AI safety assurance [S49]; standards-as-implementation tools are also advocated to create systematic, outcomes-focused verification processes [S42].
The insurance market can serve as a powerful lever to drive AI safety verification and standards.
Speakers: Gregory C. Allen, Shana Mansbach
The insurance market can serve as a powerful lever to drive AI safety standards, and the current lack of coverage creates market pressure. Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach.
Both highlight that insurers are currently refusing AI coverage, creating market pressure, and that a verification seal could unlock insurance and lower premiums, providing a strong incentive for safety compliance. [244-250][221-231]
POLICY CONTEXT (KNOWLEDGE BASE)
Recent research on closing the AI insurance divide identifies insurance as a key market lever that can shape risk profiling and drive verification standards [S37]; discussions on financial incentives further underline insurance’s role in motivating compliance [S38].
Independent verification/audits are essential but require new incentive structures and a marketplace of IVOs to fill the evaluation gap.
Speakers: Hiroki Hibuka, Stephen Clare, Shana Mansbach, Gregory C. Allen
Independent AI audits are essential but lack clear economic incentives; public procurement can provide a strong motivator. Current AI evaluation methods are narrow and quickly become outdated, necessitating new dynamic assessment tools. An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance. Allen contrasts the panel’s current assessment with what might have been said back at the Bletchley Park AI Summit.
All agree that third-party verification is needed; however, current incentives are weak. A marketplace of IVOs, supported by public procurement and clearer standards, can address the evaluation gap. [187-195][258-267][111-130][185-186]
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for a dedicated independent evaluation ecosystem for AI mirror the accounting-audit model and stress the need for new incentive structures [S49]; persuading corporate leaders without clear economic incentives remains a challenge, highlighting the importance of market-based mechanisms such as insurance or regulator-driven mandates [S50][S38].
Audit costs should be proportional to risk and product size, with mechanisms such as public procurement or market demand aligning incentives.
Speakers: Shana Mansbach, Hiroki Hibuka
Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach. Independent evaluation is essential … but it would not be easy to persuade corporate executives to use the independent audit without clear economic incentives.
Both stress that a one-size-fits-all audit is inappropriate; costs and rigor must match the specific risk profile, and incentives like public procurement can make audits attractive to firms. [224-230][187-195][318-328]
POLICY CONTEXT (KNOWLEDGE BASE)
Public procurement policies that reference technical standards are seen as a way to align incentives and ensure risk-based auditing [S54]; risk-based insurance pricing further supports proportional audit costs [S37]; broader incentive-design discussions emphasize regulator-driven and market-driven levers [S38].
Similar Viewpoints
Both recognize that a substantial body of existing regulations (hard and soft) already exists worldwide, but their application is uneven and sector‑specific, requiring updates rather than entirely new laws. [48-57][80-86]
Speakers: Stephen Clare, Hiroki Hibuka
Organisational safety frameworks are expanding but remain inconsistent, creating a governance challenge around compliance. All countries already possess both hard and soft AI regulations; the key difference lies in sector‑specific versus holistic regulatory approaches.
Both point to the inadequacy of current evaluation benchmarks and argue for new, continuously updated testing frameworks delivered by independent verifiers. [258-267][111-130]
Speakers: Stephen Clare, Shana Mansbach
Current AI evaluation methods are narrow and quickly become outdated, necessitating new dynamic assessment tools. An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance.
Unexpected Consensus
Insurance as a primary market lever for AI safety compliance.
Speakers: Gregory C. Allen, Shana Mansbach
The insurance market can serve as a powerful lever to drive AI safety standards, and the current lack of coverage creates market pressure. Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach.
While Gregory approaches the topic from a policy-maker’s perspective and Shana from a think-tank/market-design angle, both converge on the insight that insurers’ refusal to cover AI creates a strong incentive for verification and standards, an alignment not explicitly anticipated at the start of the discussion. [244-250][221-231]
POLICY CONTEXT (KNOWLEDGE BASE)
The AI insurance divide paper argues that insurance can serve as the primary market mechanism to enforce safety standards and drive compliance across AI providers [S37]; complementary analyses discuss how insurance-based incentives can be operationalised within regulatory frameworks [S38].
Overall Assessment

The panel shows strong convergence on four pillars: (1) the International AI Safety Report as the foundational evidence base; (2) technical safeguards have improved but need policy‑driven, sector‑wide enforcement; (3) independent, outcomes‑based verification is essential to bridge evaluation gaps; and (4) financial mechanisms—especially insurance and procurement incentives—can drive adoption of verification standards. There is high consensus on the need for a risk‑proportionate, market‑aligned verification ecosystem, and moderate consensus on the exact regulatory approach (sector‑specific vs holistic).

High consensus on the necessity of standards, verification marketplaces, and insurance‑driven incentives; moderate consensus on how existing regulatory regimes should be adapted. The alignment suggests momentum toward establishing a formal IVO marketplace linked to insurance and procurement requirements, which could shape future AI governance frameworks.

Differences
Different Viewpoints
What primary economic incentive should drive adoption of independent AI audits?
Speakers: Hiroki Hibuka, Shana Mansbach, Gregory C. Allen
Independent evaluation is essential but lacks clear economic incentives; public procurement could create demand (Hiroki) [187-195][318-328]. Insurance can serve as a carrot, providing a rebuttable presumption of a higher standard of care and a market advantage for verified products (Shana) [221-238][250-251]. The current lack of AI coverage by insurers creates market pressure; insurers stepping in could act like regulation (Gregory) [244-250].
All three speakers agree that independent audits are needed, but they diverge on the most effective lever: Hiroki stresses government procurement contracts, Shana highlights insurance coverage and competitive market signals, while Gregory points to the broader insurance gap as a de facto regulatory driver [187-195][318-328][221-238][250-251][244-250].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders have identified regulator-mandated compliance, insurance-linked liability, and market demand as the main economic incentives that could spur adoption of independent AI audits [S38]; the difficulty of securing executive buy-in without clear financial benefits is also documented [S50].
Whether existing hard‑ and soft‑law frameworks are sufficient or new governance mechanisms are required.
Speakers: Hiroki Hibuka, Stephen Clare, Gregory C. Allen
All countries already have hard and soft AI regulations; the challenge is updating them rather than creating new rules (Hiroki) [80-86]. Organisational safety frameworks are expanding but remain inconsistent, creating a governance gap that needs broader compliance mechanisms (Stephen) [48-57]. Policymakers must translate technical safeguards into robust, diverse implementation across sectors (Gregory) [62-65].
Hiroki views the current legal mix as a sufficient foundation that merely needs refinement, whereas Stephen and Gregory argue that the present frameworks are fragmented and that new, possibly outcome-based, governance tools are required to ensure consistent safety and compliance [80-86][48-57][62-65].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the adequacy of current governance frameworks are reflected in multiple sources: an assessment of whether 20-year-old mechanisms can address new technologies [S33]; recognition that existing laws may not cover emerging domains such as neuro-tech [S34]; consensus that international frameworks provide a solid foundation but implementation is the key challenge [S35]; calls for patience and careful evaluation before introducing new regulations [S36]; and discussions on policy interoperability versus uniform global governance [S43].
Unexpected Differences
Effectiveness of Japan’s pre‑emptive, low‑loss regulatory culture versus the need for stronger technical safeguards.
Speakers: Hiroki Hibuka, Stephen Clare
Japan’s low loss numbers and preference for setting rules in advance suggest its current approach works (Hiroki) [88-90]. Technical safeguards remain vulnerable and inconsistently applied across the industry, indicating existing approaches are insufficient (Stephen) [51-57].
Hiroki presents Japan’s pre-emptive, soft-law-focused model as largely successful, whereas Stephen stresses ongoing technical vulnerabilities and uneven adoption: an unexpected contrast, given that both discuss safety yet reach opposite assessments of current effectiveness [88-90][51-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Japan’s recent active cyber-defence legislation exemplifies a pre-emptive, low-loss regulatory approach, allowing proactive measures against cyber threats [S39][S41]; at the same time, Japan has been a leader in deploying technical safeguards for online safety, providing a historical contrast between pre-emptive policy and technical solutions [S40].
Overall Assessment

The panel shows moderate disagreement centered on how to create effective incentives for independent AI audits and whether existing regulatory regimes are adequate. While all participants agree on the necessity of stronger governance and verification, they diverge on the primary levers (public procurement, insurance, market competition) and on whether new outcome‑based mechanisms are needed beyond current hard/soft law structures.

The disagreements are substantive but not polarising; they reflect different policy‑design preferences rather than outright conflict, suggesting that a blended approach—combining regulatory updates, insurance‑driven standards, and procurement‑linked audits—could reconcile the viewpoints and advance AI safety governance.

Partial Agreements
They share the goal of establishing independent verification but differ on the mechanism: Stephen focuses on improving evaluation tools, Shana on creating a government‑authorized IVO marketplace, and Hiroki on coupling audits with public procurement incentives [258-267][111-130][187-195].
Speakers: Stephen Clare, Shana Mansbach, Hiroki Hibuka
All agree that independent verification/auditing is essential to close the trust and safety gap (Stephen notes evaluation gap; Shana proposes IVO marketplace; Hiroki stresses need for independent evaluation) [258-267][111-130][187-195]
They concur that existing evaluation methods are insufficient, but Stephen frames it as a technical gap needing new tools, while Shana emphasizes the need for a broader outcomes‑based verification ecosystem to keep pace with rapid capability growth [258-267][271-277].
Speakers: Stephen Clare, Shana Mansbach
Both highlight that current AI benchmarks/evaluations are narrow, quickly become outdated, and impede reliable risk assessment (Stephen) [258-267]; (Shana) [271-277]
Takeaways
Key takeaways
The International AI Safety Report (2023‑2026) provides a baseline knowledge set for AI governance and shows that real‑world risks from general‑purpose AI are now material.
Technical safeguards (jailbreak resistance, safety frameworks) have improved markedly, but remain vulnerable to skilled attacks and are not uniformly applied across the industry.
Governance is shifting from theoretical discussion to urgent implementation; policymakers must ensure that existing safeguards are adopted at scale.
Regulatory approaches differ globally: the EU uses a hard‑law AI Act, while Japan and the US rely on sector‑specific or principle‑based rules, but all need to update existing laws (privacy, copyright, sector regulations) to cover AI.
Liability concerns are rising as AI is embedded in many sectors; current legal frameworks lack a clear standard of care for AI systems.
A major trust deficit exists for the public, deployers, and regulators; an outcomes‑based marketplace of independent verification organizations (IVOs) is proposed to provide credible, up‑to‑date testing and certification.
Responsibility for safety must be layered across developers (model training and safeguards), deployers (monitoring and risk assessment), and ecosystem monitors/independent auditors (verification and societal resilience).
Incentives for independent audits are weak; potential carrots include insurance underwriting discounts, public‑procurement requirements, and market advantage (e.g., a “seal of approval” similar to UL or AS9100).
Current evaluation benchmarks are narrow, quickly become outdated, and fail to capture stochastic, multi‑turn, real‑world risk; better, dynamic testing tools are needed.
Lessons from other industries (automotive safety ratings, aerospace AS9100, insurance underwriting standards) can inform the development of AI safety standards and verification processes.
Resolutions and action items
Proposal to create a government‑authorized marketplace of Independent Verification Organizations (IVOs) that conduct outcomes‑based testing and issue verification seals.
Encourage regulators and insurers to tie compliance with IVO verification to liability standards, insurance premiums, and eligibility for public procurement contracts.
Call for the development of sector‑specific safety standards that combine hard law, soft law, and voluntary safety frameworks, with periodic updates to keep pace with AI advances.
Suggest that AI labs increase transparency and share safety data with external auditors and the broader community to reduce information asymmetry.
Unresolved issues
How to design and fund economically viable incentives (insurance, procurement, regulatory mandates) that make independent audits attractive to AI developers.
What concrete, industry‑wide procedural standards (analogous to AS9100) should look like for AI systems and how they will be enforced.
How to define a universally accepted “standard of care” for AI deployments across diverse sectors and jurisdictions.
Methods for creating and maintaining up‑to‑date evaluation benchmarks that capture stochastic, multi‑turn interactions and real‑world risk profiles.
The balance between self‑regulation by frontier labs and external verification, especially given the rapid evolution of capabilities.
How democratic debate will determine acceptable risk thresholds (e.g., safety levels for autonomous vehicles) and translate them into enforceable metrics.
Suggested compromises
Adopt a layered, defense‑in‑depth responsibility model that does not place the entire burden on any single actor but distributes duties among developers, deployers, and independent auditors.
Combine hard‑law requirements with soft‑law standards and voluntary safety frameworks to allow flexibility while ensuring baseline safety.
Use market mechanisms (insurance discounts, procurement preferences, consumer‑facing seals) as carrots to encourage voluntary verification, rather than relying solely on punitive regulation.
Implement a hybrid approach where sector‑specific regulations are complemented by overarching outcomes‑based standards that can be adapted as technology evolves.
Thought Provoking Comments
Technical safeguards are getting much harder to evade – jailbreak attempts that used to take minutes now take seven to ten hours, and many models are becoming resistant to classic prompt‑jailbreak tricks.
Highlights concrete progress in AI safety that counters the dominant narrative of only bad news, showing that engineering advances can meaningfully reduce risk.
Shifted the conversation from a purely pessimistic view to a more balanced one, prompting Gregory to contrast the Bletchley‑Park optimism with current realities and setting up the later discussion on how to sustain and scale these gains.
Speaker: Stephen Clare
Even though safeguards are improving, they remain vulnerable and their adoption is uneven; the real governance challenge is how to assure broader compliance and what to do when there is a lack of compliance.
Identifies the critical gap between technical capability and policy implementation, moving the focus from technical fixes to systemic governance issues.
Served as a turning point that moved the dialogue toward policy‑level solutions, leading directly to Hiroki’s comparison of regulatory approaches and Shana’s proposal for independent verification.
Speaker: Stephen Clare
The key question isn’t whether to regulate AI at all, but how to update existing hard‑law frameworks (privacy, copyright, sector‑specific regulations) and whether additional AI‑specific rules are needed.
Reframes the regulatory debate by positioning AI within the broader legal ecosystem, challenging the simplistic EU‑vs‑US dichotomy.
Redirected the discussion from creating brand‑new AI statutes to integrating AI considerations into current laws, prompting deeper comparison of sector‑specific versus holistic regulation and influencing Shana’s focus on outcomes‑based standards.
Speaker: Hiroki Hibuka
Japan’s culture favors setting rules in advance and strong compliance, but companies struggle with self‑governance and explaining decisions; we need a more agile, multi‑stakeholder soft‑law approach.
Provides a nuanced cultural perspective that highlights why a single regulatory model may not fit all jurisdictions, emphasizing the need for flexible, collaborative governance.
Enriched the conversation with a concrete example of how national context shapes AI policy, leading Gregory to ask about incentives for auditors and prompting Shana to discuss market‑driven verification mechanisms.
Speaker: Hiroki Hibuka
The core problem is a trust gap among the public, deployers, regulators, and developers; we need an outcomes‑based marketplace of government‑authorized independent verification organizations (IVOs) to certify that AI meets defined safety, privacy, and controllability standards.
Introduces a novel governance model that moves beyond command‑and‑control to a dynamic, market‑driven certification system, addressing both technical and institutional challenges.
Opened a new line of inquiry about how such a marketplace could function, leading to a detailed discussion on liability, insurance, and economic incentives, and influencing later comments from Hiroki and Stephen about audit costs and evaluation gaps.
Speaker: Shana Mansbach
Verification would create a rebuttable presumption of having met a heightened standard of care, giving developers and deployers a clear legal shield before any harm occurs.
Connects the technical verification concept to concrete legal benefits, showing how it could reshape liability and risk management in practice.
Prompted Gregory to explore the interplay between liability law and insurance, and spurred Hiroki to discuss financial incentives, thereby deepening the conversation about practical implementation.
Speaker: Shana Mansbach
Current evaluations are narrow and quickly become outdated; we lack robust, real‑world risk assessments, which is a major gap in our safety toolkit.
Critically assesses the state of AI auditing tools, highlighting that even the best‑available benchmarks may not capture emerging risks, thus questioning the reliability of proposed verification schemes.
Triggered Shana and Stephen to discuss the need for continuous, incentive‑driven improvement of testing methods, reinforcing the argument for a competitive IVO marketplace.
Speaker: Stephen Clare
Insurance can be a powerful carrot: insurers will only underwrite AI‑enabled products that have been verified, similar to how UL certification drives market adoption in other industries.
Identifies a concrete economic lever that could drive widespread adoption of verification, linking governance to market dynamics.
Shifted the discussion toward real‑world enforcement mechanisms, leading Gregory to draw parallels with aerospace AS9100 certification and reinforcing the feasibility of the proposed model.
Speaker: Shana Mansbach
Public procurement can serve as an incentive: if governments only buy AI systems that have passed verification, developers will have a strong motivation to obtain certification.
Adds another practical policy tool that leverages government buying power to accelerate adoption of safety standards.
Expanded the set of suggested incentives beyond liability and insurance, reinforcing the multi‑pronged approach advocated by Shana and highlighting how different levers can work together.
Speaker: Hiroki Hibuka
Overall Assessment

The discussion pivoted around three core insights: (1) tangible technical progress in safeguards, (2) the persistent gap between those safeguards and their consistent, enforceable adoption, and (3) innovative governance proposals that blend legal, economic, and market mechanisms. Stephen’s acknowledgment of both advances and shortcomings set the stage for Hiroki’s reframing of regulation as an integration problem, while Shana’s introduction of an outcomes‑based verification marketplace offered a concrete solution that resonated with the panel. Subsequent comments about liability, insurance, and public procurement turned abstract ideas into actionable incentives, steering the conversation from diagnosis to potential implementation. Collectively, these thought‑provoking remarks reshaped the dialogue from a bleak outlook on AI risk to a nuanced roadmap for building trust and accountability across stakeholders.

Follow-up Questions
How can consensus on AI risks and interventions be transformed into accepted best‑practice standards and procedural certifications for independent evaluators?
Moving from informal agreement to formal standards (like AS9100) is needed for widespread industry adoption and to give customers confidence in AI safety.
Speaker: Gregory C. Allen
What financial incentives can be created to motivate companies to undergo independent AI audits and verification?
Without clear economic benefits, firms may view audits as costly and optional; incentives such as regulatory mandates, insurance discounts, or procurement requirements could drive participation.
Speaker: Gregory C. Allen, Hiroki Hibuka
How can liability and insurance frameworks be aligned with AI verification to establish a clear standard of care for developers and deployers?
A defined standard of care linked to verification could reduce legal uncertainty, lower insurance premiums, and encourage responsible AI deployment.
Speaker: Shana Mansbach
What robust, up‑to‑date evaluation methodologies can capture the stochastic, multi‑turn, and real‑world risk profile of AI systems?
Current benchmarks are narrow and quickly become obsolete, limiting the effectiveness of audits and risk assessments.
Speaker: Stephen Clare, Shana Mansbach
How can we design benchmark and regulatory methods for abstract values such as privacy, transparency, and fairness where no clear standards currently exist?
Absent benchmark standards, regulators struggle to assess compliance across jurisdictions, hindering consistent governance.
Speaker: Hiroki Hibuka
What mechanisms can ensure consistent adoption of technical safeguards across the entire AI industry, not just frontier developers?
Safeguards are unevenly applied, creating systemic risk; a strategy is needed to promote uniform implementation.
Speaker: Stephen Clare
Can third‑party verification organizations sustain expertise as AI technology evolves, or will skill atrophy undermine their effectiveness?
Ensuring that external auditors keep pace with rapid AI advances is crucial for long‑term credibility of independent verification.
Speaker: Stephen Clare
What lessons from other industries (automotive, aerospace, finance) can inform the creation of independent AI verification and safety‑rating systems?
Existing safety‑rating frameworks (e.g., NHTSA, UL) may provide models for structuring AI governance and certification.
Speaker: Hiroki Hibuka
How can public procurement be leveraged as an incentive for AI developers to obtain safety verification?
Government purchasing decisions that require verified AI could create market pressure for compliance.
Speaker: Hiroki Hibuka
How should societies define acceptable safety thresholds for autonomous AI systems (e.g., comparing AI‑induced fatalities to human‑driver rates)?
Establishing democratic, quantifiable safety targets is necessary for policy decisions on autonomous technologies.
Speaker: Hiroki Hibuka
How can a marketplace of independent verification organizations be designed to scale cost‑effectively across diverse AI product sizes and risk levels?
A right‑sized, tiered audit system would avoid one‑size‑fits‑all costs and make verification accessible to both large models and niche tools.
Speaker: Shana Mansbach
What concrete steps are needed to operationalize a layered ‘defense‑in‑depth’ approach that allocates safety responsibilities among developers, deployers, and societal actors?
Clarifying duties at each layer is essential to avoid gaps where safeguards fail or are not applied.
Speaker: Stephen Clare
How can we mitigate perverse incentives where companies might avoid audits to escape liability, ensuring they do not remain willfully blind to risks?
Mechanisms are needed to prevent firms from skipping verification to dodge legal exposure, preserving the integrity of the oversight system.
Speaker: Gregory C. Allen (referencing Shana)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Secure Talk Using AI to Protect Global Communications & Privacy


Session at a glance: Summary, keypoints, and speakers overview

Summary

The event opened with Wish Gurmukh Dev welcoming attendees and outlining Tanla Platforms’ three core principles of innovation, collaboration and impact, which underpin its Wisely.ai agentic AI platform aimed at combating spam and scams globally [4-9]. A fireside chat was introduced featuring Sanjay Kapoor, a veteran telecom leader and Tanla board member, and Vikram Sinha, CEO of Indosat Ooredoo Hutchison, who is driving the telco’s transformation into an AI-focused company [14-20].


Sanjay highlighted the rapid digitisation of the global economy, noting that digital payments are projected to exceed $14 trillion by 2027 and that both India and Indonesia face escalating cyber-crime losses amounting to billions of dollars each year [25-31]. Vikram recounted a 2024 MasterCard advisory board meeting that revealed $5 billion in losses for Indonesians and that 65% of the population experiences spam or scams weekly, prompting Indosat to prioritize protecting its 100 million customers [46-57]. He explained that Indosat chose Tanla as a strategic partner rather than a vendor, integrating Wisely.ai into its operations, which led to 9% ARPU growth versus a 3% industry average and a reduction in churn from 3.6-3.7% to 1.6% within a quarter [86-92].


When asked about return on investment, Vikram said measurable financial benefits appeared within six to eight months, emphasizing that AI-driven protection across voice and WhatsApp channels is essential for maintaining customer trust and business viability [96-106]. In the subsequent panel, Anshuman Kar stressed that scams cost over $1 trillion globally, with SMS accounting for 70% of fraud in India and 65 billion SMS messages sent monthly, and cited Wisely.ai’s protection of approximately $500 million in estimated losses in its first six months [153-164].


Panelists including Ratan Kumar Kesh and Neha Mahatme discussed challenges such as senior-citizen vulnerability, account-mule schemes, limited data visibility, and the rapid evolution of offensive AI that outpaces defensive models [191-214][236-244]. Bipin Preet Singh added that fragmented fraud-prevention efforts across fintech and banking hinder effectiveness, calling for a national digital payments intelligence platform and greater data sharing, as advocated by the RBI, to enable coordinated detection of scams [255-306][341-342]. Anshuman concluded that while attack surfaces are increasingly interconnected, current defenses remain fragmented, and a coordinated, real-time intelligence architecture across telecom, finance and regulators is required to safeguard the digital economy [345-355].


The session closed with Robert J. Ravi describing BSNL’s AI initiatives for network optimisation and customer experience, reinforcing the view that AI must be embedded across infrastructure to achieve comprehensive protection [372-383]. Overall, the discussion underscored that collaborative AI solutions, supported by cross-industry data sharing and regulatory coordination, are critical to transforming digital trust from a promise into an operational infrastructure [136-140][402].


Keypoints

Major discussion points


The scale of digital payments and the escalating fraud problem demand AI-driven trust.


Sanjay highlighted the rapid digitisation of the global economy, the $14 trillion digital-payments forecast and the billions lost to AI-powered scams, framing trust as a systemic risk that must be addressed at the board level [25-31].


Indosat’s partnership with Tanla’s Wisely.ai platform uses AI to protect millions and shows early business impact.


Vikram described how a shocking 2024 scam-loss report ($5 bn lost, 65% of Indonesians hit weekly) triggered the decision to partner with Tanla, leading to a full-stack AI factory, GPU-cluster deployment and real-time protection [46-53]; he then cited concrete results – 9% revenue growth vs. 3% industry average, churn falling from 3.7% to 1.6% – as proof of the platform’s value [86-92].


Demonstrating ROI is essential for scaling AI investments.


Sanjay asked how the initiative moves from a “customer-complaint” issue to a board-level ROI discussion [94-95]; Vikram responded that within six-to-eight months the AI solution delivered measurable P&L benefits (higher ARPU, lower churn) and reinforced the strategic shift from pure connectivity to “peace of mind” for customers [96-102].


Panelists across telecom, banking, fintech and payments stress the fragmented nature of fraud detection and call for integrated, data-shared ecosystems.


Anshuman set the stage by quantifying the fraud magnitude and the proliferation of SMS/OTT channels [153-169]; Ratan Kumar Kesh explained how banks use transaction-pattern analytics but still face “mule” account abuse [191-210]; Neha pointed out that fraud is a behavioural journey, and that limited data visibility and the faster evolution of offensive AI hinder prevention [236-244]; Bipin highlighted the need for a national-level data-intelligence authority and shared how in-house models outperform generic ones, yet siloed efforts limit impact [255-267][280-287].


BSNL’s vision extends AI beyond fraud to network optimisation, edge computing and federated learning for inclusive rural services.


Robert Ravi described AI-driven network diagnostics, the AI-Vani customer-experience system, and plans for edge data-centres and federated learning that keep user data private while improving service quality, especially in underserved regions [372-383][388-395][398-401].


Overall purpose / goal


The event was designed to showcase how AI can transform digital trust: first by presenting Tanla’s Wisely.ai solution and its partnership with Indosat, then by surfacing sector-wide challenges through a multi-stakeholder panel, and finally by outlining a broader, collaborative roadmap (including telecom, finance and regulatory bodies) for a secure, inclusive digital economy.


Overall tone and its evolution


– The opening remarks are formal and celebratory, welcoming guests and emphasizing innovation [1-4].


– The conversation quickly shifts to a serious, urgent tone as Sanjay and Vikram discuss the massive fraud losses and the need for decisive leadership [25-31][46-53].


– As the dialogue moves to partnership details and ROI, the tone becomes optimistic and solution-focused, highlighting measurable wins [86-92][96-102].


– The panel discussion adopts a collaborative yet critical tone, acknowledging fragmented defenses and calling for ecosystem-wide data sharing [153-169][191-210][236-244][255-267].


– The closing remarks from BSNL and the host return to a hopeful, visionary tone, emphasizing future-ready AI infrastructure and inclusive rural outreach [372-383][388-395][398-401].


Overall, the discussion progresses from celebration to problem-identification, through evidence-based solutioning, to a collective call for coordinated action, ending on an aspirational note about building a trustworthy digital future.


Speakers

Wish Gurmukh Dev – Host/MC representing Tanla Platforms and its group companies Carex and Value First [S1].


A. Robert J. Ravi – Chairman and Managing Director, Bharat Sanchar Nigam Limited (BSNL); telecom leader with over three decades of service, gold-medalist in Electronics & Communication Engineering [S4].


Vikram Sinha – President, Director and Chief Executive Officer of Indosat Ooredoo Hutchison [S5].


Ratan Kumar Kesh – Executive Director and Chief Operating Officer, Bandhan Bank [S6].


Anshuman Kar – Chief Customer Success Officer (formerly Chief Growth Officer) at Tanla Platforms; moderator of the panel discussion [transcript].


Neha Gutma Mahatme – Director, Amazon Pay India [S9].


Audience – General audience members; no specific titles mentioned.


Sanjay Kapoor – Host of the fireside chat; former CEO of Bharti Airtel, current Board Member of Tanla Platforms, distinguished global telecom leader [S13].


Bipin Preet Singh – Founder and CEO of MobiKwik, leading fintech entrepreneur; also a customer of Tanla Platforms [S15].


Additional speakers:


Uday – Tanla partner referenced as a strategic partner and collaborator on the AI solution [transcript].


Vipin – Panelist addressed during the discussion on ecosystem responsibility [transcript].


Pratham – Participant addressed at the start of the panel Q&A [transcript].


Ashutosh – Mentioned by the moderator while introducing the panel [transcript].


Ruthen – Person addressed by the moderator regarding national-scale responsibility for citizen protection [transcript].


Full session report: Comprehensive analysis and detailed insights

The evening opened with Wish Gurmukh Dev thanking the audience and welcoming them on behalf of Tanla Platforms and its group companies Carex and Value First. He outlined Tanla’s three enduring principles of innovation, collaboration and impact, and introduced Wisely.ai as an agentic AI platform designed to identify, prevent, eliminate and record spam and scam activity worldwide, already operating in Indonesia and India and with major Indian banks to protect millions of users in real time [1-4][5-7].


A fireside chat was then announced. Sanjay Kapoor, a four-decade veteran of global telecom, former CEO of Bharti Airtel and current Tanla board member, was introduced as the visionary steering the company toward a world-class AI-driven communications enterprise. The guest speaker was Vikram Sinha, President, Director and CEO of Indosat Ooredoo Hutchison, who has overseen the telco’s transformation from a traditional network operator into an AI-focused technology company committed to “AI for all” and digital inclusion [8-12].


Sanjay set the strategic backdrop by noting that the global economy is digitising at unprecedented speed. He cited projected digital payments of over US $14 trillion annually by 2027, more than five billion people online, and the addition of nearly two billion new internet users in South and Southeast Asia. He highlighted India’s digital economy expected to surpass US $1 trillion by 2030 and Indonesia’s GMV already exceeding US $100 billion, while warning that this scale brings rising cyber-crime, digital fraud and organised scam operations that cost billions each year and constitute a systemic trust risk that must be addressed at the board level [15-22][25-31][S1].


Vikram responded with a concrete illustration from a 2024 MasterCard advisory-board meeting in London, where the Global Anti-Scam Association reported that Indonesians had lost US $5 billion that year, with 65% of the population experiencing spam or scams on a weekly basis. The victims were predominantly middle- and lower-income women and elderly women, an “eye-opening” data point that compelled Indosat, a 58-year-old operator likened to Indonesia’s BSNL, to move beyond merely connecting customers and to protect its 100 million subscriber base [30-38][46-57].


Recognising the urgency, Indosat chose Tanla not as a simple vendor but as a strategic partner. Vikram emphasized, “We don’t need a vendor, we want a partner,” underscoring the need for co-creation rather than a product-only relationship [66-68]. Tanla’s GPU-cluster, including GB200 H100 units, was deployed to train bespoke models, enabling the detection of close to two billion spam instances and the flagging of 2.3 million scammers in real time [66-68][70-73][80-84][122-127].


The business impact of the Wisely.ai integration was evident in Indosat’s quarterly results. ARPU grew 9%, outpacing the industry average of 3%, while churn among serious-base customers (tenure > 90 days) fell from 3.6-3.7% to 1.6% within six to eight months of deployment, demonstrating a clear ROI and reinforcing the shift from pure connectivity to providing “peace of mind” for customers [86-92][96-102].


Vikram also highlighted his hands-on leadership, noting that he spends five days each month in the field, visiting villages and new-capital outlets to ensure the solution meets on-ground needs [100-104].


Following the fireside chat, Anshuman Kar opened the panel by quantifying the fraud problem: global scam losses now exceed US $1 trillion, and SMS accounts for roughly 70% of Indian fraud, with 65 billion SMS and 15 billion OTT messages sent each month in India. He cited Wisely.ai’s early success, estimating that the platform prevented about US $500 million in losses within its first six months of launch [153-164][S1].


Panelists explored why existing defences remain fragmented. Anshuman stressed the need for “co-ordinated, real-time intelligence” across telcos, banks and fintechs [170-176][349-353]. Ratan Kumar Kesh described how banks use AI-driven rule-engines to flag out-of-routine transactions, yet warned that “mule” accounts (legitimate-looking accounts rented to launder money) remain a major, under-addressed threat, especially for senior citizens [191-210][191-199]. He also recounted a pick-pocket anecdote illustrating law-enforcement gaps that leave scammers untraced even when multiple agencies possess relevant data [310-322]. Neha Gutma Mahatme added that fraud is a behavioural journey that begins long before a payment, and that limited visibility into external social-engineering data, combined with the faster evolution of offensive AI, hampers defensive models constrained by privacy and regulatory limits [236-244][240-242]. Bipin Preet Singh reinforced the systemic nature of the problem, noting that 99% of the scams reported by his fintech’s customers involve money stolen from other banks, and called for a national Digital Payments Intelligence Authority to enable ecosystem-wide data sharing [255-259][279-283]. An audience member questioned whether the existing Digital Payments Intelligence Platform already provides sufficient integration, highlighting ongoing gaps despite its launch [332-340].
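As an aside for technically minded readers, the out-of-routine transaction flagging the panel describes can be illustrated with a minimal statistical rule. This is a hypothetical sketch under invented assumptions (the z-score threshold, function name and sample amounts are all made up for illustration), not any bank’s actual logic.

```python
# Hypothetical illustration of flagging an "out-of-routine" transaction:
# an amount is flagged when it deviates sharply from the account's history.
from statistics import mean, stdev

def is_out_of_routine(history, amount, z_threshold=3.0):
    """Return True if `amount` is far outside the account's usual pattern."""
    if len(history) < 2:
        return False  # too little history to establish a routine
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # flat history: any different amount is unusual
    return abs(amount - mu) / sigma > z_threshold

history = [120.0, 95.0, 110.0, 130.0, 105.0]  # past transaction amounts
print(is_out_of_routine(history, 112.0))   # typical amount -> False
print(is_out_of_routine(history, 5000.0))  # sharp deviation -> True
```

Real rule-engines combine many such signals (device, location, velocity, payee history) and feed them into learned models; a single z-score check is only the simplest possible shape of the idea.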


The governance discussion returned to responsibility for protecting citizens at national scale. Ratan highlighted law-enforcement gaps, while Bipin suggested that the RBI-led Digital Payments Intelligence Authority could assume a coordinating role, acknowledging that effective implementation will require both regulatory leadership and industry participation [279-283][310-322].


A separate perspective was offered by A. Robert J. Ravi, Chairman and Managing Director of BSNL. He described AI-driven network diagnostics that pinpoint complaint hotspots, the AI-Vani system that routes callers to the appropriate agent, and a “recharge expert” AI that mitigates spam on WhatsApp. Looking ahead, Ravi outlined plans for edge data centres and federated-learning models that keep user data on-device while still benefiting from collective training, thereby extending AI-based protection to rural users without compromising privacy [372-383][388-395][398-401].
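The federated-learning pattern Ravi outlines, training on-device and sharing only model updates rather than raw data, can be sketched minimally as follows. The toy one-parameter model, function names and numbers are assumptions for illustration, not BSNL’s implementation.

```python
# Minimal federated-averaging (FedAvg-style) sketch: each device trains
# locally on its private data, and the server averages only model weights.
# The toy model just estimates a mean; all values here are illustrative.

def local_update(weights, data, lr=0.1):
    """One gradient step on-device for loss 0.5*(w - mean(data))^2.
    Raw `data` never leaves the device; only updated weights are returned."""
    target = sum(data) / len(data)
    return [w - lr * (w - target) for w in weights]

def federated_average(device_weights):
    """Server-side aggregation: element-wise mean of the device weights."""
    n = len(device_weights)
    return [sum(dw[i] for dw in device_weights) / n
            for i in range(len(device_weights[0]))]

global_model = [0.0]
private_data = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # stays on each device
updates = [local_update(global_model, d) for d in private_data]
global_model = federated_average(updates)  # only weights were shared
```

Production systems layer secure aggregation and differential privacy on top of this loop, which is what makes the "data stays on-device" promise meaningful at telco scale.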


In synthesis, the participants reached broad consensus that AI-powered anti-fraud solutions such as Wisely.ai are already delivering real-time protection and measurable financial returns (e.g., ARPU uplift, churn reduction, $500 m loss avoidance). They agreed that scaling these benefits requires a shift from vendor-type relationships to strategic partnerships and, crucially, an ecosystem-wide data-sharing architecture that bridges telco, banking, fintech and regulator signals. The panel highlighted persistent challenges: the arms-race between offensive and defensive AI, limited external behavioural data, the need to protect vulnerable groups (senior citizens, low-income women), and the difficulty of balancing security with customer-experience friction. Future directions identified include federated learning, edge AI, and coordinated national-level intelligence platforms that respect privacy while delivering rapid, cross-border fraud detection [345-355][S33][S39].


Overall, the event demonstrated that AI-enabled anti-fraud solutions are moving from pilot projects to measurable business outcomes, but scaling them requires ecosystem-wide data sharing, coordinated regulation, and continued innovation to stay ahead of increasingly sophisticated scammers [136-140][402].


Session transcript: Complete transcript of the session
Wish Gurmukh Dev

Thank you everyone. Thank you very much. Thank you. Once again, ladies and gentlemen, a very good evening and welcome to what promises to be a truly memorable evening. On behalf of Tanla Platforms and our group companies, Carex and Value First, I extend a warm and a hearty welcome to our enterprise and telco customers, our global strategic partners, board members, and our incredible team. At the core of Tanla’s DNA are three enduring principles, innovation, collaboration, and impact. For three decades, this DNA has driven us to build innovation at scale, touching billions of users, and what excites us the most is the greenfield landscape that we have always explored. Along with it, it has helped us work in close partnership with our esteemed customers, our regulatory ecosystem, our telco partners, and the broader ecosystem to ensure that every step has been collaborative and always ahead of the curve.

And lastly, it has helped us ensure that every innovation we pioneer creates a tangible and a measurable impact in the world. And it’s these principles that shaped Wisely.ai, our agentic AI platform built to identify, prevent, eliminate, and bring to the books the growing menace of spam and scam, not just in India, but world over. Today, Wisely.ai is live and delivering real impact at Indosat in Indonesia, at BSNL in India, and with our leading banks in India, safeguarding millions of users in real time every single day. Tonight, we don’t just want to talk about it, we want to bring it to life. So without further ado, let’s bring the story to life. Please welcome our guest for the fireside chat.

A fireside chat from the theme Vision to Impact, driving customer engagement with AI-driven trust. It gives me immense pleasure to invite our host for the fireside chat, who has spent nearly four decades as a distinguished global telecom leader, leading one of India’s most iconic companies as CEO of Bharti Airtel, shaping global mobile policy as a key voice on the board and executive committee of GSMA, and building a legacy that stretches from telecom to digital services and beyond. We are honored to have him as our board member at Tanla Platforms, where his global perspective and vision continue to shape our journey towards becoming a world-class AI-driven communications enterprise. Ladies and gentlemen, please put your hands together for Mr.

Sanjay Kapoor. A guest on the fireside chat is a seasoned global telecom leader who has not only defined the arc of the industry, but also built it. A career spanning across some of the most dynamic markets across Asia and Africa, he has held senior leadership roles from being CEO of Bharti Airtel Africa and Managing Director of Bharti Airtel Seychelles to serving as CEO of Ooredoo Group in Maldives and Director-CEO of Indosat Ooredoo before taking on his current role. Today, he leads one of Indonesia’s most transformative telcos, Indosat Ooredoo Hutchison, driving its evolution from a telco into an AI tech co, anchored by a bold vision of AI for all and a deep commitment to digital inclusion and security for every Indonesian.

Please join me in welcoming the President, Director and CEO of Indosat Ooredoo Hutchison, Mr. Vikram Sinha. I hand over the baton to our esteemed host, Sanjay, to take it forward and we all look forward to it. Thank you.

Sanjay Kapoor

Thank you. Thank you for your kind words and welcome Vikram. Before we really get down to asking a few questions from the person who is going to be on the firing range for today’s chat, let me set up a prelude for what we are going to be discussing. We all know that the global economy is rapidly digitizing, but trust has become its most crucial foundation. Digital payments are expected to surpass $14 trillion annually by 2027, with more than 5 billion people online. In South and Southeast Asia, nearly 2 billion people are coming online at a record speed, driven by affordable smartphones, low-cost data, and national digital infrastructure initiatives. India’s digital economy is projected to cross $1 trillion by 2030, while Indonesia’s has already exceeded $100 billion in GMV.

Yet, this scale brings vulnerabilities. Both markets are facing rising cybercrime, digital fraud, and organized scam operations, causing billions of dollars worth of losses each year. Globally, consumers have lost over a trillion US dollars in scams. Today’s fraud is no longer isolated phishing. It is AI powered, it is cross border, it is automated and industrial in scale. This is not just a consumer experience issue anymore. It is an economic issue, it is a systemic risk issue, a trust issue, and it demands great leadership to combat it. It’s my privilege to welcome Vikram, who I have known for years and years. We worked together at ATL2. He is the President, Director, CEO of Indosat Ooredoo Hutchison and is serving over 100 million customers in that country. Under his leadership, Indosat has accelerated its transformation into a digital-first AI technology company, expanding access across both urban and rural communities.

Indosat has evolved as an AI tech company and partnered with Tanla, guided by a powerful vision, AI for all. And that’s a very powerful statement that they make. So Vikram, welcome here. And we’ll get down to some sharing of insights and questions to you. We’ve all known about digital fraud becoming more intense. We all know it’s eroding trust. And you as a CEO and with your lens, when did you really move it from being a customer complaint issue to a board level issue? Because there must

Vikram Sinha

First of all, again, it's an absolute honor and privilege, especially having it with Sanjay. You know, I have a long learning history, so thank you. Thank you, Sanjay. It's an absolute honor. Coming back to your question, let me share a true story with all of you. I'm also on the advisory board of Mastercard. I still remember, early 2024, at one of the board meetings in London, an advisory board meeting, GASA, the Global Anti-Scam Alliance, presented its Asia scam data, and I was blown away. That report showed that in 2024 alone, Indonesians had lost 5 billion US dollars. What touched me is that these are middle-income, lower-income women, elderly women. This was eye-opening data for me, number one.

Number two, the next key highlight, Sanjay: 65% of Indonesians are facing spam or scams on a weekly basis. So that itself was a trigger for me, with Indosat being such an iconic brand. Let me tell you, Indosat is like the BSNL of Indonesia: a 58-year-old company, the first company to connect Indonesia to the world. It became Indosat Ooredoo Hutchison, but people have a lot of expectations. So as a CEO, that was the trigger, Sanjay, that our role is not only to connect. Our role is also to protect my 100 million customers. And that is where I got very serious that we need to solve this problem for our 100 million customers.

Sanjay Kapoor

Yeah, I mean, I think every board worth its salt today gets intimidated by this problem that is hitting them. And I'm so glad to hear from you that your board is fully aligned with you on this cause, and that you've been able to convince them to say, I really want to go ahead and make some serious investments and changes because of this. And my leading question from here is this: I just said that scammers are using AI, voice cloning, automated phishing campaigns, synthetic identities. How did you think of AI in the middle of all this? You know, because it is a new technology, people are still figuring out where it's headed. But you've picked it up as the foundational infrastructure for protecting at a national scale.

So tell us about that.

Vikram Sinha

So let me put it this way. I'm a very strong believer in fake it till you make it. So I started talking about AI two years back, and I had very little understanding of what AI would do. I'm telling you a true story; many people still struggle with that today. But let me fast forward. I was invited by, I think, Sundar and the Google circle. There were 15 CEOs. This was around a year back. I was at a breakfast table. The joke started with someone saying that AI is everywhere other than the P&L. This is how the breakfast started. But within an hour, I understood that companies and countries that have been all in, ahead of the curve, and solving real problems have started seeing value.

So for me, it was about solving a real problem at scale. And that is where we said that if these scammers are using AI, and we have heard many stories about the way they clone voices; you would be so scared at what is happening, then we were very clear: we want a partner, we don't need a vendor. We want a partner who can work with us and use AI to solve this real problem. And I have to say, Sanjay, I think you are on their board, and Uday is here: we work with 96 vendors, and we categorize 20 of them as strategic partners. But there are four or five where I invest time, which become very strategic for us. Because I was trying to solve a real problem, I met Uday, and then our commitment was aligned. And that is how we wanted to make sure that not only do we solve it, we do it in a way that becomes a global case study.

Sanjay Kapoor

And with the AI-led model that you have put in place, what are the benefits that are accruing to you, at a customer level to begin with?

Vikram Sinha

Yes, because as I said, you know, there's a lot of AI as a toy. We are very clear about what we don't want to do. So now this is my first showcase: I put it in my quarterly results. And you have been a CEO, you have reported quarter after quarter; until and unless you have substance, you don't put an example on your investor deck. So if you look at my last quarterly investor deck, we have put it there. With the Tanla platform, three things I'll highlight. You know, in quarter four, ARPU grew 3% for the industry; we grew 9%. Number one. Number two, our churn. Because markets are mature, you know, you don't have to be over-obsessed with gross adds.

And you know, you have to deliver experience. Our churn for the serious base, greater than 90 days, has come down from a level of 3.6, 3.7 to 1.6. And this is just the beginning, Sanjay, you know, because the model is getting trained. And I'm very confident that we will see much more value going forward.

Sanjay Kapoor

And, you know, from here, being an ex-CEO and having been a board member for very many years, this question of ROI always haunts every board: you're making an investment in this, and it seems to be doing good for your customers. What about the ROI on what you've done?

Vikram Sinha

You know, this is a fair question. Investment in AI is not small. So until and unless you see the impact of AI on your P&L, it will not be scalable. Very clearly, within six to eight months, we have seen it, whether it is ARPU or churn. And the most important thing, Sanjay, is where we lost out. If I go back to my last two decades of experience, we as telcos were very inward-looking. The biggest thing we missed was focusing on customer love. I think this is very fundamental. This problem which I am solving is so, so fundamental: the role of a telco is not only to connect, it is also to give peace of mind.

Protection is a big statement. And the channels which are getting used are voice and WhatsApp. So you need to solve it for your customer. Otherwise, you have no business.

Sanjay Kapoor

I mean, I hear you, and you are a passionate CEO who believes in keeping his ears open, inside and outside the company. And I see you…

Vikram Sinha

I had no idea about Tanla or anything. In fact, the first time Uday came to meet me, I thought it was a startup. I'm again being very honest. But then somebody told me they are solving this for banks in India. I think we have to understand: if somebody is solving this problem for banks, that matters. Because, you know, spam is one thing, but the bigger issue is scam. And these scams that happen are small-ticket: 50 rupees, 100 rupees, 500 rupees, and it goes up to as high as $10,000. I'm just giving you examples. But then I realized that they have done some good work in India. But I have to say, Sanjay, you know, where it moved from vendor to strategic partnership was that my team wanted to do a bit of engineering with them.

So we have a full-stack AI factory. We have our own GPU cluster. I think there are a few things we have done before India: our cluster of GB200 and H100 was live. So I told Uday, let's train on the data, because see the power of compute and GPUs. You know, we all talk about TikTok. TikTok was all designed on GPUs; they don't even use CPUs. So you have to be ahead of the curve. Today on that platform, let me give you two data points. Close to 2 billion spam and scam instances have been protected against with threat intelligence, and 2.3 million scammers have been flagged, with customers getting protection in real time. As you know, we have grown up on the Airtel values. I spend five days every month in the market. I was in a village, on my way to the new capital of Indonesia, which is far-flung. I stopped my car, I saw an outlet, and I asked him, my language still is not good, in their language, what do you like about Indosat IM3? He said: it stops spam and scam. It's solving a real problem. So again, we just launched it on the WhatsApp channel also. I think WhatsApp is one of the channels that gets misused the most, so we have to continuously evolve. And this is where Tanla has committed that we will make sure we do it together and we do it properly.

Sanjay Kapoor

Excellent. You know, these fireside chats have a time limitation, so we have to keep it tight, and my stopwatch is telling me we've exceeded time already. So let me wind it up. Vikram, first and foremost, thank you for these insights. What stands out from our conversation today is how digital trust moves from concept to reality, which is what you've just described. When over 100 million subscribers are protected by AI, when billions of communications are analyzed in real time, and when millions of malicious actors are stopped within the ecosystem, trust is no longer a promise. It becomes infrastructure. And what Indosat has shown through AI for All is that inclusion and protection are not trade-offs.

They must advance together. So thank you for your insights, Vikram, and it is a pleasure having you today.

Vikram Sinha

Thank you, Sanjay. Thank you.

Wish Gurmukh Dev

Can I request both of you to just pose for a picture, please? Thank you very much, Sanjay. Thank you very much, Vikram. Wow. Two global leaders: one who defined the yesteryears of telcos across the world, and the other who's redefining and bending the arc to set the future of telecom by leveraging AI. Thank you so much, Vikram and Sanjay, for this scintillating talk. Thank you very much. Our next session, ladies and gentlemen, is going to be a panel discussion, moderated by Anshuman Kar, Chief Growth, my apologies, Chief Customer Success Officer of Tanla Platforms: AI for Citizen Protection and Securing the Digital Economy. May I request Anshuman to kindly come onto the podium, please. First of all, our panelists. Mr.

Ratan Kesh, Executive Director and Chief Operating Officer, Bandhan Bank. Bandhan Bank is one of the largest and fastest-growing banks in India, with over 32 million customers served across 35 states. He is leading multiple functions including technology, operations, customer experience and transformation. Second of our panelists: Mr. Bipin Preet Singh, founder and CEO of MobiKwik, a leading fintech entrepreneur at the forefront of India's digital payments evolution. Bipin Preet Singh has built MobiKwik into India's largest digital wallet, with over 180 million users. Please welcome Mr. Bipin Preet Singh. Okay, we'll go ahead with the third panelist while we wait for Bipin. Our third panelist: Ms. Neha Mahatme, Director, Amazon Pay India, a payments and fintech leader driving customer-centric digital financial experiences at scale, shaping how millions transact seamlessly and securely through the Amazon Pay India app. Please welcome Ms.

Neha Mahatme. Kavya, can I request you to just check with Bipin, please? Maybe, Anshuman? Yeah. Oh, Bipin is on his way. Over to Bipin, ladies and gentlemen, founder and CEO of MobiKwik, a leading fintech entrepreneur at the forefront of India's digital payments evolution. Thank you.

Anshuman Kar

Good evening, everyone. As we just heard in the fireside chat, the problem is big. We just heard numbers of over $1 trillion being lost in the global economy because of scams and frauds. If you think about India in particular, almost 70% of scams originate from SMS as a channel. And messaging itself has expanded into other OTT channels as well. I'll share some numbers: 65 billion SMSes are now sent monthly in India, and another 15 billion messages are sent monthly over OTT channels. So when you look at these numbers, it is clear that while the channel is important and critical, it is only proliferating further. So in that context, and I joined Tanla relatively recently compared to its three decades of history, as chief customer officer it has been a privilege to see the build and the deployment of the Wisely AI platform.

And it is an honor to have Vikram here as a CEO; there is nothing better than hearing that validation directly from the customer. And you heard the impact it is having on end users in terms of protecting them from scams and spam. In fact, the estimate is that within six months of launch, we have prevented almost $500 million in estimated losses. And then, as you think about where this takes us in the future, scams and scamsters are continuing to evolve. They are not sitting idle. So we have to stay a few steps ahead on the innovation curve; that becomes critical. They are becoming more sophisticated, more personalized, and probably also more successful at times.

So tonight, before I get into solutions, I want to focus on the problem. Is the problem really getting better, or is it getting worse, and why? We have a distinguished panel here, and they all provide very different vantage points on the industry. We have banking, which sees the transaction risk and, frankly, a lot of regulatory accountability as well. You have fintech, which sees a lot of velocity and scale, and is also obsessed with customer experience. And then you have platforms like Amazon Pay that have the commerce side and the payment side, so they see a lot of behavior signals across multiple parts of the platform. But from a citizen's perspective, for an average user, it is one seamless journey.

They don't experience these silos individually. So that's something we will delve into. And as part of that, we would love to deep-dive into how the key stakeholders in this ecosystem need to work together to thwart this menace in front of us. So with that, I want to welcome our distinguished panelists. Thank you for joining us for this discussion. So let me start off with you, Ratan. A recent Supreme Court judgment, a couple of weeks back, talked about, I think, 54 or 56,000 crores being lost to scams. In fact, they called it dacoity. I don't think I've heard that term lately; I think it was around Chambal. I used to watch movies when I had heard that term.

But it is of that magnitude and scale. So the question, Ratan, is: why is this still a problem? And what is really not working?

Ratan Kumar Kesh

Mostly senior citizens, and at times even IIT Bombay professors, so that's the spectrum you can look at; they are being defrauded. The second part is that a lot of customers are now able to open accounts across most banking companies, and those accounts are being utilized to siphon off funds stolen from somewhere and route them onward. So there are two parts of the problem, in a different sense, and the bigger trouble is the second one. The second one is being done willingly. In a country with a 1.5 billion population, there are a lot of people willing to open accounts across multiple banks. The India Stack makes it pretty simple to onboard an account very quickly, in just a few minutes, and then they go and rent out that account for a fee per month. The lure of making easy money is so high, and that's why it's so difficult; to me, that's what is not working. So at one end, we celebrate the India AI Summit; all the global leaders, heads of state, the big AI celebrities are coming over here.

And at the same time we are talking about our own countrymen defrauding poor senior citizens of the hard-earned money of their entire lives, which is being siphoned off. That is very, very sad to see. So I think what is not working is the mindset; it's not going to stop so soon. That's a bit of a philosophical response, but I'll come to the more technical response a little later. The second part is our customers getting defrauded through multiple means. That part, I think, has improved significantly, because most banks now have very sophisticated rule engines. Banks like ours process millions of transactions. As I was just saying about my UPI volume: we are just an 11-year-old bank.

My daily UPI volume is something like 60 lakh per day. Now, the volume and velocity of transactions are very, very high. But the good part is that, depending on the customer's profile and routine transaction pattern, we can identify an out-of-routine transaction. If a customer withdraws 10,000 rupees from a particular ATM when he or she generally withdraws from somewhere else, we can say it's a non-routine transaction. Someone who usually withdraws 10,000 suddenly withdraws 25,000: we know it's non-routine. Somebody who never makes a rent payment suddenly starts making rent payments through one of the payment channels: we say it's out-of-routine. Once you have out-of-routine transactions, we are able to identify those.
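The out-of-routine check Kesh describes, profile the customer and then flag deviations in location, amount, or channel, can be sketched in a few lines. This is purely illustrative, not Bandhan Bank's actual rule engine; the 2x amount threshold and the feature set are assumptions made for the example.

```python
# Illustrative sketch of "out-of-routine" detection (all thresholds assumed).
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    usual_atms: set = field(default_factory=set)     # ATMs used before
    amounts: list = field(default_factory=list)      # past withdrawal amounts
    seen_channels: set = field(default_factory=set)  # e.g. {"upi", "atm"}

def is_out_of_routine(profile, atm_id, amount, channel):
    """Return the list of signals that make this transaction non-routine."""
    reasons = []
    if profile.usual_atms and atm_id not in profile.usual_atms:
        reasons.append("unfamiliar ATM")
    if profile.amounts and amount > 2 * max(profile.amounts):
        reasons.append("amount spike")       # assumed 2x threshold
    if channel not in profile.seen_channels:
        reasons.append("new channel")        # e.g. a first-ever rent payment
    return reasons  # non-empty => route to enhanced due diligence

p = CustomerProfile(usual_atms={"ATM-17"}, amounts=[10_000, 8_000],
                    seen_channels={"atm"})
print(is_out_of_routine(p, "ATM-99", 25_000, "rent"))
# all three signals fire for this transaction
```

A real engine would of course use many more signals and learned thresholds; the point is only that the checks described above reduce to per-customer profile comparisons.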

Sometimes we prevent those transactions; sometimes we go back, do enhanced due diligence, and then allow them. So that part is working fine, and the tools are getting more and more mature. AI is helping us build the algorithms a lot better, so that part is clearly working. If you look at the numbers, even though velocity and volume have gone up, the percentage is coming down. But the mule part of it is very, very scary: customer accounts being rented out. You know, the other day, one of my employees from the fraud prevention team called up one of the fraudsters, saying, you know, this is a senior citizen, you called so many times, you tried to defraud them, why are you doing this?

His response was: can you tell me how much money you make a month? It must be 50,000, 70,000? I'll give you double; you start giving me data. The fraudster is telling my employee: can you share more data with me? Don't worry, I'll not tell anybody, just give me data, I'll give you 70,000; I can even pay you 1,50,000 per account. Now, there again, we are using a whole bunch of technology, including transaction monitoring algorithms, to prevent that. And Karix has developed a tool which we are implementing now, an anti-phishing tool, which has some very interesting capabilities. If some of you are interested, you could talk to the Karix folks.

I think it is quite interesting. It sits on the DLT platform. It scans a particular SMS: where it originated from, whether the links it provides for collecting data are genuine or fake. It looks at the keywords using AI algorithms and then uses those techniques to stop the SMS from being sent to the potential customer who could otherwise get defrauded. So these are some of the things which are working. Largely, that is the spectrum I would see, Anshuman. It is a long answer, but that is what it is.
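A toy version of the SMS-screening flow just described might look like the following. This is an assumption-laden sketch, not the actual Karix/DLT implementation; the registered headers, trusted domains, and keyword list are all invented for the example.

```python
# Toy illustration of header / link / keyword screening for an SMS.
# All registry contents below are made up for the example.
import re

REGISTERED_HEADERS = {"BDNBNK", "AXISBK"}   # assumed DLT-registered sender IDs
TRUSTED_DOMAINS = {"bandhanbank.com"}       # assumed bank-owned link domains
SCAM_KEYWORDS = {"kyc expired", "account blocked", "lottery", "urgent"}

def screen_sms(header: str, text: str) -> str:
    """Return 'block' or 'deliver' for an incoming SMS."""
    if header not in REGISTERED_HEADERS:
        return "block"                       # unregistered sender header
    for domain in re.findall(r"https?://([^/\s]+)", text):
        if domain.lower() not in TRUSTED_DOMAINS:
            return "block"                   # link points at an unknown domain
    lowered = text.lower()
    if any(k in lowered for k in SCAM_KEYWORDS):
        return "block"                       # phishing-style wording
    return "deliver"

print(screen_sms("BDNBNK", "Your KYC expired, click http://bdn-verify.ru"))
# blocked: untrusted link domain (and a scam keyword)
```

Production systems layer ML classifiers on top of such rules, but the three checks, registered origin, link reputation, and keyword signals, mirror the pipeline described above.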

Anshuman Kar

No, this is fantastic insight. In fact, my parents are so scared they do not even use ATM cards because of the risk of being defrauded. They wait for me to come and do recharges on the phone; otherwise they go to the stores to do it. This is becoming really scary, and vulnerable populations, like senior citizens, are particularly exposed. Let me turn it over to you, Neha. You sit at the intersection: I'm sure Amazon uses AI all over; you analyze behavior patterns across commerce and across payments. But why are we not able to stop scams across the whole journey?

Neha Gutma Mahatme

So I want to talk about three or four aspects of what we've learned about why this is difficult. First of all, I think a scam does not happen at the level of a payment transaction; it is a behavioral journey. It starts much before the payment really happens, and that's really where the fundamental issue is, and what I think we as an industry need to solve for. And if I belabor that point: the social engineering happens much earlier; it is not happening when the transaction or the payment occurs. The deepfakes, the voice clones, the fake identities, the layering of transactions: it's all making it very difficult to stop scams at the point of the transaction.

The second point is that data silos limit visibility. While at Amazon we have really good data internally on the platform, we miss the data on how these social engineering patterns are created outside of Amazon, and that really limits us. The third is that human psychology evolves faster than models. You can build models and refine algorithms, but you can't beat offensive AI, because AI is being used on both sides. It's not just that defensive AI is at work; there is also offensive AI. And the offensive AI works unconstrained, while the defensive AI has the constraints of privacy, of regulation, and of the customer experience benefits that you need to provide.

So I think that’s the third part and therefore the last part which is really the crux of the point is that AI is helping detect anomalies. It is not helping detect the malintent or the behavior and unless and until I think we solve the malintent or the behavior I think the scans, the

Anshuman Kar

That's a fantastic response. Basically, what you're saying is that no one institution sees the whole journey; each sees pieces of it, but many of the parts are interconnected. Let me bring Vipin into the conversation. Vipin, you see a lot of transaction velocity and scale, but you're also focused on customer experience, because protection can become a friction point if you go overboard at times. How does AI come in? How do you calibrate your models and AI to balance that potential friction, which can impact your growth and legitimate customers, against protecting them from fraud?

Bipin Preet Singh

Thank you. Thank you, first of all, Anshuman and Tanla, for inviting me here. It's a privilege. We are also customers of Tanla, happy customers. Thank you. When it comes to AI and its use against fraud, I want to give some perspective, first with respect to the kind of fraud. We operate in payments and financial systems, and what has happened in the last 10 years is that the financialization and digitalization of finance has happened at an exponential scale. So many different entities have gotten interconnected that, just as you were saying, a loophole in just one place is sufficient to create fraud and scams throughout the ecosystem. So one thing we have to be very clear about is that one entity cannot control scams.

It's very, very difficult. In our experience at MobiKwik, 99% of the scams that our customers complain about do not involve money stolen out of MobiKwik; the money is actually stolen out of some other bank and comes into MobiKwik. So we are the recipients, and we get the complaints that the money has come here and we need to take action, whether it is coming through UPI, through credit card fraud, and so on. Therefore, you know, the standards of education and of 2FA, second-factor authentication, need to be there, but perhaps they are not equally enforced. The awareness and the education are not equally enforced.

And that brings me to the second point, which is that scams have also become very, very sophisticated. In our company, and we are a fintech company, we employ so many smart people, yet there are people who have fallen prey to scams where they got a WhatsApp message from "me", with my photo, asking them to buy gift cards, and they bought them without trying to verify or check what the number was. With AI, it's becoming harder and more sophisticated. The modus operandi of the scamsters is becoming extremely smart, and they are getting very, very good at understanding the profile of the customer.

They don't target everyone. They have a very clear idea of who is likely to fall for a scam. So there is a need, and I think there is an effort going on at the RBI's end and the government's, to create an intelligence body which will share data across the entire payments ecosystem. I think it's called the Digital Payments Intelligence Platform or something like that. And that is extremely important, because until data sharing starts to happen at an India-level scale, you cannot identify patterns. I can identify a pattern which works for me, but the scamsters will get smarter, because they will keep changing their MO outside of MobiKwik.

So it's very, very difficult to keep adapting to that; a national-level initiative is required. The third thing I want to say is on the LEA front, law enforcement. Almost the entire country, all the police, everyone knows where the scams come from, and I am not able to understand why no action gets taken. It is the same places, the same origins, yet somehow no action gets taken. And I think until there is fear of the law, until enforcement happens, payments fraud will keep coming. At our end, what we are doing is creating our own in-house models.

As far as technology goes, with machine learning and now with AI, we have created our own models, trained on our own data sets, because they work best for our kind of transaction patterns. But they may not work best for the kinds of solutions other companies need. In fact, we have explored fraud and machine learning solutions from other companies, but they have very poor performance, because they are trained on some industry-level data that does not match the patterns we see. At our end, we are a tech company, we can adapt. But I cannot say the same for all the entities, at least in the financial domain.

And that's a big problem, because until there is a national initiative… And I think the regulator, especially the RBI, is very, very concerned about this. At a recent conclave I went to, we heard them saying: enough of making transactions easy. Now we need to go in the reverse direction, to make transactions a little difficult so that there is some friction, because otherwise people are losing money.
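Bipin's point, that a model fitted to your own transaction patterns beats one fitted to generic industry data, can be illustrated with a deliberately tiny baseline. This is a toy sketch, not MobiKwik's actual model; the one-sided z-score rule and the sample figures are assumptions made for the example.

```python
# Toy "train on your own data" baseline: fit mean/std of this platform's
# transaction amounts and flag unusually large ones. Figures are invented.
import statistics

def fit_baseline(amounts):
    """Fit a trivial model: (mean, sample stdev) of past amounts."""
    return statistics.mean(amounts), statistics.stdev(amounts)

def flag(amount, baseline, z=3.0):
    """Flag amounts far above this platform's norm (threshold assumed)."""
    mean, std = baseline
    return amount > mean + z * std

own_history = [120, 80, 150, 95, 110, 130, 70, 140]   # this platform's pattern
industry_history = [5000, 7000, 4000, 6500, 5500]     # a very different market

print(flag(600, fit_baseline(own_history)))       # True: outlier for us
print(flag(600, fit_baseline(industry_history)))  # False: unremarkable there
```

The same 600-unit transaction is a glaring anomaly against the platform's own history and invisible against industry-level data, which is exactly why models trained on foreign transaction patterns underperform.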

Anshuman Kar

Those are great points. As you said, there are silos of data, and you talked about training the models; in fact, when we went to Indosat as well, we had to go and train them, including on language nuances. So these things become critical for adapting. You mentioned the RBI initiative, and I'll direct this to you, Ratan: who ultimately owns the responsibility for protecting citizens at national scale? Can banks do it alone? Can the RBI do it alone? Or are you dependent on upstream and downstream signals, upstream signals like those you get from telcos, because a lot of these things originate in the channels?

How should responsibility be structured to protect people and to help them?

Ratan Kumar Kesh

I think Vipin spoke about that point: for a fraud to happen, we know that somebody would have gone to an e-commerce platform to make a payment, that payment comes through a payment channel, the account is held in Bandhan Bank, and the payment is made, let's say for a product or through some platform, to an Axis Bank credit card. And then the fraudster is actually sitting somewhere out there, pretty much somebody amongst us. Now I will give you a very funny story. I lived in Mumbai for 20-plus years; the local trains used to be full of pickpockets in those days. I had been living outside India and came back, and I was going to meet a friend of mine. That was my first trip on the local train, and my purse got stolen. He said, don't worry, how much money was in it? I said, that's okay, but I had credit cards; somehow we managed to block those cards. He says, let's go to the police station. I said, but I had my identity cards in there. He says, don't worry, the pickpockets in Mumbai are fairly ethical; you go and tell the police which local train, from where to where, and roughly when it must have happened. So I said, from Borivali to Andheri; it must have happened in between, on the 9:50 local. After two days, the police called me and handed over my identity cards. There was nothing else in it, but I got back the identity cards. So the police have the ability to really find out who the people are. And I find it quite baffling that this fraudster is out there somewhere, the telephone numbers, multiple of them, are issued by a telecom operator, the PAN cards were issued and used for account opening and video KYC, and yet we are not able to find out who these people are. Technology has to protect all of that, which, as all of us are saying, we are trying our best to do. And the bad part,
and the sad part is that whenever a fraud happens and a customer goes to the regulator, to the ombudsman, and says a 20 lakh fraud happened, it's: okay, which bank did it go from? It went from HDFC Bank. Where did it land first? It landed in Bandhan Bank. Okay, the two of you together, 10 lakhs each, pay it and be done with this. That's easy, isn't it? We are of course regulated entities; the regulator has no choice but to do whatever best they can, which we accept, and that's absolutely fine; we must have had some lacunae in our process. But the whole chain has to work together, including the citizens. Instead of being gullible, they need awareness of banking products; the banks, the payment companies and the country have to create more awareness; the police, cyber police and local police, have to work together; the Ministry of Home Affairs is working very hard to really make it happen. So it's an ecosystem problem, and if all of us come together, create more awareness and make it really ruthless for the fraudsters, that's probably the only way. Otherwise it's not easy.

Anshuman Kar

Thank you, thanks a lot. I am told time is up. I wanted to solicit a couple of questions from the audience as well, but in the interest of time I will just summarize this session. We have all talked about going end to end: not just identification using AI models, but prevention, elimination, and ultimately holding the scamsters accountable through law enforcement. And there is a big part there: the law and the enforcement of the law can sometimes mean two different things. Just one more point: in the name of hyper-personalization, consider the amount of data that gets collected. We have the ability to go back to Neha and say, you know what, you've been searching for a home.

I can tell you which is the right home for you. That’s great; Neha feels delighted because she can actually choose the right house. But the same data is getting misused to do other things. And as much as I can collect data as a bank or a real-estate company, the fraudster can also collect the same data. So, as you said, AI is on both sides, and the attackers are, if anything, more offensive, so it’s a question of who stays a few steps ahead of the other, right? What is striking from this discussion, hopefully, as I summarize, is that everyone is doing something. Oh, please. Please go ahead. We can take one or two questions quickly.

Sure, please. Can you please help the gentleman with the mic? This is not the best way.

Audience

So there should be some integrated approach. I think the Government of India is already working on that, and they have a Digital Payment Intelligence Platform. Is it not for that purpose, or are you referring to some other issue?

Anshuman Kar

Sorry, is that a question? I think the question is that there are already some government initiatives.

Audience

The initiative is already there. Is it not enough to have an integrated model for this fraud protection and related issues? Because, as you said, financial institutions each have their own trained models and are protecting their customers accordingly, but every bank is working with its own three or five months of data individually, not under one complete umbrella. The Government of India has initiated this through collaboration, and the RBI is working on MuleHunter through its innovation hub; everything is supposed to come into this Digital Payment Intelligence Platform. So, with this initiative, will the issue you referred to not be addressed?

Bipin Preet Singh

Yes, yes, absolutely. At least I feel there is strong potential, because for the first time data from across the financial ecosystem will come together in one place, and I think that is a big deal. Once that data comes together, hopefully the best people will work on it and understand patterns at a national scale, because the problem in digital is that everything is connected, so you have to study it in an integrated manner. And I am very…

Anshuman Kar

Thank you for that question, and we are obviously hoping that all of these show results. At the same time, we are talking about the national level, but scammers are not limited to national geographies; they are international as well, so the scope, the breadth, the surface area of the threats is only expanding, and we have to really look beyond. If I may add from my personal experience: in the world of AI, data is actually the differentiator, not so much the models, because the models are all public, and the willingness to share data, especially real-time data, is itself a potential barrier. This is where it is not just about banks and financial institutions; it is also potentially telecom, because they see a lot of the initial signals in terms of messages being sent and communication and so forth, as Vikram just talked about as well.

So let me summarize again this session.

Audience

Can I ask one question?

Anshuman Kar

May I request you to take it offline, please, because of the time constraint, if you don’t mind. Thank you. Thank you for cooperating, sir. It’s great to see; I’m sure there’s a lot of interest in this topic, and it shows the resonance of what we are discussing. So, again, to summarize: as you said, the attack surface is all interconnected, but our defense right now is fragmented, and therein lies the opportunity. The next frontier cannot be just smarter individual AI models; it has to be really coordinated intelligence. And that obviously has to happen in real time, across the ecosystem, and within the guidelines of a national-level trust architecture.

So with that I want to thank all the panelists, and all of you, for participating in this discussion and really contributing to shaping what the future looks like, because this is not just about trust; this is the foundation of the digital economy and the growth that underpins it. Thank you so much. Thank you very much to all the panelists, and Anshuman, may I request you all to stay on the stage for a quick photograph. Wow. From using technology for transaction monitoring and layering AI on it, to solving for behavioral intent and offensive AI, to solving for customer friction and customer experience by layering captive models on technology within regulatory tenets.

It was a very insightful and very meaningful panel. Thank you very much, each one of you. Our last session for the evening is one of the most interesting ones. It is what we do on the third element, or third pillar, of our DNA, that is, impact: the impact spotlight, wisely.ai, our client’s perspective. Very few leaders in India’s telecom landscape carry the depth of experience and the institutional weight that our next speaker brings to the stage. As Chairman and Managing Director of BSNL, he has orchestrated one of the sector’s most compelling turnarounds, driving the rollout of India’s first indigenous 4G network and restoring the organization to a clear path of growth, profitability and purpose.

With over three decades of service spanning TRAI, the Government of Tamil Nadu and an advisory role to the Government of Uganda, recognized with the Vishisht Sanchar Seva Medal for distinguished service, and a gold medalist in electronics and communication engineering, he remains one of the most consequential voices in India’s telecom and digital governance story. Ladies and gentlemen, please put your hands together for A. Robert J. Ravi, Chairman and Managing Director, BSNL.

A. Robert J. Ravi

…an important step we are thinking about; that’s what I was talking about. On the network side, can I bring in AI? By bringing AI into the network, I can even get patterns: customer patterns, calling patterns, network patterns. We were able to see exactly how and where most of the complaints were happening in the network, and this also helped me in tweaking my entire setup. Today I am very sure that at the end of the study and the research we are currently doing on AI, as a user, if you are a BSNL customer, you will be able to intelligently speak to your RAN. When I say intelligently speaking, it could mean various things. So you can have…

You can request specific dedicated data or dedicated voice traffic. That means: today I am in this place, I need to video stream, I need, if not 1 Gbps, then at least 10 Mbps of throughput available all the time, and it will be made possible. That is the type of user-enabled platform we are building, which gives control not only from the customer angle but also from the network angle. If this becomes successful, when this becomes reality, no customer in future could be so easily phished or scammed. That is the way we are trying to go. Of course, coming to what exactly happened on the last one, on the user impact we were able to say authoritatively how many connections were involved; close to 280 million spam messages have been identified today.
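As an editorial aside, the user-requested guaranteed-throughput idea described here amounts to per-subscriber admission control. Below is a minimal sketch of what such a request and grant could look like; all names, fields and thresholds are invented for illustration and this is not a BSNL API.

```python
# Hypothetical sketch of a user-initiated QoS request: a subscriber asks
# the network for a guaranteed-throughput slice for a fixed time window.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SliceRequest:
    subscriber_id: str
    service: str          # e.g. "video-stream", "voice"
    min_mbps: int         # guaranteed downlink throughput
    duration_min: int     # how long the guarantee should hold

def admit(request: SliceRequest, cell_spare_mbps: int) -> dict:
    """Toy admission control: grant the slice only if the serving cell
    has enough spare capacity to honour the guarantee."""
    if request.min_mbps <= 0 or request.duration_min <= 0:
        return {"granted": False, "reason": "invalid request"}
    if request.min_mbps > cell_spare_mbps:
        return {"granted": False, "reason": "insufficient capacity"}
    expires = datetime.now() + timedelta(minutes=request.duration_min)
    return {"granted": True, "expires": expires.isoformat(timespec="minutes")}

req = SliceRequest("user-42", "video-stream", min_mbps=10, duration_min=30)
print(admit(req, cell_spare_mbps=120)["granted"])  # True when capacity allows
```

The key design point the speaker alludes to is that the same admission check works from both angles: the customer expresses intent, and the network decides from its own view of capacity.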

This is on one side. Now we are also integrating this particular aspect into a customer experience platform. How do we benefit from it? In my customer experience today, we have something called the AI Vani system: a voice comes on and says that whatever you want, you can speak to the agent. Then we brought in something called the BSNL recharge expert system, which is completely AI-driven. Now, having stopped spam on the SMS side, the next thing we have to concentrate on is the data side, and again, we are speaking with all of you about how we do this for the data side.

Data is not only WhatsApp or social media; how can we expand the horizon of this particular area? That is where we thought: can I build intelligence into the system itself? So when you want to do, say, a recharge, it even works like a worm in the network, easily identifying the sites which need to be blocked and which should not be available to my customers. That sort of independent, intelligent network needs to be built, and that is what we are targeting. That is the second pillar. The last pillar, before I wind up, is the rural side. With BharatNet coming in in a very big way and the rollout of the BharatNet network, as we go closer and closer to the customer at the edge, we are seeing a lot of traffic coming in. Can I put in edge data centers, and using these edge data centers, can I really run what we call SLMs?
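The “intelligence that identifies sites to be blocked” can be pictured as a scoring pipeline over network-wide signals. A toy sketch follows; the keywords, weights, report counts and threshold are all invented for illustration and are not taken from any real deployment.

```python
# Illustrative sketch (not BSNL's system): score sites as blocking
# candidates from two cheap signals, abuse reports aggregated across
# the network and lexical hints in the domain name.
SUSPICIOUS_TOKENS = ("login-", "verify", "recharge-bonus", "kyc-update")

def risk_score(domain: str, abuse_reports: int) -> float:
    """Combine report volume with lexical hints into a 0..1 risk score."""
    lexical = sum(tok in domain for tok in SUSPICIOUS_TOKENS) / len(SUSPICIOUS_TOKENS)
    reports = min(abuse_reports / 100, 1.0)   # saturate at 100 reports
    return 0.6 * reports + 0.4 * lexical

def blocklist(candidates: dict[str, int], threshold: float = 0.5) -> list[str]:
    """Return domains whose combined risk crosses the blocking threshold."""
    return sorted(d for d, n in candidates.items() if risk_score(d, n) >= threshold)

seen = {
    "bank-kyc-update-offer.example": 80,
    "news.example": 2,
    "verify-recharge-bonus.example": 60,
}
print(blocklist(seen))  # flags the two phishing-style domains, not the news site
```

A production system would of course use far richer signals (hosting reputation, certificate age, reported fraud chains), but the shape is the same: aggregate across the whole network, score, then block.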

Today we talk about different LLM models. These LLM models require a lot of information; data is the key engine for them. And we are all hesitant: why should I share my data? So the next concept we bring in is what we call federated learning: your data resides with you; I just learn from your data and federate over it. All of this is possible when I go to the rural edge. There I will be able to protect the customer to the next level. I am sure we could keep talking on this very interesting topic, but since time is short, I thank the organizers for giving me the opportunity.
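The federated learning idea sketched here, where data stays local and only model updates travel, can be illustrated with a minimal federated-averaging loop. The model, data and hyperparameters below are invented for the example; real deployments would use a framework with secure aggregation rather than this toy.

```python
# Minimal sketch of federated averaging (FedAvg): each "edge site" trains
# on its own private data; only model parameters, never raw data, are shared.

def local_train(w, b, data, lr=0.01, epochs=50):
    """Fit y = w*x + b on one client's private data via gradient descent."""
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y
            gw += err * x
            gb += err
        n = len(data)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def federated_round(w, b, clients):
    """One FedAvg round: average locally trained parameters, weighted by data size."""
    total = sum(len(d) for d in clients)
    updates = [(local_train(w, b, d), len(d)) for d in clients]
    w = sum(wi * n for (wi, _), n in updates) / total
    b = sum(bi * n for (_, bi), n in updates) / total
    return w, b

# Three edge sites, each holding its own samples of the same trend y = 2x + 1
clients = [
    [(0, 1), (1, 3), (2, 5)],
    [(3, 7), (4, 9)],
    [(5, 11), (6, 13), (7, 15)],
]
w, b = 0.0, 0.0
for _ in range(200):
    w, b = federated_round(w, b, clients)
print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

The point of the sketch is the data flow, not the model: each site’s raw samples never leave it, yet the averaged parameters converge to the global trend, which is exactly the “your data resides with you; I just learn and federate over it” property described above.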

But my request to you all: there is still a lot of work to be done. Unless we have built a system where we can confidently say to our citizens that you are 100% safe in my network, our job is not done. And this is possible only when we bring in technology and play it across a platform that really, intelligently builds this network. Thank you.

Wish Gurmukh Dev

Thank you. It’s been a wonderful evening, absolutely thrilling to have two CEOs exchange and share with the audience real-life problems and how they converted them into an opportunity that is going to shape the future of telecom in one part of the world, followed by a panel. Thank you, Anshuman, and thank you once again to all the panelists who made the effort to come in and share their own perspectives on what could be changed structurally, from a regulatory perspective and an ecosystem-collaboration perspective through to customer experience without friction. And lastly, dear CMD Mr. Robert Ravi, thank you for sharing the deep collaboration that BSNL and Tanla Platforms have entered into, trying to set a lighthouse for what a customer experience driving safe and secure customer transactions could really be. Thank you very much; it’s been a true honor and a privilege to host everyone here. On behalf of Tanla Platforms and our group companies Carex and Value First, thank you all very much for being here.

Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Wish Gurmukh Dev thanked the audience and welcomed them on behalf of Tanla Platforms and its group companies Carex and Value First.”

The knowledge base lists Wish Gurmukh Dev as the host/MC representing Tanla Platforms and its group companies Carex and Value First, confirming the report’s statement. [S1]

Additional Context (medium)

“Sanjay noted that more than five billion people are online and that digitisation is occurring at unprecedented speed.”

External data shows internet users have grown to about 5.4 billion globally, illustrating the scale mentioned and providing supporting context. [S110]

Correction (high)

“He highlighted the addition of nearly two billion new internet users in South and Southeast Asia.”

The knowledge base reports only about 40 million new digital users in Southeast Asia in 2020, far less than the “nearly two billion” figure cited, indicating the claim is likely overstated. [S111]

Additional Context (medium)

“The scale of digital activity brings rising cyber‑crime, digital fraud and organised scam operations that cost billions each year.”

Estimates of global cyber-damage range from $2.3 trillion to $10.5 trillion by 2025, underscoring the magnitude of financial losses referenced. [S117]

External Sources (117)
S1
Secure Talk Using AI to Protect Global Communications & Privacy — -Wish Gurmukh Dev- Host/MC representing Tanla Platforms and group companies (Carex and Value First)
S2
29, filed Jan. 22, 2010, at 9-10. — (last visited March 3, 2010) (‘Net Literacy’s programs are independently beginning to be developed by students from New …
S3
WSIS+20 Open Consultation session with Co-Facilitators — – **Jennifer Chung** – (Role/affiliation not clearly specified)
S4
Secure Talk Using AI to Protect Global Communications & Privacy — -A. Robert J. Ravi- Chairman and Managing Director of BSNL, telecom leader with over three decades of service, gold meda…
S5
Secure Talk Using AI to Protect Global Communications & Privacy — The main fireside chat featured Vikram Sinha, CEO of Indosat Ooredoo Hutchison, who shared how his company transformed f…
S6
Secure Talk Using AI to Protect Global Communications & Privacy — -Ratan Kumar Kesh- Executive Director and Chief Operating Officer of Bandhan Bank, leading technology, operations, custo…
S7
Secure Talk Using AI to Protect Global Communications & Privacy — – Sanjay Kapoor- Vikram Sinha- Ratan Kumar Kesh- Anshuman Kar – Vikram Sinha- Anshuman Kar
S8
Opening Remarks (50th IFDT) — – Moderator: No specific role or title mentioned
S9
Secure Talk Using AI to Protect Global Communications & Privacy — – Bipin Preet Singh- Ratan Kumar Kesh- Neha Gutma Mahatme – Ratan Kumar Kesh- Neha Gutma Mahatme
S10
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S11
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S12
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S13
Secure Talk Using AI to Protect Global Communications & Privacy — -Sanjay Kapoor- Host for fireside chat, distinguished global telecom leader, former CEO of Bharti Airtel, board member a…
S14
https://dig.watch/event/india-ai-impact-summit-2026/secure-talk-using-ai-to-protect-global-communications-privacy — First of all, again, it’s an absolute honor and privilege, especially having it with Sanjay. You know, I have a long lea…
S15
Secure Talk Using AI to Protect Global Communications & Privacy — – Neha Gutma Mahatme- Bipin Preet Singh – Ratan Kumar Kesh- Bipin Preet Singh
S16
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — All kinds of fantastic applications already that we’re seeing right across the economy. We’re using increasingly agentic…
S17
Google and Microsoft launch separate Artificial Intelligence (AI) platforms for cybersecurity — Google Cloud has launched Security AI Workbench, an AI-driven security platform that combines several of the company’s e…
S18
Omnipresent Smart Wireless: Deploying Future Networks at Scale — The collection of large amounts of data for citizen services raised questions about how this information, particularly p…
S19
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — ### Cross-Industry Collaboration The discussion highlighted successful cross-industry collaboration examples, including…
S20
WS #148 Making the Internet greener and more sustainable — Nathalia emphasizes the importance of collaboration between different stakeholders to achieve a greener Internet. She su…
S21
Transforming Health Systems with AI From Lab to Last Mile — Data privacy, security and ethical safeguards Federated learning allows models to be trained on locally stored patient …
S22
AI for Good Technology That Empowers People — “So to make it even faster and achieve the sub 10 milliseconds, you actually have to bring in inference and training to …
S23
Published by DiploFoundation (2011) — Keywords: data protection regulation; call centres; adequate country; first job; e-commerce; Paraguay This framework wi…
S24
Enhancing Digital Resilience: Cybersecurity, Data Protection, and Online Safety — 3. Developing strategies to effectively reach and educate rural populations Audience: Thank you very much. I think my q…
S25
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Absolutely, Ankit, just trying to, this is something which I know two years back when we said that I’m putting 8000 GPUs…
S26
WS #208 Democratising Access to AI with Open Source LLMs — Daniele Turra: Yeah, I’ll try to be very brief. So one key difference that we can see in open LLMs when it comes to t…
S27
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — ask I don’t think there is any country in the world whose government has given its citizens… In India’s context. Yes, …
S28
The AI gold rush where the miners are broke — The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economi…
S29
Panel Discussion AI in Healthcare India AI Impact Summit — Thank you for the question and for the invitation. So, you know, as you said, Switzerland and India, when you look at th…
S30
Living in an Unruly World: The Challenges We Face — Every year, large numbers of young Africans press into the labour market. If they can be provided with jobs Africa’s GDP…
S31
Seismic Shift — 1. International Monetary Fund, ‘India’s Economy to Rebound as Pandemic Prompts Reforms’, November 11, 2021, https://www…
S32
Secure Finance Risk-Based AI Policy for the Banking Sector — And these systems are fueled by vast data sets drawn from public and proprietary sources. On this foundation operate lar…
S33
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S34
AI Meets Cybersecurity Trust Governance & Global Security — And that’s alarming because what’s going to happen in that context is it will focus on enterprises first. It will focus …
S35
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Anastasiya Kozakova:Thank you very much. It’s a pleasure to be here. I represent the civil society organization. I work …
S36
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — And now the next step is working with the hyperscaler is how do we commercialize these outside Saudi Aramco to the marke…
S37
From KW to GW Scaling the Infrastructure of the Global AI Economy — These investments must be monetised quickly to achieve acceptable returns, driving the need for compressed deployment ti…
S38
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — First, trust. It’s trust. Trustability. Trustability because we need to trace the systems, the models, the data that we …
S39
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Multi-stakeholder Collaboration and Data Sharing**: Panelists emphasized that effective fraud prevention requires un…
S40
The State of Digital Fragmentation (Digital Policy Alert) — In terms of data governance, the analysis emphasises the need for dialogue and finding common ground for global data gov…
S41
Overview of AI policy in 10 jurisdictions — Summary: Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspire…
S42
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Real-world implementations are already emerging. ByteDance has introduced an AI-first smartphone in China that eliminate…
S43
Digital Ecosystems and Competition Law: Ecological Approach (HSE University) — Another important aspect emphasized in the provided information is the need for collaboration between different authorit…
S44
Digital democracy and future realities | IGF 2023 WS #476 — Finally, the analysis advises policymakers to be mindful of the diversity of the internet ecosystem. It suggests that po…
S45
Building inclusive global digital governance (CIGI) — The absence of concrete and positive implementation of data governance frameworks also hinders effective regulation and …
S46
Secure Talk Using AI to Protect Global Communications & Privacy — A recurring theme throughout the event was balancing security measures with customer experience. Traditional approaches …
S47
Consumer protection — One notable area where AI excels isfraud detection. By relying on advanced algorithms, AI can swiftly analyse patterns, …
S48
Employing AI for consumer grievance redressal mechanisms in e-commerce (CUTS) — Artificial Intelligence (AI) has emerged as a powerful tool with the potential to revolutionise consumer governance. It …
S49
Embracing the future of e-commerce and AI now (WEF) — In conclusion, this analysis highlights the significant role that AI can play in enhancing logistics, e-commerce, and re…
S50
The State of Digital Fragmentation (Digital Policy Alert) — In terms of data governance, the analysis emphasises the need for dialogue and finding common ground for global data gov…
S51
Operationalizing data free flow with trust | IGF 2023 WS #197 — In conclusion, the analysis sheds light on various aspects related to the movement of data, privacy regulations, regulat…
S52
Comprehensive Report: World Economic Forum Panel Discussion on Cybersecurity Resilience — Current identity solutions vary widely – India has done well, Estonia has good systems, but the US still relies on local…
S53
Main Topic 3 –  Identification of AI generated content — Dr Laurens Naudts, from the AI Media and Democracy Lab at the University of Amsterdam, provided a legal perspective, dis…
S54
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Noushin Shabab:Okay, thanks, Jenny. I’m not sure if the slides, okay, great. So as my colleague perfectly stated and mos…
S55
UNSC meeting: Artificial intelligence, peace and security — Albania:Thank you, Madam President, for convening this important meeting and for bringing this issue to the Security Cou…
S56
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Julian Gorman from GSMA emphasized that combating scams requires cross-sector collaboration, noting that scammers operat…
S57
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — Johannes argues that effective fraud prevention requires assembling a ‘powerhouse of fraud fighters’ who approach the pr…
S58
Inside Visa’s war room: How AI battles $15 trillion in threats — In Virginia’s Data Centre Alley, Visaoperates a high-security fraud command centreto protect $15 trillion in annual tran…
S59
Google’s fight against AI scammers — Google initiatedlegal action againsttwo distinct groups of scammers exploiting the company’s platforms and users. The fi…
S60
FBI warns of AI-driven fraud — The FBI hasraisedalarms about the growing use of artificial intelligence in scams, particularly through deepfake technol…
S61
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — The conclusion drawn from the discussion is that there is an urgent need for greater attention and inclusivity in the de…
S62
Day 0 Event #248 No One Left Behind Digital Inclusion As a Human Right in the Global Digital Age — Brynteson’s research identifies digital exclusion as multidimensional and context-specific, often affecting overlapping …
S63
Digital Transformation for all: An Information Society that respects and protects human rights — – **Women**: Janina specifically mentioned women among vulnerable categories needing special focus The discussion repea…
S64
Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477 — The analysis also emphasizes the significance of including vulnerable populations in policy considerations. Often, vulne…
S65
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — – **Cross-Industry Collaboration and Stakeholder Engagement**: The conversation extensively covered the importance of br…
S66
Workshop 1: AI &amp; non-discrimination in digital spaces: from prevention to redress — Establish cross-sector collaboration between different types of regulators rather than siloed approaches
S67
WS #479 Gender Mainstreaming in Digital Connectivity Strategies — Different sector regulators often operate in silos with limited coordination with ministries responsible for gender, edu…
S68
AI That Empowers Safety Growth and Social Inclusion in Action — Collaborative approach between governments, industry, academia and civil society rather than siloed regulatory or self-r…
S69
Secure Talk Using AI to Protect Global Communications & Privacy — “Both markets are facing rising cybercrime, digital fraud, and organized scam operations, causing billions of dollars wo…
S70
Deepfake and AI fraud surges despite stable identity-fraud rates — According to the 2025 Identity Fraud Report by verification firm Sumsub, the global rate of identity fraud has declined …
S71
AI reshapes eCommerce tasks and security — AI is set to redefineretailin 2025, offering highly personalised shopping experiences.AI assistantsare expected to manag…
S72
AI takes over eCommerce tasks as Visa and Mastercard adapt — Visa and Mastercard haveannounced major AI initiativesthat could reshape the future of e-commerce, marking a significant…
S73
Lakera secures $20M for AI protection, Gandalf helps track threats — Leaders of Fortune 500 companiesdevelopingAI applications face a potential nightmare: hackers tricking AI into revealing…
S74
https://dig.watch/event/india-ai-impact-summit-2026/secure-talk-using-ai-to-protect-global-communications-privacy — Thank you everyone. Thank you very much. Thank you. Once again, ladies and gentlemen, a very good evening and welcome to…
S75
From KW to GW Scaling the Infrastructure of the Global AI Economy — Prefabricated systems and reference designs are essential for scaling at speed while addressing skill development challe…
S76
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — And now the next step is working with the hyperscaler is how do we commercialize these outside Saudi Aramco to the marke…
S77
Leveraging AI4All_ Pathways to Inclusion — Despite significant progress, several challenges remain unresolved. The fundamental scaling problem persists across sect…
S78
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And it’s very useful. It’s used to benchmark applications and performance on quantum computers and using AI techniques a…
S79
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Multi-stakeholder Collaboration and Data Sharing**: Panelists emphasized that effective fraud prevention requires un…
S80
Comprehensive Report: World Economic Forum Panel Discussion on Cybersecurity Resilience — Miebach argues that improving identity verification systems globally is a critical investment that both private and publ…
S81
Building Inclusive Societies with AI — Strengthen National Rural Livelihood Mission for better worker aggregation and quality improvement
S82
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Real-world implementations are already emerging. ByteDance has introduced an AI-first smartphone in China that eliminate…
S83
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued)/ part 2 — Mozambique: Thank you, Mr. Chair, for giving us the floor. At the outset, allow me to express our profound appreciation …
S84
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S85
Opening remarks — At the outset of the event, the speaker extends a warm welcome to attendees, expressing delight at seeing both veteran p…
S86
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S87
[Opening] IGF Parliamentary Track: Welcome and Introduction — The tone is consistently formal, welcoming, and optimistic throughout. It maintains a diplomatic and collaborative atmos…
S88
Launch / Award Event #52 Intelligent Society Development & Governance Research — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm abo…
S89
WS #70 Combating Sexual Deepfakes Safeguarding Teens Globally — The discussion maintained a serious, urgent, and collaborative tone throughout. Speakers demonstrated deep concern about…
S90
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S91
WS #6 Bridging Digital Gaps in Agriculture & trade Transformation — The tone was largely optimistic and solution-oriented. Speakers were enthusiastic about the potential of the Internet Ba…
S92
AI, Data Governance, and Innovation for Development — The overall tone was optimistic and solution-oriented, with speakers focusing on practical ways to overcome obstacles th…
S93
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S94
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S95
Global Perspectives on Openness and Trust in AI — The discussion maintained a thoughtful, critical, and collaborative tone throughout. While panelists raised serious conc…
S96
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S97
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S98
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — The tone is consistently optimistic, visionary, and inspirational throughout. The speaker maintains an enthusiastic and …
S99
Keynote-N Chandrasekaran — The tone is consistently optimistic, ambitious, and forward-looking throughout. The speaker maintains an enthusiastic an…
S100
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S101
AI in education: Leveraging technology for human potential — The tone is consistently optimistic and inspirational throughout, with Mills maintaining an enthusiastic and visionary a…
S102
Closing Ceremony and Orientation for WAIGF 2025 — Audience: Good evening everyone. I am Abdul Idris, a Nigerian. I’m a program analyst from National Assembly Service. Tha…
S103
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Garg also referenced observations about the contrast between current AI systems requiring gigawatts of power and hum…
S104
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Nobuhisa Nishigata: …
S105
Radio and TV broadcasting: Diplomacy going live — Franklin D. Roosevelt introduced the so-called ‘fireside chats’, i.e. radio talks addressing the problems and successes o…
S106
Comprehensive Report: President Trump’s Address to the World Economic Forum in Davos — The session began with opening remarks by Laurence D. Fink, who provided framing around making capitalism more inclusive…
S107
Invest India Fireside Chat — And I made this statement for India. India, AI is pivotal to drive economic productivity, military power, and informatio…
S108
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — -Aman Khanna: Vice President of the Asia Group (mentioned as moderator for upcoming fireside chat session) -Nitin Bajaj…
S109
MALAYSIA DIGITAL ECONOMY BLUEPRINT — The immense speed and reach of digitalisation in recent years are unprecedented. The size of the digital economy in 2017…
S110
Leaders TalkX: Securing the Digital Realm: Collaborative Strategies for Trust and Resilience — Preetam Maloor from the ITU presented a sobering comparison between the digital landscape in 2005 and 2024. He pointed o…
S111
40 million new digital users in Southeast Asia in 2020 — A recent report published by Google, Temasek Holdings and Bain & Company found that an estimated 40 million people from …
S112
Digital inclusivity – Connecting the next billion — Dr. Bhanu Neupane from UNESCO discussed the organisation’s initiatives to preserve linguistic and cultural diversity onl…
S113
Open Forum #13 Bridging the Digital Divide Focus on the Global South — Dr. Chern Choong Thum from Malaysia’s Ministry of Communications provided a public health perspective, stating that “dig…
S114
The Government’s AI dilemma: how to maximize rewards while minimizing risks? — This system has notably reduced the digital divide and provided benefits to economically weaker sections, including rura…
S115
#205 L&amp;A Launch of the Global CyberPeace index — Suresh Yadav: Thank you, Vinit. I hope you can hear me, Vinit, if you can. Loud and clear, we can hear you. Thank you ve…
S116
India’s digital economy expected to contribute over 20 percent to GDP — EXCERPT :During the two-day G20-Digital Innovation Alliance summit in Bengaluru, Union Minister of State for Electronics…
S117
Pathways to De-escalation — Damage estimates of $2.3 trillion increasing to $10.5 trillion by 2025, representing 8-9% of global GDP. John Defterios …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Wish Gurmukh Dev
2 arguments · 82 words per minute · 1069 words · 773 seconds
Argument 1
Wisely.ai platform delivering real‑time protection in multiple markets
EXPLANATION
Wish introduced Wisely.ai as Tanla’s agentic AI platform that is already live and protecting users in real time across several operators and banks. The platform aims to identify, prevent and eliminate spam and scam at scale.
EVIDENCE
Wish stated that Wisely.ai is live and delivering real impact at Indosat in Indonesia, at BSNL in India, and with leading banks in India, safeguarding millions of users in real time every single day [9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk notes that Wisely.ai is live and delivering real-time protection for Indosat, BSNL and banks [S1].
MAJOR DISCUSSION POINT
AI‑driven anti‑fraud platform deployment
AGREED WITH
Vikram Sinha, Ratan Kumar Kesh, Anshuman Kar, Neha Gutma Mahatme
Argument 2
Cross‑industry collaboration emphasized as essential
EXPLANATION
Wish highlighted that Tanla’s success rests on close partnership with customers, regulators, telco partners and the broader ecosystem, stressing that collaboration across sectors is vital to combat fraud. He later reiterated the need for coordinated effort among industry players.
EVIDENCE
Wish said that collaboration is among Tanla’s core principles, noting work with customers, the regulatory ecosystem, telco partners and the broader ecosystem to stay ahead of the curve [6]. He later reiterated that cross-industry collaboration is essential during the transition to the panel session [153].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
WEF Business Engagement Session highlights cross-industry collaboration as key, and WS #148 stresses multi-stakeholder cooperation [S19][S20].
MAJOR DISCUSSION POINT
Ecosystem partnership importance
AGREED WITH
Anshuman Kar, Ratan Kumar Kesh, Bipin Preet Singh, Audience
A. Robert J. Ravi
3 arguments · 129 words per minute · 757 words · 349 seconds
Argument 1
AI‑driven network services (AI Vani, recharge expert) improve security
EXPLANATION
Ravi described AI‑powered services such as the AI Vani voice assistant and a recharge expert system that enhance customer experience while providing security features. These tools enable intelligent routing and protection against spam and scams within the network.
EVIDENCE
Ravi explained that the AI Vani system allows users to speak to a specific agent and that the BSNL recharge expert system is a complete AI-driven solution, both contributing to security and user experience [382-386].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk records Ravi describing the AI Vani voice assistant and a recharge-expert system that enhance security [S1].
MAJOR DISCUSSION POINT
AI integration into network services
Argument 2
Federated learning keeps user data local while training models
EXPLANATION
Ravi introduced federated learning as a technique where user data remains on the device while the model learns from aggregated insights, preserving privacy while improving AI capabilities. This approach is especially relevant for rural deployments.
EVIDENCE
Ravi described federated learning as a method where data resides with the user, the system learns from it, and the model is federated over the data without moving it centrally [392-395].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Transforming Health Systems with AI discusses federated learning for privacy-preserving model training [S21], and AI for Good notes it as an enabler for edge AI [S22].
MAJOR DISCUSSION POINT
Privacy‑preserving AI training
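The federated-learning approach Ravi outlines can be sketched as a minimal federated-averaging (FedAvg) loop: each device trains on its own data and only model weights, never raw samples, are sent for aggregation. Everything below (the scalar linear model, synthetic client data, learning rate) is an illustrative assumption, not the actual deployed system.

```python
import random

def local_update(w, data, lr=0.05, epochs=5):
    """One device trains on its own data; raw samples never leave it."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x   # SGD step on squared error
    return w

def federated_round(w, clients):
    """FedAvg: average the locally trained weights, weighted by data size."""
    updates = [(local_update(w, d), len(d)) for d in clients]
    total = sum(n for _, n in updates)
    return sum(wi * n for wi, n in updates) / total

# Synthetic data on three "devices" -- it stays local in a real deployment.
random.seed(0)
TRUE_W = 3.0
clients = [[(x, TRUE_W * x + random.gauss(0, 0.1))
            for x in (random.uniform(-1, 1) for _ in range(40))]
           for _ in range(3)]

w = 0.0
for _ in range(30):          # communication rounds
    w = federated_round(w, clients)
print(round(w, 2))           # converges near 3.0
```

The central server only ever sees weight values, which is what makes the technique attractive where data-residency or privacy rules prevent pooling raw records.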
Argument 3
Edge data centres and LLMs for rural protection
EXPLANATION
Ravi highlighted the use of edge data centres combined with large language models (LLMs) to deliver AI services in rural areas, enabling low‑latency protection and personalized experiences for underserved users. This strategy aims to bridge the digital divide while enhancing security.
EVIDENCE
Ravi mentioned deploying edge data centres and leveraging LLMs to protect customers in rural regions, noting the need for local processing and AI capabilities at the edge [387-391].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI for Good mentions bringing inference to the edge for sub-10 ms latency, supporting edge data-centre use [S22], and Enhancing Digital Resilience outlines strategies for rural outreach [S24].
MAJOR DISCUSSION POINT
AI infrastructure for rural inclusion
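The edge-deployment idea can be illustrated with a toy routing rule that prefers the lowest-latency edge site within a budget and falls back to a central data centre otherwise; all site names and latency figures below are hypothetical.

```python
EDGE_SITES = {            # site -> one-way latency in ms (illustrative)
    "edge-lucknow": 8,
    "edge-patna": 12,
    "central-mumbai": 45,
}

def pick_site(max_edge_latency_ms=20):
    """Prefer the fastest edge site under the latency budget;
    otherwise fall back to the central data centre."""
    edges = {s: l for s, l in EDGE_SITES.items()
             if s.startswith("edge-") and l <= max_edge_latency_ms}
    if edges:
        return min(edges, key=edges.get)
    return "central-mumbai"

print(pick_site())        # 'edge-lucknow'
print(pick_site(5))       # no edge within budget -> 'central-mumbai'
```

In practice the latency table would be measured continuously, but the trade-off is the same: serve inference close to rural users when possible, centralise only when necessary.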
Vikram Sinha
8 arguments · 145 words per minute · 1305 words · 537 seconds
Argument 1
$5 bn loss & 65 % weekly spam exposure in Indonesia
EXPLANATION
Vikram shared alarming statistics from a 2024 Global Anti‑Scam Association report, indicating that Indonesians lost $5 billion to scams and that 65 % of the population faces spam or scam weekly. These figures motivated Indosat to prioritize anti‑fraud measures.
EVIDENCE
He cited a report showing $5 billion lost by Indonesians in 2024 [47] and that 65 % of Indonesians experience spam or scam on a weekly basis [50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk cites the $5 bn scam loss and 65 % weekly spam exposure in Indonesia [S1].
MAJOR DISCUSSION POINT
Scale of fraud in Indonesia
Argument 2
Indosat‑Tanla AI model reduces churn and lifts ARPU
EXPLANATION
Vikram reported that after deploying the AI model with Tanla, Indosat saw its average revenue per user (ARPU) grow 9 % versus a 3 % industry average, and churn fell dramatically, demonstrating the commercial benefit of AI‑driven fraud protection.
EVIDENCE
He highlighted that ARPU grew 9 % while the industry grew 3 % and churn dropped from 3.6-3.7 % to 1.6 % after the AI rollout [87-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk reports ARPU growth of 9 % versus 3 % industry and churn dropping to 1.6 % after AI rollout [S1].
MAJOR DISCUSSION POINT
Business impact of AI anti‑fraud
AGREED WITH
Sanjay Kapoor, Anshuman Kar
Argument 3
Full‑stack AI factory with GPU clusters for model training
EXPLANATION
Vikram explained that Indosat has built a complete AI infrastructure, including its own GPU cluster (featuring H100 GPUs) to train large models, enabling rapid development and deployment of anti‑spam/ scam solutions.
EVIDENCE
He described a full-stack AI factory, a GPU cluster with GB200 and H100 GPUs, and the importance of compute power for training data [122-128].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes a full-stack AI factory and GPU cluster in the Indosat-Tanla partnership [S14], and Sovereign AI for India describes large GPU deployments supporting such infrastructure [S25].
MAJOR DISCUSSION POINT
Technical foundation for AI
Argument 4
ARPU grew 9 % vs industry 3 % after AI rollout
EXPLANATION
Vikram reiterated the revenue uplift, noting that the AI‑enabled service helped Indosat outperform the broader market, reinforcing the ROI narrative.
EVIDENCE
Quarter-four ARPU grew 9 % compared with a 3 % industry increase, as shown in his investor deck [87-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk reports ARPU growth of 9 % versus 3 % industry after the AI rollout [S1].
MAJOR DISCUSSION POINT
Revenue uplift from AI
Argument 5
Churn fell from 3.6 % to 1.6 %
EXPLANATION
He pointed out that churn among customers with more than 90 days of tenure dropped from roughly 3.6‑3.7 % to 1.6 % after AI implementation, indicating higher customer satisfaction and retention.
EVIDENCE
Vikram noted churn reduction from 3.6-3.7 % to 1.6 % following the AI model deployment [91-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk records churn dropping from 3.6-3.7 % to 1.6 % following the AI deployment [S1].
MAJOR DISCUSSION POINT
Retention improvement
Argument 6
ROI visible within 6‑8 months of deployment
EXPLANATION
Vikram stated that measurable financial benefits, such as ARPU growth and churn reduction, became evident within six to eight months of launching the AI solution, confirming a rapid return on investment.
EVIDENCE
He said that within six to eight months they observed impact on ARPU and churn, confirming ROI [98-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk states that measurable financial benefits, including ARPU uplift and churn reduction, were evident within six to eight months of launch [S1].
MAJOR DISCUSSION POINT
Speed of ROI realization
AGREED WITH
Sanjay Kapoor, Anshuman Kar
Argument 7
Strategic partnership with Tanla, not just a vendor
EXPLANATION
Vikram emphasized that Indosat sought a strategic partnership with Tanla, focusing on joint problem‑solving rather than a simple vendor relationship, to co‑create AI solutions for fraud mitigation.
EVIDENCE
He explained that Indosat wanted a partner, not a vendor, and identified Tanla as a strategic partner after meeting Uday, aligning commitments for a global case study [80-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk emphasizes that Indosat sought a strategic partnership with Tanla rather than a simple vendor relationship [S1].
MAJOR DISCUSSION POINT
Nature of collaboration
Argument 8
Churn reduction indicates improved experience alongside security
EXPLANATION
Vikram linked the drop in churn to both enhanced security and a smoother customer experience, suggesting that AI‑driven protection can simultaneously boost satisfaction and loyalty.
EVIDENCE
He connected churn reduction (to 1.6 %) with delivering better experience alongside security improvements [91-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk links the churn reduction to a better customer experience together with enhanced security [S1].
MAJOR DISCUSSION POINT
Security and CX synergy
Ratan Kumar Kesh
5 arguments · 181 words per minute · 1453 words · 480 seconds
Argument 1
Senior citizens and account‑mule fraud across India
EXPLANATION
Ratan described how senior citizens are heavily targeted by scams and how fraudsters exploit bank accounts as mules, moving stolen funds across multiple institutions, creating a systemic problem in India.
EVIDENCE
He highlighted that senior citizens and even professors are defrauded, and that account-mule fraud is facilitated by easy account onboarding via India-Stack, leading to large-scale siphoning of funds [191-199].
MAJOR DISCUSSION POINT
Vulnerable groups and mule fraud
AGREED WITH
Wish Gurmukh Dev, Vikram Sinha, Anshuman Kar, Neha Gutma Mahatme
Argument 2
Bank rule‑engine flags out‑of‑routine transactions using AI
EXPLANATION
Ratan explained that banks employ AI‑enhanced rule engines to detect transactions that deviate from a customer’s normal pattern, such as unusual withdrawal amounts or new payment types, enabling early fraud detection.
EVIDENCE
He detailed how AI-driven rule engines identify non-routine withdrawals or rent payments and can either block or flag them for further due diligence [198-208].
MAJOR DISCUSSION POINT
AI‑based transaction monitoring
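The out-of-routine check Ratan describes can be sketched as a minimal rule engine that flags transactions deviating from a customer’s history; the fields, thresholds and sample amounts below are invented for illustration and are not any actual bank’s rules.

```python
from statistics import mean, stdev

def flag_out_of_routine(history, txn, z_threshold=3.0):
    """Flag a transaction if its amount is far outside the customer's
    usual range, or if it uses a payment type never seen before.
    `history` is a list of {"amount": float, "type": str} dicts."""
    amounts = [t["amount"] for t in history]
    known_types = {t["type"] for t in history}
    reasons = []
    if len(amounts) >= 2 and stdev(amounts) > 0:
        z = (txn["amount"] - mean(amounts)) / stdev(amounts)
        if abs(z) > z_threshold:
            reasons.append(f"amount z-score {z:.1f}")
    if txn["type"] not in known_types:
        reasons.append(f"new payment type '{txn['type']}'")
    return reasons  # empty list -> transaction looks routine

history = [{"amount": a, "type": "upi"} for a in (500, 650, 700, 550, 600)]
print(flag_out_of_routine(history, {"amount": 620, "type": "upi"}))   # []
print(flag_out_of_routine(history, {"amount": 90000, "type": "wire"}))
# two reasons: extreme amount and an unseen payment type
```

A flagged transaction would then be blocked or routed to the enhanced due-diligence step Ratan mentions, rather than silently declined.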
Argument 3
Rule‑engine improvements reduce fraud incidents
EXPLANATION
Ratan noted that the evolution of sophisticated rule‑engine algorithms, powered by AI, has markedly improved the ability of banks to prevent fraudulent transactions, thereby lowering incident rates.
EVIDENCE
He mentioned that AI helps build better algorithms for detecting out-of-routine activity, and that these tools are working effectively to reduce fraud [211-213].
MAJOR DISCUSSION POINT
Effectiveness of AI rule‑engine
Argument 4
Law‑enforcement gaps make tracking scammers difficult
EXPLANATION
Ratan recounted personal experiences where police investigations failed to identify fraudsters despite clear evidence, underscoring systemic challenges in law enforcement coordination and accountability.
EVIDENCE
He narrated a story about a stolen purse, police involvement, and the inability to trace the fraudsters, illustrating gaps in enforcement [315-322].
MAJOR DISCUSSION POINT
Enforcement challenges
AGREED WITH
Wish Gurmukh Dev, Anshuman Kar, Bipin Preet Singh, Audience
DISAGREED WITH
Anshuman Kar, Bipin Preet Singh
Argument 5
Banks’ out‑of‑routine alerts risk false positives affecting CX
EXPLANATION
Ratan warned that while AI‑driven out‑of‑routine alerts are valuable, they can generate false positives that inconvenience legitimate customers, highlighting the need to balance security with user experience.
EVIDENCE
He described how out-of-routine transaction detection can sometimes prevent legitimate activity, requiring enhanced due-diligence and potentially causing friction for customers [198-208].
MAJOR DISCUSSION POINT
Potential CX friction from AI alerts
DISAGREED WITH
Vikram Sinha
Sanjay Kapoor
1 argument · 137 words per minute · 757 words · 329 seconds
Argument 1
Global $1 trn scam losses and $14 trn digital payments forecast
EXPLANATION
Sanjay highlighted the massive scale of digital payments projected to reach $14 trillion by 2027, while noting that worldwide scam losses already exceed $1 trillion, framing fraud as a systemic economic risk.
EVIDENCE
He cited forecasts of $14 trillion in digital payments by 2027 [26] and global scam losses exceeding $1 trillion [31].
MAJOR DISCUSSION POINT
Macro‑level fraud and payment growth
Anshuman Kar
3 arguments · 152 words per minute · 1866 words · 733 seconds
Argument 1
$500 m estimated loss prevented in first six months
EXPLANATION
Anshuman reported that the Wisely.ai solution, within six months of launch, prevented approximately $500 million in potential fraud losses, demonstrating tangible early impact.
EVIDENCE
He stated that within six months of launch, almost $500 million in estimated losses were protected [163-164].
MAJOR DISCUSSION POINT
Early financial impact of AI solution
AGREED WITH
Vikram Sinha, Sanjay Kapoor
Argument 2
Call for coordinated intelligence across telcos, banks, fintech
EXPLANATION
Anshuman urged stakeholders from telecommunications, banking, and fintech sectors to collaborate and share intelligence in real time, arguing that fragmented defenses leave gaps that fraudsters exploit.
EVIDENCE
He emphasized the need for coordinated intelligence across telcos, banks, and fintech to thwart scams, noting the fragmented nature of current defenses [170-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
WEF Business Engagement Session highlights cross-industry collaboration for safety, and WS #148 stresses ecosystem-wide cooperation as essential [S19][S20].
MAJOR DISCUSSION POINT
Ecosystem‑wide collaboration
AGREED WITH
Wish Gurmukh Dev, Ratan Kumar Kesh, Bipin Preet Singh, Audience
DISAGREED WITH
Ratan Kumar Kesh, Bipin Preet Singh
Argument 3
Real‑time coordinated intelligence as next frontier
EXPLANATION
Anshuman concluded that the future of fraud defence lies in real‑time, cross‑industry intelligence sharing rather than isolated AI models, positioning this as the next strategic direction.
EVIDENCE
He summarized that the next frontier is coordinated, real-time intelligence across the ecosystem [349-353].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same cross-industry collaboration themes in WEF Business Engagement Session and WS #148 point to real-time, ecosystem-wide intelligence sharing as the next strategic direction [S19][S20].
MAJOR DISCUSSION POINT
Future direction for anti‑fraud intelligence
AGREED WITH
Audience, Bipin Preet Singh, Ratan Kumar Kesh, Vikram Sinha
Neha Gutma Mahatme
4 arguments · 253 words per minute · 502 words · 118 seconds
Argument 1
AI detects anomalies but not malicious intent; need behavioral analysis
EXPLANATION
Neha argued that current AI systems are good at spotting statistical anomalies but cannot infer the underlying malicious intent, emphasizing the need for deeper behavioral analysis to stop scams effectively.
EVIDENCE
She explained that AI detects anomalies but not the mal-intent or behavior, and that solving the behavior aspect is essential for effective fraud prevention [236-244].
MAJOR DISCUSSION POINT
Limitations of anomaly‑based AI
DISAGREED WITH
Vikram Sinha
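Neha’s distinction between anomaly and intent can be made concrete with a toy score: a purely statistical detector flags whatever is unusual, regardless of whether it is fraudulent. All numbers below are invented for illustration.

```python
from statistics import mean, pstdev

def anomaly_score(history, amount):
    """Distance from the customer's mean spend, in standard deviations.
    The score knows nothing about *why* a transaction happened."""
    return abs(amount - mean(history)) / pstdev(history)

history = [500, 650, 700, 550, 600]   # routine spending pattern

# A legitimate one-off purchase looks highly anomalous...
legit_big_purchase = anomaly_score(history, 45000)
# ...while a scammer who coaches the victim into sending
# "normal-looking" amounts stays statistically invisible.
coached_scam_transfer = anomaly_score(history, 640)

print(legit_big_purchase > 3)     # True  -> flagged, friction for the user
print(coached_scam_transfer > 3)  # False -> missed, despite mal-intent
```

This is exactly the gap Neha points to: closing it requires behavioral signals about how a transaction came about, not just its statistics.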
Argument 2
Offensive AI evolves faster than defensive models
EXPLANATION
Neha highlighted that scammers are increasingly using AI tools (e.g., deepfakes, synthetic identities) which evolve more rapidly than defensive AI models, creating an arms race in fraud detection.
EVIDENCE
She noted that offensive AI works unconstrained while defensive AI faces privacy, regulatory, and experience constraints, making it harder to keep pace [240-242].
MAJOR DISCUSSION POINT
AI arms race
AGREED WITH
Sanjay Kapoor, Vikram Sinha, Ratan Kumar Kesh
Argument 3
Limited external data hampers detection of social‑engineering cues
EXPLANATION
Neha pointed out that while Amazon has rich internal data, it lacks visibility into external social‑engineering patterns that precede transactions, limiting the effectiveness of fraud detection models.
EVIDENCE
She mentioned that Amazon misses data on how social-engineering patterns are created outside the platform, which hampers detection [237-239].
MAJOR DISCUSSION POINT
Data gaps for social engineering
DISAGREED WITH
Bipin Preet Singh, Audience
Argument 4
Privacy, regulatory and customer‑experience constraints restrict defensive AI
EXPLANATION
Neha explained that defensive AI must operate within strict privacy, regulatory, and user‑experience boundaries, which can limit its ability to act as aggressively as offensive AI used by fraudsters.
EVIDENCE
She described constraints on defensive AI, including privacy, regulation, and customer-experience limits, contrasting them with unconstrained offensive AI [242-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Transforming Health Systems with AI notes privacy constraints on model training [S21]; AI for Good discusses regulatory limits on defensive AI [S22]; Enhancing Digital Resilience mentions the need to balance security with user experience [S24].
MAJOR DISCUSSION POINT
Regulatory and UX constraints on AI
Audience
2 arguments · 159 words per minute · 188 words · 70 seconds
Argument 1
Recent Supreme Court finding of ₹56,000 crore scam losses
EXPLANATION
An audience member referenced a recent Supreme Court judgment that quantified scam‑related losses at roughly ₹56,000 crore, underscoring the massive scale of fraud in India.
EVIDENCE
The audience cited the Supreme Court judgment mentioning ₹54,000-56,000 crore lost to scams and described it as a dacoity-like magnitude [183-190].
MAJOR DISCUSSION POINT
Judicial acknowledgment of fraud scale
Argument 2
Digital Payments Intelligence Platform already launched, but integration still needed
EXPLANATION
The audience noted that while India has launched a Digital Payments Intelligence Platform to aggregate fraud data, full integration across banks and other stakeholders remains incomplete.
EVIDENCE
They mentioned the existing Digital Payments Intelligence Platform and questioned whether integration is sufficient, highlighting ongoing gaps [332-340].
MAJOR DISCUSSION POINT
Partial data‑sharing implementation
AGREED WITH
Wish Gurmukh Dev, Anshuman Kar, Ratan Kumar Kesh, Bipin Preet Singh
DISAGREED WITH
Neha Gutma Mahatme, Bipin Preet Singh
Bipin Preet Singh
3 arguments · 158 words per minute · 949 words · 358 seconds
Argument 1
RBI’s Digital Payments Intelligence Authority for ecosystem data sharing
EXPLANATION
Bipin referenced the RBI’s initiative to create a Digital Payments Intelligence Authority, which would facilitate data sharing across the payments ecosystem to improve fraud detection at a national level.
EVIDENCE
He mentioned the RBI’s Digital Payments Intelligence Authority as a crucial step for ecosystem-wide data sharing [279-283].
MAJOR DISCUSSION POINT
Regulatory data‑sharing mechanism
AGREED WITH
Anshuman Kar, Audience, Ratan Kumar Kesh, Vikram Sinha
DISAGREED WITH
Anshuman Kar, Ratan Kumar Kesh
Argument 2
Generic AI models underperform; need custom data sets
EXPLANATION
Bipin argued that off‑the‑shelf AI models trained on industry‑wide data often perform poorly for specific fintech use‑cases, necessitating custom models built on proprietary data.
EVIDENCE
He explained that generic AI models have poor performance and that MobiKwik builds its own models trained on its own data sets for better results [295-299].
MAJOR DISCUSSION POINT
Need for domain‑specific AI models
Argument 3
AI must avoid creating friction for legitimate users
MAJOR DISCUSSION POINT
Balancing security with user experience
Agreements
Agreement Points
Wisely.ai and related AI anti‑fraud solutions are already delivering real‑time protection and measurable financial impact across multiple markets.
Speakers: Wish Gurmukh Dev, Vikram Sinha, Anshuman Kar
Wisely.ai platform delivering real‑time protection in multiple markets
Indosat‑Tanla AI model reduces churn and lifts ARPU
$500 m estimated loss prevented in first six months
Wish highlighted that Wisely.ai is live and protecting users at Indosat in Indonesia, at BSNL in India, and with leading banks [9]; Vikram reported that the AI model boosted ARPU by 9 % versus the 3 % industry average and cut churn to 1.6 % [87-92]; Anshuman noted the solution prevented about $500 m of losses within six months of launch [163-164].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry evidence shows AI-driven fraud defenses generate measurable ROI, as demonstrated by large-scale deployments such as Visa’s AI-powered fraud command centre protecting trillions of dollars in transactions [S58] and broader findings that AI excels at rapid pattern analysis for fraud detection [S47].
Cross‑industry and ecosystem collaboration is essential to combat digital fraud and scams.
Speakers: Wish Gurmukh Dev, Anshuman Kar, Ratan Kumar Kesh, Bipin Preet Singh, Audience
Cross‑industry collaboration emphasized as essential
Call for coordinated intelligence across telcos, banks, fintech
Law‑enforcement gaps make tracking scammers difficult
RBI’s Digital Payments Intelligence Authority for ecosystem data sharing
Digital Payments Intelligence Platform already launched, but integration still needed
Wish stressed partnership with customers, regulators and telcos as a core principle [6]; Anshuman called for coordinated, real-time intelligence across telcos, banks and fintech [170-176][349-353]; Ratan highlighted ecosystem challenges and the need for cooperation among banks, telcos and law enforcement [191-199][311-315]; Bipin referenced the RBI’s Digital Payments Intelligence Authority to enable ecosystem-wide data sharing [279-283]; the audience pointed out the existing Digital Payments Intelligence Platform but noted integration gaps [332-340].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy discussions stress the need for cross-sector collaboration, highlighting four pillars that include ecosystem data sharing via APIs and joint risk assessment [S56], and broader calls to break down silos across industry, government and civil society [S65][S66][S68].
AI is a key tool against fraud but faces an arms‑race with offensive AI used by scammers.
Speakers: Sanjay Kapoor, Vikram Sinha, Neha Gutma Mahatme, Ratan Kumar Kesh
Global $1 trn scam losses and $14 trn digital payments forecast
Scammers using AI, voice cloning, automated phishing campaigns
Offensive AI evolves faster than defensive models
Sanjay highlighted the scale of AI-powered scams and the need for leadership [31][60-64]; Vikram noted that scammers are using AI and voice cloning, prompting the AI solution [79-80]; Neha explained that offensive AI evolves faster than defensive models and is unconstrained by privacy or regulatory limits [240-242]; Ratan also mentioned scammers using AI and voice cloning [79-80].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent alerts from law-enforcement and tech firms describe an escalating AI-driven fraud arms race, with the FBI warning of deep-fake scams [S60] and Google taking legal action against AI-based scammers [S59]; this underscores the dual-use nature of AI in security contexts.
Integrated, real‑time data sharing is required to overcome fragmented defenses against fraud.
Speakers: Anshuman Kar, Audience, Bipin Preet Singh, Ratan Kumar Kesh, Vikram Sinha
Real‑time coordinated intelligence as next frontier
Integrated approach needed for fraud protection
RBI’s Digital Payments Intelligence Authority for ecosystem data sharing
Ecosystem cooperation needed across banks, telcos, fintech
Strategic partnership rather than vendor relationship
Anshuman identified real-time coordinated intelligence as the next frontier for fraud defence [349-353]; the audience called for an integrated approach and questioned the sufficiency of existing platforms [332-340]; Bipin emphasized the RBI’s Digital Payments Intelligence Authority to enable ecosystem data sharing [279-283]; Ratan stressed the need for ecosystem-wide cooperation to tackle fraud [191-199]; Vikram described the partnership with Tanla as strategic rather than a simple vendor relationship, highlighting joint problem-solving [80-84].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses highlight fragmented data-governance regimes and call for real-time, cross-border data sharing to close gaps in fraud detection, citing the need for coherent data-free-flow frameworks and broader ecosystem integration beyond existing platforms [S45][S50][S51][S56].
Protecting vulnerable groups such as senior citizens, women and low‑income users is a shared priority.
Speakers: Wish Gurmukh Dev, Vikram Sinha, Ratan Kumar Kesh, Anshuman Kar, Neha Gutma Mahatme
Wisely.ai platform delivering real‑time protection in multiple markets
Middle‑income, lower‑income women, elderly women affected
Senior citizens and account‑mule fraud across India
Parents scared of fraud, especially senior citizens
AI must address vulnerable populations in the behavioral journey
Wish highlighted that the victims were middle-income, lower-income women and elderly [48-49]; Vikram echoed concern for women and elderly in Indonesia [48-49]; Ratan described senior citizens being heavily targeted and defrauded [191-194]; Anshuman noted that his parents (senior citizens) are scared of fraud and avoid ATM cards [233-235]; Neha stressed that vulnerable populations are especially exposed to scams and need protection [236-240].
POLICY CONTEXT (KNOWLEDGE BASE)
International policy forums emphasize inclusive digital security, urging specific safeguards for seniors, women and low-income populations in cyber-security and consumer protection strategies [S61][S62][S63][S64].
AI‑driven anti‑fraud measures deliver measurable business ROI (ARPU growth, churn reduction, loss prevention).
Speakers: Vikram Sinha, Sanjay Kapoor, Anshuman Kar
Indosat‑Tanla AI model reduces churn and lifts ARPU
ROI visible within 6‑8 months of deployment
$500 m estimated loss prevented in first six months
Vikram reported ARPU growth of 9 % versus 3 % industry and churn dropping to 1.6 % after AI rollout [87-92]; Sanjay asked about ROI and highlighted the economic impact of scams [31]; Anshuman quantified the financial benefit of the solution as $500 m prevented losses within six months [163-164].
POLICY CONTEXT (KNOWLEDGE BASE)
Empirical studies and industry reports confirm that AI-based fraud mitigation improves key business metrics, with documented ARPU growth and churn reduction in telecom and banking sectors, reinforced by Visa’s reported loss-prevention outcomes [S58] and broader AI fraud detection benefits [S47].
Similar Viewpoints
Both emphasize that AI investment must show rapid financial returns given the massive scale of digital payments and fraud losses, with Vikram citing concrete ROI metrics and Sanjay framing the broader economic risk [31][87-92].
Speakers: Vikram Sinha, Sanjay Kapoor
ROI visible within 6‑8 months of deployment
Global $1 trn scam losses and $14 trn digital payments forecast
Both point out limitations of current AI‑based detection: Ratan notes that rule‑engine alerts can miss intent and cause friction, while Neha stresses that defensive AI cannot keep pace with offensive AI and lacks behavioral insight [198-208][240-242].
Speakers: Ratan Kumar Kesh, Neha Gutma Mahatme
AI detects anomalies but not malicious intent; need behavioral analysis
Offensive AI evolves faster than defensive models
All three stress the necessity of a unified data‑sharing platform to enable real‑time fraud detection, highlighting existing initiatives but also gaps in integration [349-353][279-283][332-340].
Speakers: Anshuman Kar, Bipin Preet Singh, Audience
Call for coordinated intelligence across telcos, banks, fintech
RBI’s Digital Payments Intelligence Authority for ecosystem data sharing
Digital Payments Intelligence Platform already launched, but integration still needed
Both underline that fragmented defenses are insufficient and that multi‑stakeholder collaboration is critical to combat scams effectively [6][153][170-176].
Speakers: Wish Gurmukh Dev, Anshuman Kar
Cross‑industry collaboration emphasized as essential
Call for coordinated intelligence across telcos, banks, fintech
Unexpected Consensus
Agreement on protecting senior citizens and other vulnerable groups across telecom and banking sectors.
Speakers: Vikram Sinha, Ratan Kumar Kesh, Anshuman Kar, Neha Gutma Mahatme
Middle‑income, lower‑income women, elderly women affected
Senior citizens and account‑mule fraud across India
Parents scared of fraud, especially senior citizens
AI must address vulnerable populations in the behavioral journey
While telecom leaders typically focus on network and service issues, both Vikram (telco) and Ratan (bank) explicitly highlighted the impact of scams on senior citizens and low-income users, a concern more commonly raised by consumer-focused participants, indicating a cross-sector consensus on protecting vulnerable demographics [48-49][191-194][233-235][236-240].
POLICY CONTEXT (KNOWLEDGE BASE)
Regulatory dialogues have repeatedly called for sector-wide safeguards for seniors and other at-risk groups, linking financial services and telecom under shared consumer-protection mandates [S61][S62][S63][S64].
Recognition that existing regulatory data‑sharing initiatives (Digital Payments Intelligence Platform/Authority) are insufficient without broader ecosystem integration.
Speakers: Bipin Preet Singh, Audience, Anshuman Kar
RBI’s Digital Payments Intelligence Authority for ecosystem data sharing
Digital Payments Intelligence Platform already launched, but integration still needed
Call for coordinated intelligence across telcos, banks, fintech
Although the RBI initiative was presented as a solution, all three participants concurred that the platform alone does not achieve full integration, revealing an unexpected shared view that further coordinated effort is required [279-283][332-340][349-353].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of current data-sharing frameworks note their limited scope and stress the need for wider ecosystem participation, aligning with critiques of fragmented governance and calls for holistic data-governance models [S45][S50][S65][S66].
Overall Assessment

There is strong consensus that AI‑driven anti‑fraud solutions like Wisely.ai are already delivering real‑time protection and measurable business benefits, but their effectiveness depends on cross‑industry collaboration, integrated data sharing, and attention to vulnerable users. Participants across telecom, banking, fintech and regulatory domains align on the need for coordinated intelligence and rapid ROI, while also acknowledging challenges such as the AI arms race and privacy constraints.

Consensus is high on the importance of AI, collaboration, data sharing and protecting vulnerable groups, and moderate on the sufficiency of current regulatory mechanisms. This alignment suggests a favorable environment for joint initiatives, policy support, and investment in shared AI infrastructure to strengthen digital trust.

Differences
Different Viewpoints
Sufficiency of data sharing for fraud detection
Speakers: Neha Gutma Mahatme, Bipin Preet Singh, Audience
Limited external data hampers detection of social‑engineering cues
RBI’s Digital Payments Intelligence Authority for ecosystem data sharing
Digital Payments Intelligence Platform already launched, but integration still needed
Neha argues that Amazon lacks visibility into external social-engineering data, limiting fraud detection [236-244]. Bipin points to the RBI’s Digital Payments Intelligence Authority as a mechanism to enable ecosystem-wide data sharing [279-283]. The audience notes that a Digital Payments Intelligence Platform exists but questions whether its integration is sufficient [332-340]. The three positions reveal a disagreement on whether current data-sharing initiatives are adequate for effective fraud prevention.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates highlight that current data-sharing mechanisms fall short of providing the granularity and timeliness needed for effective fraud detection, echoing concerns raised about regulatory silos and the need for expanded, real-time data exchange [S45][S50][S56][S66].
AI’s ability to address malicious intent versus only detecting anomalies
Speakers: Neha Gutma Mahatme, Vikram Sinha
AI detects anomalies but not malicious intent; need behavioral analysis
Indosat‑Tanla AI model reduces churn and lifts ARPU, indicating effective protection
Neha states that AI can spot statistical anomalies but cannot infer malicious intent, calling for deeper behavioral analysis [236-244]. Vikram, by contrast, presents the AI model as delivering tangible business benefits and protecting customers, implying that AI alone can solve the problem [66-70][87-92]. This creates a tension between viewing AI as a partial tool versus a comprehensive solution.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions differentiate between AI’s anomaly-detection capabilities and its potential to proactively counter malicious intent, with ethical frameworks urging deeper integration of intent-aware models in cybersecurity [S54][S47].
Impact of AI on customer experience – friction versus benefit
Speakers: Ratan Kumar Kesh, Vikram Sinha
Banks’ out‑of‑routine alerts risk false positives affecting CX
Churn fell from 3.6 % to 1.6 %, indicating improved experience alongside security
Ratan warns that AI-driven out-of-routine transaction alerts can generate false positives, creating friction for legitimate users [198-208]. Vikram counters by highlighting a sharp churn reduction after AI deployment, interpreting it as evidence that security improvements have enhanced customer experience [91-92]. The two speakers disagree on whether AI implementation currently harms or helps the user journey.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholder analyses stress the balance between security and user friction, noting that AI-driven solutions can both reduce friction for legitimate users and inadvertently introduce new barriers if not carefully designed [S46][S58].
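The "out‑of‑routine" alerting debated here can be pictured as a small rule engine that combines a statistical anomaly check with contextual rules. The sketch below is purely illustrative: the features, thresholds and function names are invented for the example and do not describe any panelist's production system.

```python
from statistics import mean, stdev

def out_of_routine(amount, history, hour, usual_hours, z_threshold=3.0):
    """Flag a transaction as out-of-routine if its amount deviates strongly
    from the customer's spending history or it occurs at an unusual hour.
    (Illustrative rules only; real systems use many more signals.)"""
    if len(history) < 2:
        return True  # too little history to establish a routine: flag it
    mu, sigma = mean(history), stdev(history)
    amount_anomaly = sigma > 0 and abs(amount - mu) / sigma > z_threshold
    time_anomaly = hour not in usual_hours
    return amount_anomaly or time_anomaly

# A customer who usually spends around 500 during daytime hours:
history = [480, 520, 495, 510, 505, 490]
print(out_of_routine(50000, history, hour=3, usual_hours=range(8, 22)))  # True
print(out_of_routine(505, history, hour=12, usual_hours=range(8, 22)))   # False
```

Note that a genuinely legitimate but unusual purchase trips exactly the same rules, which is the false‑positive friction Ratan warns about, and why detecting statistical anomalies is weaker than inferring intent, as Neha argues.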
Who should own responsibility for national‑scale fraud protection
Speakers: Anshuman Kar, Ratan Kumar Kesh, Bipin Preet Singh
Call for coordinated intelligence across telcos, banks, fintech
Law‑enforcement gaps make tracking scammers difficult
RBI’s Digital Payments Intelligence Authority for ecosystem data sharing
Anshuman asks which entity ultimately owns the responsibility for protecting citizens at scale [310-315]. Ratan highlights fragmented law-enforcement capabilities and the difficulty of tracing fraudsters despite multiple stakeholders [316-322]. Bipin points to a regulatory solution via the RBI’s Digital Payments Intelligence Authority [279-283]. The speakers diverge on whether the lead should be regulatory, industry-driven, or a joint effort.
POLICY CONTEXT (KNOWLEDGE BASE)
International panels propose shared stewardship models, assigning roles to regulators, industry consortia and civil society to collectively manage national-level fraud risks [S56][S57][S65][S68].
Unexpected Differences
Cross‑industry collaboration versus siloed enforcement
Speakers: Wish Gurmukh Dev, Ratan Kumar Kesh
Cross‑industry collaboration emphasized as essential
Law‑enforcement gaps make tracking scammers difficult
Wish repeatedly stresses that collaboration across customers, regulators, telcos and the broader ecosystem is vital for combating fraud [6][153]. Ratan, however, recounts concrete failures of police and regulatory coordination, highlighting fragmented enforcement that undermines collaborative goals [315-322]. The contrast between the aspirational call for partnership and the on-the-ground reality of siloed enforcement was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
A recurring theme in policy forums is the need to move from siloed enforcement to coordinated, cross-industry collaboration, as advocated in multiple IGF and WEF sessions calling for integrated governance structures [S65][S66][S67][S43].
Overall Assessment

The discussion reveals several substantive disagreements: the adequacy of current data‑sharing mechanisms, the limits of AI versus the need for behavioral insight, the trade‑off between security and customer friction, and the question of which stakeholder should lead national fraud protection. While all participants share the overarching goal of reducing fraud and building digital trust, they diverge on the most effective pathways to achieve it.

The level of disagreement is moderate to high: these are not outright conflicts but reflect differing priorities, assumptions about technology efficacy, and views on institutional responsibility. The divergences suggest that achieving coordinated, effective anti‑fraud solutions will require clear policy frameworks, stronger data‑governance mechanisms, and balanced designs that address both security and user experience.

Partial Agreements
Both agree that fraud must be reduced and customer trust enhanced, but Vikram emphasizes a bilateral strategic partnership with Tanla as the solution, whereas Anshuman advocates for a broader, ecosystem‑wide coordinated intelligence framework [80-84][170-176].
Speakers: Vikram Sinha, Anshuman Kar
Indosat‑Tanla AI model reduces churn and lifts ARPU
Call for coordinated intelligence across telcos, banks, fintech
Both recognize AI’s role in spotting irregular activity, yet Ratan focuses on rule‑engine implementation within banks, while Neha stresses that anomaly detection alone is insufficient without understanding intent and social‑engineering behavior [198-208][236-244].
Speakers: Ratan Kumar Kesh, Neha Gutma Mahatme
Bank rule‑engine flags out‑of‑routine transactions using AI
AI detects anomalies but not malicious intent; need behavioral analysis
Both see a national data‑sharing platform as essential, but Bipin views the RBI authority as the forthcoming solution, while the audience questions whether the existing platform is already sufficient, indicating differing views on the stage of implementation [279-283][332-340].
Speakers: Bipin Preet Singh, Audience
RBI’s Digital Payments Intelligence Authority for ecosystem data sharing
Digital Payments Intelligence Platform already launched, but integration still needed
Takeaways
Key takeaways
Digital fraud is massive and growing: $5 bn was lost in Indonesia in 2024, 65 % of Indonesians face weekly spam or scams, global scam losses exceed $1 trn, and India’s Supreme Court highlighted ₹56,000 crore lost to scams.
AI‑driven platforms (Wisely.ai, the Indosat‑Tanla AI model) can deliver real‑time protection, reduce churn, and boost ARPU, demonstrating measurable business impact.
ROI from AI anti‑fraud solutions becomes visible within 6‑8 months, with examples such as 9 % ARPU growth versus a 3 % industry average and $500 m in losses prevented within six months.
Effective fraud defence requires ecosystem collaboration and data sharing across telcos, banks, fintechs, regulators, and law enforcement; strategic partnerships (e.g., Indosat‑Tanla) are preferred over simple vendor relationships.
Current AI models face limitations: offensive AI evolves faster than defensive models, external data on social engineering is scarce, privacy and regulatory constraints limit defensive AI, and generic models underperform without custom data.
Balancing security with customer experience is critical; reduced churn indicates success, but false positives and friction remain concerns.
Future technological directions include federated learning, edge AI, and large language models to protect rural users while keeping data local.
Resolutions and action items
Indosat commits to continue its partnership with Tanla, co‑developing and training AI models on its GPU cluster for spam and scam detection.
Tanla to support Indosat with a full‑stack AI factory and real‑time threat intelligence (2 bn spam instances, 2.3 m scammers flagged).
Panelists agree to pursue greater data sharing across the payments ecosystem; RBI’s Digital Payments Intelligence Authority to be leveraged for national‑scale intelligence.
Explore implementation of federated learning and edge data centres to extend protection to rural and edge users (as outlined by BSNL’s A. Robert J. Ravi).
Banks and fintechs to continue enhancing rule‑engine and out‑of‑routine transaction monitoring, and to consider integrating external threat feeds from telcos.
Unresolved issues
How to create a truly integrated, nationwide fraud‑prevention model that combines data from telcos, banks, fintechs, and regulators in real time.
Mechanisms for overcoming data‑visibility gaps on social‑engineering cues outside proprietary platforms.
Effective coordination with law enforcement to identify and prosecute scammers; current enforcement gaps remain.
Balancing defensive‑AI constraints (privacy, regulation, customer experience) with the rapid evolution of offensive AI.
A specific governance framework for sharing sensitive data while respecting privacy and regulatory limits.
Suggested compromises
Adopt a strategic‑partnership model (e.g., Indosat‑Tanla) rather than a pure vendor relationship to share risk, expertise, and data.
Implement AI solutions that prioritize a low‑friction user experience to reduce churn while still providing security (e.g., calibrated alerts, selective friction).
Use federated learning to keep user data on‑device while still benefiting from collective model improvements, addressing privacy concerns.
Combine centralized threat intelligence (the RBI platform) with decentralized, industry‑specific models to balance comprehensive coverage and domain‑specific accuracy.
Thought Provoking Comments
In early 2024, the Global Anti‑Scam Association reported that $5 billion was lost by Indonesians, and 65 % of Indonesians face spam or scam on a weekly basis.
This stark data quantifies the human and economic impact of scams, turning an abstract risk into a concrete, board‑level business imperative.
It shifted the conversation from general concerns about fraud to urgent action, prompting Sanjay to ask how the issue was elevated to the board and leading Vikram to describe the strategic partnership with Tanla.
Speaker: Vikram Sinha
We didn’t want a vendor; we wanted a partner who could work with us and use AI to solve this real problem.
Highlights a strategic approach to technology adoption—prioritizing deep collaboration over transactional vendor relationships.
Guided the discussion toward the importance of ecosystem partnerships, influencing later panel members to stress data sharing and coordinated intelligence.
Speaker: Vikram Sinha
Our quarterly results show ARPU grew 9 % versus a 3 % industry average, and churn for serious‑base customers fell from 3.6 % to 1.6 % after deploying the AI solution.
Provides concrete ROI evidence linking AI‑driven fraud protection to financial performance, addressing the board’s typical focus on P&L impact.
Validated the business case for AI investment, prompting Sanjay and the audience to explore scalability and ROI, and set a benchmark for other panelists.
Speaker: Vikram Sinha
Scams are a behavioral journey that starts long before a payment; we lack visibility into that data, and human psychology evolves faster than our models—offensive AI is unconstrained while defensive AI faces privacy and regulatory limits.
Identifies fundamental limitations of current AI defenses and introduces the concept of an arms race between offensive and defensive AI.
Deepened the technical discussion, leading panelists to acknowledge the need for broader data sharing and more adaptive models, and set the stage for Bipin’s call for a national intelligence platform.
Speaker: Neha Gutma Mahatme
99 % of the scams our customers report are not money stolen from us but from other banks; without ecosystem‑wide data sharing, we can’t detect the patterns. The RBI’s Digital Payments Intelligence Authority could be the key.
Emphasizes that fraud is a systemic, cross‑institutional problem and that regulatory‑driven data collaboration is essential.
Shifted the conversation from individual company solutions to a policy and regulatory perspective, reinforcing Anshuman’s earlier question about integrated approaches.
Speaker: Bipin Preet Singh
Fraudsters rent bank accounts for a fee, turning ordinary customers into ‘mules’; this account‑rental model is a major, under‑addressed threat vector.
Introduces a novel fraud mechanism that goes beyond phishing, highlighting the need for new detection and prevention strategies.
Prompted the panel to consider broader ecosystem responsibilities and the importance of law‑enforcement coordination, echoed later by Ratan’s police anecdote.
Speaker: Ratan Kumar Kesh
Is the problem really getting better? Or is it getting worse? And why?
Serves as a pivotal framing question that moves the discussion from anecdotal evidence to a systematic analysis of trends and root causes.
Reoriented the panel’s focus toward evaluating the trajectory of fraud, leading each participant to contribute perspectives on technology, regulation, and consumer behavior.
Speaker: Anshuman Kar
We are building federated learning models so data stays with the user while we learn from it, enabling AI at the edge for rural customers without compromising privacy.
Introduces an advanced AI paradigm (federated learning) as a solution to data‑privacy concerns while extending protection to underserved areas.
Expanded the conversation beyond fraud detection to broader AI applications in telecom, highlighting future‑proofing strategies and influencing the closing synthesis about coordinated intelligence.
Speaker: A. Robert J. Ravi
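The federated‑learning idea Ravi describes, where raw data stays on the device and only model updates are shared and averaged, is the core of federated averaging (FedAvg). The following minimal sketch is illustrative only: it uses a toy linear model and synthetic device data, not the actual BSNL or edge‑data‑centre implementation.

```python
import random

def local_update(weights, data, lr=0.1):
    """One pass of gradient descent for a linear model y = w*x on the
    device's own data; the raw samples never leave the device."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x   # derivative of squared error w.r.t. w
        w -= lr * grad
    return w

def federated_round(global_w, devices):
    """Each device trains locally; the server averages the returned
    weights (FedAvg) without ever seeing the underlying records."""
    updates = [local_update(global_w, data) for data in devices]
    return sum(updates) / len(updates)

# Three devices hold private, noisy samples of the same relation y = 2x.
random.seed(0)
devices = [[(x, 2 * x + random.uniform(-0.05, 0.05)) for x in (1, 2, 3)]
           for _ in range(3)]
w = 0.0
for _ in range(20):
    w = federated_round(w, devices)
print(round(w, 1))  # converges to the shared slope: prints 2.0
```

The server only ever handles weights, never the (x, y) records themselves, which is the privacy property that makes the approach attractive for regulated payment and telecom data.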
Overall Assessment

The discussion was driven forward by a series of high‑impact statements that moved the dialogue from abstract concerns about digital fraud to concrete, data‑backed business outcomes and systemic solutions. Vikram’s loss statistics and ROI figures forced the board‑level urgency, while Neha’s articulation of AI’s limitations and Bipin’s call for ecosystem‑wide data sharing reframed the problem as a national, cross‑industry challenge. Ratan’s insight on account‑rental fraud and the moderator’s probing question created a turning point toward deeper analysis of underlying mechanisms. Finally, Ravi’s vision of federated learning pointed to innovative, privacy‑preserving pathways forward. Collectively, these comments reshaped the conversation, introduced new problem dimensions, aligned stakeholders on the need for coordinated intelligence, and set a forward‑looking agenda for AI‑driven trust in the digital economy.

Follow-up Questions
Is the existing government initiative (Digital Payments Intelligence Platform / RBI Mule Hunter) sufficient as an integrated model for fraud protection across the ecosystem?
The audience asked whether the current national‑level platform provides enough coverage, indicating a need to evaluate its effectiveness and possible gaps.
Speaker: Audience (question to panel)
Who ultimately owns responsibility for protecting citizens at national scale – can banks act alone, can RBI act alone, or is coordination with upstream/downstream signals (e.g., telecom) required?
Clarifying governance and accountability is essential for building a coherent, nation‑wide fraud‑prevention framework.
Speaker: Anshuman Kar (directed to Ratan Kumar Kesh)
Why are we not able to stop scams across the whole customer journey despite AI patterns in commerce and payments?
Understanding the gaps in end‑to‑end detection will help design more comprehensive anti‑fraud solutions.
Speaker: Anshuman Kar (to Neha Gutma Mahatme)
How should AI models be calibrated to balance fraud protection with customer‑experience friction?
Finding the optimal trade‑off between security and usability is critical for scaling AI‑driven fraud controls.
Speaker: Anshuman Kar (to Bipin Preet Singh)
How effective is the anti‑phishing tool Carix (and its DLT integration) in preventing scam SMS, and can it be scaled?
Assessing Carix’s performance and scalability will inform decisions on broader deployment.
Speaker: Ratan Kumar Kesh
What are the barriers and opportunities for real‑time data sharing between telecom operators and financial institutions to improve fraud detection?
Data silos limit detection capabilities; research is needed on technical, regulatory, and privacy challenges of cross‑industry data exchange.
Speaker: Bipin Preet Singh; also Neha Gutma Mahatme
How can defenders keep pace with offensive AI used by scammers, given constraints of privacy, regulation, and customer experience?
The arms race between offensive and defensive AI requires study of model adaptability, legal limits, and ethical considerations.
Speaker: Neha Gutma Mahatme
How can federated learning be implemented in rural edge data centers to protect customers while preserving data privacy?
Exploring federated learning could enable AI benefits without centralizing sensitive user data, especially in underserved regions.
Speaker: A. Robert J. Ravi
What is the long‑term impact of AI‑driven fraud prevention on key financial metrics (ARPU, churn, overall P&L) beyond the initial six‑to‑eight‑month horizon?
Understanding sustained ROI is vital for continued investment and board confidence.
Speaker: Sanjay Kapoor (to Vikram Sinha)
How can mule accounts (used to launder money across banks) be detected and mitigated more effectively?
Mule accounts represent a systemic risk; research into detection patterns and inter‑bank collaboration is needed.
Speaker: Ratan Kumar Kesh
What mechanisms are needed for international coordination of fraud detection, given that scammers operate across borders?
Cross‑border threats require harmonized standards, data sharing, and joint enforcement strategies.
Speaker: Anshuman Kar (implied)
How does model performance differ when trained on proprietary telco/fintech data versus industry‑wide datasets, and what are best practices for model sharing?
Evaluating the trade‑offs informs decisions on collaborative model development versus proprietary approaches.
Speaker: Bipin Preet Singh
What is the impact of customer education and awareness programs on reducing fraud incidence, especially among vulnerable populations?
Behavioral factors are highlighted as a root cause; studying education effectiveness can guide outreach strategies.
Speaker: Ratan Kumar Kesh
How can telecom signals (e.g., spam calls, WhatsApp messages) be integrated with financial fraud detection systems to create a unified defense?
Integrating communication‑channel data with payment‑channel analytics could close detection gaps and improve real‑time response.
Speaker: Anshuman Kar (to Ratan Kumar Kesh)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Transforming Agriculture: AI for Resilient and Inclusive Food Systems


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel convened by the Netherlands, Indonesia and the OECD examined how artificial intelligence can make food systems more transparent, responsible and inclusive, bringing together government, industry, academia and international organisations [1-3][4-5].


The Dutch ambassador highlighted AI’s rapid development in agriculture, noting its potential to boost productivity, reduce environmental impact and strengthen climate resilience, and described the Netherlands’ strong AI ecosystem and precision-farming successes such as up to 90 % water savings and disease-control models [13-24][25-29][30-34]. He also stressed the Netherlands’ commitment to support low- and middle-income countries through ICT-agri collaborations, tailor-made solutions for smallholders, and an inclusive AI agenda that aligns with the summit’s “people, planet and progress” motto [35-38][40-46][47-50].


The OECD representative pointed out that volatile shocks, from droughts to conflicts, make resilience a global priority, and cited evidence that AI-enabled precision spraying can cut pesticide use by 30 % and computer-vision weed detection can halve herbicide application without yield loss [54-62][63-66]. She warned that adoption remains uneven, with a digital divide evident between countries such as Australia (96 % digital tool use) and Chile (12 %), and identified barriers including high costs, limited skills, fragmented data governance and lack of trust [68-71][72-76]. To address these gaps, the OECD is developing an AI policy toolkit and a digital-governance framework that promote transparency, explainability and responsible data sharing for farmers and regulators [80-88][91-93].


Indonesia’s speaker described the archipelagic challenges of uneven ICT infrastructure, talent distribution and climate risks, and outlined AI use cases such as soil-nutrient prediction, optimal fertilizer and water dosing, intelligent farming, weather forecasting and logistics optimisation across its 17 000 islands [150-166][167-176][177-185][186-192]. He presented a national AI roadmap built on seven pillars (regulation, ethics, investment, data, innovation, talent development and use cases) and a “quad-helix” governance model that engages government, industry, academia, media and communities to ensure no stakeholder is left behind [196-203].


The industry expert warned that AI is often applied indiscriminately, urging a problem-driven approach that first secures high-quality data, clear objectives and market pathways, and suggested establishing sector-specific centres of excellence to tackle food-waste and cold-chain inefficiencies [213-224][228-247][250-257]. A researcher highlighted three persistent obstacles (data scarcity, farmer mistrust and limited scalability) and illustrated projects such as the World Cereal mapping initiative and low-tech chatbot advisory services that aim to embed AI in smallholder contexts [267-277][278-286][295-303]. He emphasized that building robust data infrastructure and actively involving farmers are essential for AI models to be effective at the grassroots level [304-308].


The moderator concluded that the discussion underscored AI’s vast potential for resilient, inclusive food systems, but that real impact depends on problem-focused development, trustworthy data practices and coordinated public-private partnerships [309-317].


Keypoints

Major discussion points


AI as a catalyst for higher productivity, sustainability and climate-resilient agriculture – The Dutch ambassador highlighted that digitalisation and AI can “significantly increase food productivity and reduce food losses” and cited concrete use-cases such as “water savings of up to 90 % through smart irrigation, optimal crop yields with minimal input, and predictive models for disease control” [13-21][29-30]. The OECD representative reinforced these benefits, noting that “AI-enabled precision spraying has reduced pesticide use by up to 30 % … and AI is revolutionising plant breeding, shortening cycles and delivering climate-adaptive varieties” [60-62].


The need for inclusive AI and bridging the digital divide – FAO’s Dejan warned that “inclusiveness and the digital divide was still strong… if a farmer or communities are outside of the digital ecosystem, they suddenly are outside of any ecosystem” [112-118]. He also gave a positive example of an Indian phone-based advisory service that “lowers the entry barrier to knowledge” [120-124]. The OECD added that “farmers and regulators need transparency … but fragmented data-governance frameworks introduce complexity” and that “structural barriers including high cost, limited digital skills, and lack of trust” hinder uptake [73-76][68-72].


Indonesia’s specific AI challenges and its national roadmap – Professor Sumari described the country’s “17 000 islands, 36 % land, 64 % water” and the resulting “telecommunication … infrastructure gaps and unequal distribution of AI talent” [148-166]. He outlined a “seven-pillar AI roadmap” that combines horizontal AI governance with sector-specific rules, stresses a “quad/hex helix” ecosystem of government, industry, academia, media and communities, and emphasizes transparency, explainability and sustainability [190-203].


Public-private collaboration and the role of sector-focused centres of excellence – Debjani Ghosh argued that “we throw AI at every problem… we need to know exactly what we are solving for” and that “industry must align on a clear problem statement and have a route to market” [206-214][224-247]. She proposed “a centre of excellence … to solve specific problems such as cold-chain logistics or climate-resilient crops” to avoid duplicated pilots and to scale impact [252-258].


Practical barriers to deployment and examples of low-tech-friendly solutions – Dr. Pratihast identified three core obstacles: “data scarcity, trust, and scalability” [278-286]. He illustrated ongoing work such as the “World Cereal Project” for global crop mapping and a “chat-bot in local languages for cocoa farmers” that combines computer-vision advisory with low-tech connectivity [295-300][301-307].
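A low-tech, local-language advisory chatbot of the kind described can be as simple as curated keyword lookup over agronomic advice, so it can run over SMS or a basic chat channel. The sketch below is illustrative only: the crops, symptoms and advice strings are invented for the example and are not taken from the World Cereal project or the actual cocoa chatbot.

```python
# Invented example advisory table; a real service would be curated by
# agronomists and localised into farmers' languages.
ADVISORY = {
    ("cocoa", "black pods"): "Possible black pod disease: remove infected pods and improve drainage.",
    ("cocoa", "holes in leaves"): "Likely insect damage: inspect for caterpillars before spraying.",
    ("maize", "yellow leaves"): "Yellowing often signals nitrogen deficiency: consider a soil test.",
}

def advise(crop, symptom):
    """Return the matching advisory for a (crop, symptom) query,
    or a fallback asking the farmer to rephrase."""
    key = (crop.strip().lower(), symptom.strip().lower())
    return ADVISORY.get(key, "No match found; please describe the symptom differently.")

print(advise("Cocoa", "black pods"))
# prints: Possible black pod disease: remove infected pods and improve drainage.
```

Because the lookup needs no model inference on the device, the same pattern works where connectivity and hardware are limited, which is the point of the low-tech designs Dr. Pratihast describes.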


Overall purpose / goal


The session was convened to bring together government, industry, academia and international organisations to examine how artificial intelligence can be harnessed to make food systems more transparent, responsible, resilient and inclusive, while identifying the concrete challenges (data sharing, governance, infrastructure and equitable access) that must be overcome to ensure AI benefits are broadly shared [1-4][52-55].


Tone of the discussion


The conversation began with a formal, optimistic tone, emphasizing partnership and the promise of AI [1][6-10]. As speakers progressed, the tone shifted to cautiously realistic, acknowledging significant gaps, digital exclusion and trust issues [112-118][73-76]. Throughout, the tone remained constructive and collaborative, with participants offering concrete examples, policy frameworks and calls for coordinated action rather than criticism [148-166][206-214][278-306].


Speakers

Sara Rendtorff Smith


– Expertise: International policy, AI governance, food systems


– Role/Title: Session moderator, representing the OECD


– Affiliation: OECD (moderator) [S13]


Harry Verweij


– Expertise: AI and digitalization in agriculture, food security


– Role/Title: (Representative of the Netherlands)


– Affiliation: Netherlands


Dejan Jakovljevic


– Expertise: Digital agriculture, data informatics, AI for food systems


– Role/Title: CIO and Director, Digitalization and Informatics Division


– Affiliation: Food and Agriculture Organization of the United Nations (FAO) [S7]


Arwin Datumaya Wahyudi Sumari


– Expertise: AI applications in agriculture, knowledge-based AI frameworks, AI policy


– Role/Title: Indonesian Air Force officer; Professor at the State Polytechnic of Malang; Co-inventor of the Knowledge Growing System


– Affiliation: State Polytechnic of Malang, Indonesia [S3]


Debjani Ghosh


– Expertise: Frontier technologies, AI architecture, policy for inclusive AI


– Role/Title: Distinguished Fellow; Chief Architect of NITI Frontier Tech Hub; Former role with NASSCOM


– Affiliation: NITI Aayog, Government of India [S1][S2]


Arun Pratihast


– Expertise: AI research for low-tech farming environments, data scarcity, trust and scalability of AI solutions


– Role/Title: Senior Researcher


– Affiliation: Wageningen University Environmental Research [S11]




Additional speakers:


His Excellency Ambassador Fawai – Ambassador-at-Large and Special Envoy for AI, Kingdom of the Netherlands (mentioned in opening remarks).


Madam Gorshan – Co-chair of the sixth working group on economic growth and social good (referenced by Harry Verweij).


Admiral Samari – Co-chair of the sixth working group on economic growth and social good (referenced by Harry Verweij).


Ms. Goss – Name appears in the transcript; role not specified.


Professor Ramesh Chand – Esteemed member of NITI Aayog, expert in agriculture (referenced by Debjani Ghosh).


Full session report: comprehensive analysis and detailed insights

Sara Rendtorff Smith opened the session, introducing a multi-stakeholder panel on AI for transparent, responsible and inclusive food systems [1-5]. She noted that the panel included representatives from government, industry, academia and international organisations, among them Prof Arwin Datumaya Wahyudi Sumari (who was introduced by the moderator as “Professor Arvind Sumari”), Dejan Jakovljevic – Director of the Digital FAO and Agro-informatics Division (FAO Chief Information Officer) – and other experts [1-5].


His Excellency Ambassador Harry Verweij of the Kingdom of the Netherlands then outlined the Dutch vision of AI as a catalyst for higher productivity, lower environmental impact and greater climate resilience in agriculture [13-24]. He cited precision-farming examples – smart irrigation that can save up to 90 % of water, AI-driven optimal yield models and predictive disease-control tools [25-34] – and stressed that, despite its small size, the Netherlands is a global agro-innovation hub, anchored by firms such as ASML, NXP and Philips [27-28]. The ambassador highlighted Dutch support for low- and middle-income countries through ICT-agri collaborations, co-creation of tailor-made solutions for smallholders and SMEs, and an inclusive AI agenda aligned with the summit’s motto “People, Planet and Progress” [35-46]. He thanked India for hosting the summit, referenced the Indian Prime Minister’s speech to underline the inclusive agenda, and reaffirmed Dutch readiness to help Indonesia pursue OECD accession [45-48][47-50].


Sara, speaking for the OECD, emphasized that today’s volatile shocks – droughts, floods, pests, conflicts and economic crises – make resilience a global priority [54-55]. She presented evidence that AI-enabled precision spraying can cut pesticide use by up to 30% without yield loss and that computer-vision weed detection can halve herbicide application [60-61]. AI is also accelerating plant breeding, producing climate-adaptive varieties – drought-tolerant traits in sorghum and chickpea that boost yields by up to 25% under end-season drought, and AI-assisted hybrid-rice platforms in Asia [62-66]. Additional benefits include improved supply-chain traceability, market transparency and smart logistics [66-68]. Adoption, however, is highly uneven (96% of Australian farmers vs 12% of Chilean farmers using digital tools) [70-71], with barriers that include high costs, limited digital skills, fragmented data-governance frameworks and trust deficits [72-76]. To address these gaps, the OECD is releasing an AI policy toolkit – built on the OECD AI Policy Navigator, covering more than 2,000 policies across 80 jurisdictions and publicly available at oecd.ai [80-84]; this effort is complemented by work on digital governance in agriculture [86-88] and the “global AI impact comments” deliverable of the summit [86-88]. Embedding trustworthy-AI principles within an enabling ecosystem is part of the same OECD digital-governance work [88-94].


After the introductions, Dejan Jakovljevic (FAO, Director of the Digital FAO and Agro-informatics Division) set the scene by warning that “inclusiveness and the digital divide was still strong” and that farmers outside the digital ecosystem risk being left out of AI-driven solutions [112-118]. He showcased a low-tech phone-call advisory service from India that provides multilingual, real-time guidance on shrimp cultivation, pest and disease management, thereby lowering the entry barrier to AI-based knowledge [120-124]. Jakovljevic argued that anticipatory AI – early-warning tools, decision-support “situation rooms” and predictive analytics – is essential to protect the roughly 700 million people who still lack food security [127-136][137-139].


Prof Arwin Datumaya Wahyudi Sumari described Indonesia’s archipelagic challenges: 17,000 islands, a 36% land / 64% water split, exposure to the Ring of Fire, uneven ICT infrastructure, time-zone disparities and an unequal distribution of AI talent [148-166]. He outlined a suite of AI-driven use cases, including soil-nutrient prediction for new rice fields, optimisation of fertilizer and water dosing, “intelligent farming” that integrates sowing, growth monitoring and harvest logistics, short-term weather forecasting to prevent crop failures, and logistics optimisation that could reduce transport costs and price disparities between islands [167-192]. Indonesia’s national AI roadmap rests on seven pillars – regulation, ethics, investment, data, innovation, talent development and use cases – and is governed by a multi-helix model that brings together government, industry, academia, media and communities to ensure no stakeholder is left behind [196-203].


Industry expert Debjani Ghosh cautioned against “throwing AI at every problem” and urged a problem-driven approach that first defines clear objectives, secures high-quality data and establishes market pathways [206-224]. She identified food-waste reduction – through smarter logistics, cold-chain management and real-time distribution – as a priority leverage point [228-242] and proposed the creation of sector-specific Centres of Excellence (e.g., for cold-chain optimisation or climate-resilient crops) to align industry, data and commercialisation routes [252-258].


Dr Arun Pratihast highlighted three persistent obstacles to AI impact at the grassroots level: data scarcity and poor sharing, farmer mistrust of AI recommendations, and limited scalability of solutions that work only in high-tech environments [267-286]. He illustrated these points with the World Cereal Project, which aims to map global crop areas but suffers from missing data from major producers, and with a multilingual chatbot for cocoa farmers that combines computer-vision disease detection with low-tech connectivity [295-307]. He argued that robust data infrastructure – treated as a core component of the AI ecosystem – and active farmer participation are essential for models to be effective and trustworthy [304-308].


In her closing remarks, Sara thanked the participants and summarised the key take-aways: AI can markedly increase productivity, reduce inputs (water by up to 90%, pesticides by up to 30%) and enhance climate resilience; anticipatory tools can help predict and mitigate shocks; yet adoption remains uneven because of digital exclusion, data gaps, trust deficits and scalability issues. Realising AI’s promise will require problem-focused development, transparent and explainable models, responsible data practices and coordinated public-private multi-stakeholder partnerships – echoing the consensus that inclusive governance and capacity-building are indispensable [309-317][52-55].


Overall, the panel expressed strong agreement that AI holds great potential for more productive, sustainable and resilient agriculture. Different speakers emphasized complementary aspects – the Dutch ambassador on productivity and environmental impact, Ms Ghosh on waste reduction, and Dejan Jakovljevic on low-tech, anticipatory solutions – underscoring the need for blended approaches that combine advanced AI capabilities with low-tech delivery channels, robust multi-helix governance and targeted public-private mechanisms to bridge the digital divide and ensure that AI benefits are equitably shared.


Session transcript: Complete transcript of the session
Sara Rendtorff Smith

Session started. Thank you. This session is organised together with the Netherlands and Indonesia, as you’ll see reflected on the panel. And together with our distinguished panelists, we’ll explore how artificial intelligence can support the transition towards food systems that are more transparent, responsible, and inclusive. So this session is bringing together leaders from government, industry, academia, and international organizations to examine both opportunities and the practical challenges ahead, from data sharing and infrastructure to governance frameworks and the partnerships needed to ensure that AI benefits are broadly shared. And before we begin the panel discussion, it’s my honor to invite His Excellency Ambassador Verweij, Ambassador-at-Large and Special Envoy for AI of the Kingdom of the Netherlands, who will deliver welcome remarks. Welcome, Ambassador.

Harry Verweij

Thank you, Sarah. Is this working? Yeah. Thank you all for sharing this wonderful moment for me, because we’re here with Madam Gorshan and Admiral Samari from Indonesia. Together we formed the chair and co-chair of the sixth working group on economic growth and social good in preparation for the summit. And I just wanted to say how much I was impressed with you, Madam Gorshan, how you managed the working group and how the outcomes were drafted and delivered, especially also delivered in the plenary. It’s not up to me, but I say well done. Really great. But thank you very much. It was really a wonderful journey with you. So, ladies and gentlemen, the use of digitalization and artificial intelligence in agriculture is developing rapidly.

It offers enormous opportunities to increase the productivity and sustainability of local food production. It offers opportunities to improve nature conservation and to foster climate resilience in an inclusive and sustainable way. It also contributes to the autonomy and stability of countries. For the Netherlands, strengthening global food security is a strategic priority. Reliable, sustainable, and affordable food systems are essential for societal stability and economic development, particularly in vulnerable regions. The ambitions in our digitalization agenda for agriculture, nature conservation, and food are to connect digitalization to the transition of agriculture needed for more food security, reduction of environmental impact, and climate resilience via public and private investments. Our primary focus is on increasing productivity with lower environmental impact, improving climate adaptation, and strengthening the resilience of food systems through the responsible use of AI and digital technologies.

Concerning today’s topic, the Dutch ambition is to enhance food security by making food systems more resilient and sustainable for all stakeholders. In my vision, digitalization and AI are powerful tools for that. They have already proven that they can significantly increase food productivity and reduce food losses. In addition, AI solutions can enhance the efficiency and resilience of food systems by supporting farmers to respond to sustainability requirements, make risk assessments, implement sustainable farming practices, and enable them to provide trustworthy, high-quality data sets about those efforts to be shared throughout the supply chain. The Netherlands has a strong AI ecosystem. Thanks to our technical universities and partners, we have a strong ecosystem of AI research and companies like ASML, NXP, and Philips.

Despite its relatively small size, the Netherlands is not only a huge trader in agricultural produce, but also a global key player in agro-innovation and technology development, due to the interaction between plant and animal science and technological knowledge systems in the Netherlands. Companies, science and government invest mutually in solutions for societal challenges. Examples include precision farming with AI, such as water savings of up to 90% through smart irrigation, optimal crop yields with minimal input, and predictive models for disease control. To support digitalization in the agricultural sector in low- and middle-income countries, the Netherlands facilitates Dutch ICT agribusinesses to collaborate with businesses and startups there. We are aware in the Netherlands that strong ICT ecosystems and highly innovative agricultural ecosystems come together.

ICT agricultural solutions combine the in-depth agricultural knowledge and advanced technology development in my country. Examples are applications for early warning of pests and diseases, optimization of water use, and optimized plant breeding processes. Dutch companies and knowledge institutions are open to co-work on tailor-made solutions. Every country has its own typical local challenges and requires tailor-made solutions. Today, special attention will be drawn to AI-powered solutions for small farmers and SMEs in producing countries in order to enhance their access to global agricultural supply chains while protecting their data. Our goal is to improve the ICT ecosystem. We are committed to work together on this through knowledge sharing, co-operation, co-creation and capacity building, so that AI solutions are locally relevant, inclusive and accessible to farmers.

The need for an inclusive AI has also been central to our discussions in the working group on economic growth and social good leading up to the summit. It fits well the summit motto: people, planet and progress. So I would like to thank India for its leadership in focusing on an inclusive AI future and underline that the Netherlands stands ready to contribute by forging concrete partnerships, sharing knowledge and technology while striving for measurable results, in order to ensure that AI serves all of humanity. And I recall the Honourable Prime Minister’s speech in the plenary, to which he alluded as well.

Ladies and gentlemen, we are honored to organize this important event together with the OECD, the go-to organization when it comes to AI governance, and to discuss the opportunities for international knowledge sharing and cooperation with FAO, Wageningen University in the Netherlands, and the distinguished co-chairs of the working group on economic growth and social good, India and Indonesia. We warmly thank India for hosting this summit and look forward to continuing and strengthening our cooperation in the field of AI and agriculture, both bilaterally and within the Global Partnership on AI. We also thank our co-chair Indonesia for continuing cooperation, and we would like to highlight our appreciation and firm support of Indonesia’s ambition to join the OECD and its commitment to global standards and evidence-based policymaking.

International knowledge sharing and cooperation is needed to accelerate the development and application of new technologies with the help of trustworthy AI. Having AI and agricultural ecosystems on the agenda at this important AI summit is extremely valuable and a step forward in order to make a positive impact for all stakeholders. I wish you a fruitful meeting and look forward to our conclusions, and thank you for this opportunity. The floor is now yours, Sarah.

Sara Rendtorff Smith

Thank you, Ambassador. And on behalf of the OECD, I just want to thank once again the Netherlands for the leadership in convening this timely discussion. And as was just reflected in the Ambassador’s remarks, the Netherlands is obviously a pioneer in advancing food and agriculture innovation, and we are so delighted to have them as co-chairs as well of the OECD-FAO Advisory Group on Responsible Agricultural Supply Chains. From the OECD’s perspective, we clearly see agriculture and food systems today operating in an increasingly volatile environment, and farmers face a wide variety of shocks, from droughts, floods and pests to conflicts and economic crises, with growing frequency and severity. And so strengthening resilience while also ensuring inclusion, as was also stressed by Ambassador Verweij, is really an urgent global priority that I hope we can talk about today.

AI in this regard offers significant potential. We’re seeing AI systems and tools being applied to optimize the use of critical resources, as was already mentioned, such as water, fertilizer, and pesticides, and also to reduce environmental pressure while enhancing productivity. The OECD and GPAI, which also met today in a ministerial session, have been examining AI use cases in agriculture with a focus on the EU and on Southeast Asia, and we continue these dialogues. And what we’re seeing there is that the evidence from real-world deployment is really promising. So, for example, AI-enabled precision spraying has reduced pesticide use by up to 30 percent, and this is actually without compromising yield, while computer-vision “green-on-brown” systems can cut herbicide use by up to half by targeting only the weeds that require the treatment and not the crops.

And in addition, we’re seeing how forecasting, monitoring, and early detection of climatic and biological threats mean that AI systems can strengthen our capacity to respond to crises before they even escalate, so some degree of preemption. AI is also revolutionizing agricultural innovation itself and supporting more efficient plant breeding that can develop climate-adaptive varieties in a fraction of the traditional time. And here we also have some interesting data: in Central Europe, researchers have identified drought-tolerant traits in crops such as sorghum and chickpea that boost yields by up to 25% during end-season drought. And in Asia, meanwhile, we’re also seeing a global AI hybrid-rice platform demonstrating how AI can shorten breeding cycles by predicting optimal parent combinations and enhancing resilience in one of the world’s most vital staple crops.

Beyond the farm gate, AI is also reinforcing the resilience of our entire food supply chains. AI-enabled traceability, market transparency, and smart logistics can reduce losses, improve compliance, and strengthen food safety systems. Evidence from these digital traceability initiatives across the OECD members demonstrates a growing maturity of exactly these systems, so something really to look out for. But technology alone, as we know, does not ensure impact, and so adoption is where we’re really looking now, and that remains quite uneven still. And this is obviously why we’re all here in Delhi. So while we’re seeing in Australia that 96% of farmers are using digital tools, the same number for Chile is just 12%. And this is highlighting a digital divide that could deepen existing inequalities if we don’t look to address it.

There are also important challenges in the use of AI, and this goes back to the core work of the OECD, looking not just at the benefits but also the challenges associated with AI. Farmers and regulators need transparency in how AI systems make their decisions, but at the same time fragmented data governance frameworks introduce complexity to the use of AI tools that support trade, traceability, and resilient food supply chains across borders. And this highlights the need for greater interoperability, which is also a theme at this summit. So structural barriers, including high cost, limited digital skills, and lack of trust, are some of the things that continue to slow the uptake of AI.

So bridging these gaps, which should be a priority for all of us, requires investment in connectivity and other digital infrastructure, in skills and affordable solutions, so that smallholders, women, and farmers in remote areas, who play a critical role in enhancing global food security, are also able to benefit from AI’s potential. And farmers must be able to trust that their data is collected, shared, and used responsibly. So in this area, the OECD is working to help countries put in place policies that promote these objectives through an AI policy toolkit. And this toolkit will provide practical, context-specific guidance to countries. The toolkit builds on our policy navigator. If you haven’t already visited it, it’s on oecd.ai. And it so far covers more than 2,000 policies across 80 jurisdictions.

So this is where you can find examples of national AI strategies, but also policies in specific sectors. And we continue to update this, and for anyone in this room representing a country not represented, we encourage you to visit and to also contribute your policies. We’re also advancing work on digital governance in agriculture. This is within GPAI, which I mentioned earlier, a priority there, where we examine governance models across countries and their applications for responsible digital transformation more broadly. We also see strong complementarities with the global AI impact comments, which is a key deliverable of this summit, and which shares concrete use cases of AI with known impact and scaling potential. So for the OECD, advancing trustworthy AI consistent with the OECD AI principles requires a strong enabling ecosystem alongside technological progress.

And what we’re seeing is that if we succeed, we’re really in a position to raise productivity sustainably and also strengthen resilience in agricultural supply chains, including by ensuring that the benefits of innovation are widely shared and existing divides are not deepened in the process. So I really look forward to this panel’s insights to help us take this conversation forward, looking at practical pathways to achieve this vision. And with this, it’s my pleasure to introduce our esteemed panel. Many have traveled far to be here. So first, I would like to introduce Professor Arvind Sumari, who is an Indonesian Air Force officer and professor at the State Polytechnic of Malang. Welcome. And also we have with us, next to Professor Sumari, we have Mr.

Dayan Jakoblevich. He’s Chief Information Officer and Director of the Digital FAO and Agro-informatics Division at FAO of the United Nations, based in Rome. We also have with us today the pleasure of having Ms. Debjani Ghosh, Distinguished Fellow and Chief Architect of the NITI Frontier Tech Hub. And finally, it’s my pleasure to introduce Dr. Arun Pratihast, Senior Researcher at Wageningen University Environmental Research. So welcome to this session. And what we will see today is each of our speakers bringing a unique perspective on how AI can help build food systems that are resilient and inclusive, which is the topic of the session. And after the panel discussion, I will also be giving the floor to anyone in the room who might have questions.

So now let’s begin. I’ll hand the floor over to Dan, who will set the scene for the conversation. Dan, you have the floor.

Dejan Jakovljevic

Thank you very much. And I would like to welcome everyone on behalf of the Food and Agriculture Organization. I thank our hosts here at the summit from India, but also the OECD and the government of the Netherlands. Ambassador, thank you. When we look at agri-food: I heard in the interventions before about agriculture and food; we look at agri-food systems from the FAO perspective. Why? Because if we look at agriculture, food is one product, but not the only one. There is a whole ecosystem behind agriculture of products that are not necessarily food, and they are equally important when we make considerations, when we look, for example, at water use, transport and many others. So from the agri-food systems perspective, AI brings us fantastic opportunities. And if we look at our topic today in terms of inclusiveness and resilience, inclusiveness is still a big issue. If we just think back maybe two, three years, before, let’s say, ChatGPT came out, the inclusiveness and the digital divide was still strong and present.

And the key issue is that it used to be possible to exist outside of the digital ecosystem. We all know we could maybe go to the bank, but nowadays it’s not. So if a farmer or communities are outside of the digital ecosystem, they suddenly are outside of any ecosystem, almost. And now with AI, it makes it even worse. So this is something we need to continue to press on, jointly, in making sure that everybody has equal opportunity within the digital ecosystems. And on the positive note, on AI when it comes to inclusiveness: we see very encouraging opportunities with AI. What I mean by that is we can, in fact, lower the entry barrier to knowledge.

Just two days ago, I’ve seen here, actually, this opportunity at the event: great advancements, the new tool that was produced by the government of India where farmers can, with a phone call, as not everybody has a smartphone, get advisory in the area of agriculture, from shrimp cultivation to pest and disease management and similar. So this is great. The service can be in many languages. So this is a fantastic example where AI can help us actually lower the entry point to AI. At the same time, for governments, it’s even more difficult to have the capacity to build the AI infrastructure to provide such services. So this is, again, I think, one area, and forums like this help us consider what it takes to build it.

When we look at resilience specifically, I was very happy to hear resilience mentioned in the previous openings, and from the Ambassador we heard about anticipation. So I would say this is the key word. The key word is anticipation. So anticipate the shocks to the agri-food systems that impact food security. We know we have natural disasters. We know we have also conflicts. We have many different factors that impact agri-food systems. So building the systems that are capable of absorbing the shocks of these situations, and anticipating: anticipatory actions so that when the shocks happen, we know what can be done to get over these shocks. So this is where AI can be a great enabler, where we can then, with new capabilities, anticipate these shocks, and with the help of data and our joint work, really put together decision-making tools, anticipatory tools, situation rooms, to be able not only to quickly anticipate, but, when something happens, we don’t really improvise; we have tools in hand to address these situations.

We still have about 700 million people without food on the table today. So from this perspective at FAO, I’m sure we share the same sense of urgency to actually do something. So I wanted to say, from this perspective, we are very grateful to be part of this conversation, and thank you for your time. And we can work together in finding new solutions. So I thank you for that, and I’m looking forward to our panel. Thank you.

Sara Rendtorff Smith

intelligence research group and are the co-inventor of the Knowledge Growing System, a cognitive artificial intelligence framework designed to enable adaptive and evolving decision making. So from Indonesia’s vantage point, we’d be interested to hear where you see the most significant AI capability gaps across the agricultural system and where you see the greatest opportunities at the same time for AI to make food supply chains more efficient and resilient, something we also heard as a priority. And we also know that Indonesia is one of the countries advancing an ambitious AI agenda. So if you could briefly outline also the key pillars of Indonesia’s AI roadmap, this is of interest, and explain how you are balancing horizontal AI governance with more sector-specific regulation in agriculture.

Over to you. Thank you.

Arwin Datumaya Wahyudi Sumari

Thank you, Sarah. First, I would like to deliver my appreciation and congratulations to the host, India, and also my chair, Ms. Goss, and also my dear colleague from the Netherlands, Ambassador Harry Verweij, for co-chairing our working group together, and also the other speakers, and Sarah, thank you, and our audience. Regarding your question about artificial intelligence for Indonesia: as we already know, Indonesia is not only an agricultural but also a maritime nation. We were self-sufficient in rice about 20 to 30 years ago, and at that time it wasn’t AI that made our country self-sufficient in rice; but now AI is something that can help our program to become a self-sufficient country in rice again.

We are much aware that AI technology is developing very fast, not only in America or Europe, but also in Asia, especially in Indonesia. This rapid and widespread application across all agricultural potential areas presents significant challenges, especially given the potential locations, which are separated by ocean. And you already know that Indonesia has 17,000 islands separated by ocean. We only have 36% land, 64% water, and 100% air. And this is a challenge for us. If you don’t believe me, you can count the number of our islands. And we also have another challenge: we are living above the Ring of Fire. There are also other challenges for the people of Indonesia. And as I mentioned previously, this gap is further widened by a lack of evenly distributed supporting AI infrastructure, such as telecommunications.

We have three different time regions: the western region, the central region, and the eastern region, and each one differs by one hour from the next. And also, there is a problem with the unequal distribution of AI talent. I think this problem exists not only in Indonesia, but all over the world. In terms of the biggest opportunities for utilizing AI in the food supply chain, especially in an agricultural country like Indonesia, there are efforts such as using AI for prediction of soil condition and nutrition before opening new land for agriculture. Our president has a program to open almost 1 million hectares of new rice fields, thousands of hectares in some big islands of Indonesia, in order to achieve self-sufficiency in the next five years.

And then we also use AI for prediction of the most appropriate food crops given the soil condition and nutrition of existing agricultural land. We have seventeen thousand islands, and each island has a different soil condition and different soil nutrition. And you can use AI to predict what kind of nutrition, what kind of soil condition, what kind of vitamins belong to that soil. So we can predict the proper crops, the proper plants that have to be planted in that area. The second one is about optimizing the fertilizer content to produce the best harvest result, as well as optimizing the volume of water required according to the type of fertilizer given. Some of my students did experiments on how to predict the percentage of fertilizers combined together to get the most optimal production of any kind of crop.

Whether it is corn, rice, or sweet potato. And then we also can use AI for intelligent farming. We don’t say smart farming; smart is not really intelligent. Intelligent is different: there is knowledge that has to be grown in the system. So intelligent farming is just like a human: they grow their knowledge within their brain. It goes from optimizing the seed planting in the land, so that plants can grow and develop healthily to produce the best products, to optimization of the harvest process until delivery to logistics warehouses. So it’s just like an end-to-end mechanism. And then we also can predict the weather dynamics, such as short-term prediction of floods and the like. So we can predict the weather dynamics to obtain the right conditions.

Those are the right conditions for planting seeds and reducing the level of crop failures. Crop failures often happen if farmers fail to predict what kind of pest, what type of soil, and everything else they face. And then the last one: optimizing the logistics transportation route to reduce operational and other unnecessary costs. You can count how much it costs operationally to deliver crop production from one island to another island in Indonesia. The price in the eastern area can be double or triple. So if we buy rice in the western area for only $1, it can be $3, $5, or $6 in the eastern area. So that’s why we need AI to optimize the transportation and logistics routes.

Whether it is over water, from the ocean or sea, or from the air. Regarding policy and regulation, you asked about the AI roadmap, right? And then about how to balance horizontal AI governance with sector-specific regulation in agriculture, right? Yes, we are proud. Indonesia is proud to be a leader in our region, exploring how AI policy and regulation can be powerful tools for promoting trustworthy AI, especially in critical verticals like the agricultural sector. Agriculture is very important to Indonesia because most of the people in Indonesia are farmers, not only on Java Island but also on the other big islands of Indonesia. If you look, there are five big islands in Indonesia: from the western area, Sumatra, then Java; in the central area we have Borneo and also Sulawesi, or Celebes; and the biggest one in the eastern area is Papua Island, which still has so much area that can be explored to become rice fields. Our national AI roadmap is not merely a technological blueprint; it is a strategic framework designed to create an ecosystem that harnesses AI for inclusive and resilient systems, including the food system. So there are two keywords here: inclusive and resilient. Inclusive means it must be transparent: AI must be transparent, AI must be explainable.

We’ve been having problems with neural-network-based systems, the black box that cannot be explained plainly. And then the second keyword is resilient. This is very important for an agriculture-based nation. So the implementation of AI needs a strong and sustainable national ecosystem, as my dear colleague the Ambassador mentioned. AI cannot be implemented, cannot be applied, without a strong and sustainable ecosystem that brings together all stakeholders: not only government, but also business and industry, communities, media, and also academia. So we have a concept of the helix; maybe you have heard about the quad helix, five helix, six helix. That’s very important. So when we were developing the AI roadmap, the government, in this case the Ministry of Communication and Digital Affairs, opened a voluntary contribution from all stakeholders, not only the government but also industry, academia, media and communities. So our roadmap has seven pillars. These include AI regulation; AI ethics, which is important; the third one is investment, as was mentioned before about financing. When I was attending a forum with AI experts, they mentioned financing; financing is very important, because without it there is no AI ecosystem: financing and investment. Then the fourth is AI data, the fifth AI innovation, the next AI talent development, and the last one is AI use cases. So because we embrace all stakeholders, we ensure there is no one left behind.

Thank you.

Sara Rendtorff Smith

Thank you very much, Professor. We can come back to those in more detail later, perhaps in the Q&A, but I really want to thank you for sharing the promising use cases from Indonesia, very instructive, I think, for this discussion. You talked about the helix and how we work together; now I would like to turn to the industry perspective from Ms. Ghosh. India, as we mentioned, also co-chairs the summit working group. We are seeing AI quickly become foundational to agricultural productivity and food security, but the big question now is whether, as we mentioned, it will deepen inequalities or indeed democratize the opportunity. So from your vantage point, Ms. Ghosh, what practical steps are needed to broaden access to AI capabilities so that emerging economies and smallholder farmers can also benefit and fully participate? And as adoption accelerates, hopefully broadly, how should public-private partnerships evolve to scale responsible AI deployment and prevent an AI divide? Thank you.

Debjani Ghosh

It’s a question with a very long answer; I’ll try to keep my answer very short. But before I do that, I have to acknowledge the presence of one of the biggest experts in this field of agriculture in this room, Professor Ramesh Chand, who is also a very esteemed member of NITI Aayog. I requested him not to come to this session; I’m going to be too nervous with you sitting right in front of me. But let’s see if we live up to his expectations or not. You know, the biggest problem with AI today is that we throw AI at every problem that exists.

And we expect that something will happen out of it, right? As a result, we generalize the technology a bit too much. See, the thing with AI is, if you really want to unlock the technology, you have to know what exactly you are solving for. What problems? And then you have to go deep, because there is so much that has to come together for AI to work. For example: is the data in place? How good is its quality? Is the ecosystem in place? Are the capabilities in place? So AI requires investment, and AI is a pretty deep investment overall. That is why it’s very important to understand what problems you want to solve with AI, and I think that’s one of the biggest issues today, because we are not taking the time to think it through.

We keep saying AI is the magic word for everything, right? So now let’s look at the food system, and I hope I’m correct, Professor Chand; I’ve learned this a bit from you as well. I think the biggest issue today is that while the world is producing enough food to feed, I think, 8 billion people, there are still millions and millions who are hungry. So there’s a paradox. And when you start breaking it down further to understand the exact problems behind why this exists: is it distribution? Is it access to food? There is surplus and there is deficiency, and there is no bridge to ensure that distribution happens in real time where it is needed.

And what this results in is a tremendous amount of food shortage and food wastage. Some of the culprits, when we think of it: of course geopolitical wars are a big culprit, conflicts are a big culprit, but climate is another big one. So this is at least how I look at it, because I look at everything through a tech lens; I’m by no means an expert in the domain. But when I look at it through a technology lens and ask how best to apply the technology to this problem, this is the domain we have to play in. So if I have to say where I want to go deep, the biggest problem to solve for in the food supply chain, at least purely from a tech lens, is wastage.

How do I bring down food wastage? What role can AI play in bringing down food wastage? So then you start looking at logistics, at supply, at whether cold chains exist globally or not. You start looking at trade, at geopolitical agreements, because all of that comes into play. Now, in terms of industry coming together to solve with AI: again, if you want the best out of industry, you have to ensure there is alignment on the problem statement you want to solve. Otherwise everyone will come and do the same pilot everywhere. That’s what’s happening today. When you look at AI deployments around India and around the world, and because of the AI commons that we have built, every country is trying out the same thing, farmer advisory, right?

Every country is trying it out, but why is it not scaling? Why are we not solving other problems? So again, it’s very important to identify the problem statement. How do you ensure that when industry gets involved, there is a route to market and a route to commercialization? Because that becomes very important for industry. One of the things we advocate is setting up a centre of excellence, a centre of innovation, identified to solve specific problems. I think one of the problems we have with COEs today is that you have AI COEs, you have blockchain COEs; I really don’t understand what that means. But what if we had a COE asking how we ensure the cold-chain problem is solved across the country?

How do we have a COE that ensures climate-resilient crops can be grown in XYZ areas, right? And then bringing the industry together to ask how we collaborate to create, I think, gives you the right kind of outcomes. Thank you.

Sara Rendtorff Smith

Thank you very much, Ms. Ghosh. And this is a perfect segue, I think, to our next speaker, turning to the research community and how to bridge research on advanced AI into more practical tools. So, Dr. Pratihast, I would like to turn to you now for some examples of how these advanced AI tools can really be put to good use in lower-tech farming environments. Maybe you can give us some concrete examples of what distinguishes those who succeed from those who don’t, speaking also to some of the points Ms. Ghosh raised. Thank you.

Arun Pratihast

Thank you. Thank you for the invitation. This is a very timely discussion. Of course, when we talk about AI, we often talk about the technology: how fast the models are, how big the datasets they can handle, what the parameters are. That is what we always talk about. But think about the food system; as was mentioned earlier, the food system has different layers, and at the bottom of those layers are smallholder farmers. Those farmers operate in a different environment. If you look at last year, billions of euros of investment went into the tech industry to build more models. Did the same thing happen for smallholder farmers? No. So there is often a problem: what we want to solve in the server room or on the computer doesn’t work in the field.

So we really need to think about how the AI and models we are developing can be applicable at the grassroots level. Within Wageningen, and personally, I have been working in Asia, Africa and Latin America, and there are basically three problems we are facing in this whole AI domain nowadays. First is data scarcity. There is still not enough data, and the data is not shared. As you mentioned, there is no ecosystem, no fair infrastructure where data can be shared, and that hinders the models. A model may work at the global scale, but when you want to work at the local scale, it doesn’t work; it doesn’t provide the input that smallholder farmers expect.

Second is trust. Often the farmers don’t own the solution, and the model and the farmers’ expectations differ, so there is not much trust in applying it at the local level. That’s why most of these advisories fail: farmers don’t follow the advice because it doesn’t make sense to them. The third thing is scalability. We often think of scale only as technical scale, but processing something fast doesn’t mean it can be applied the same way everywhere. So we need to think differently. Let me give three concrete examples. One example is about food security: we need to understand the map.

Where are the crops? There is no global map that is accurate enough. So, with the help of the European Space Agency, four years ago we started the WorldCereal project, where we try to map global cropland. Still, the maps are not perfect, because India, China and many other countries don’t share their data; and if there is no data, we may have a fantastic model, we have built very nice geo-embeddings with NASA Harvest, but the applicability of the model in those countries remains very low. The second example is about high-tech solutions in low-tech environments, for example the chocolate industry: cocoa agroforestry is really suffering from climate change. We have established many advisory services, not from a researcher or tech perspective but by engaging the farmer perspective, and that works. We basically built a chatbot in their language that really understands what they need and how we can translate their problems; they know which diseases are coming, so we are using computer vision from their lens, and then we are training, and that works. So there are a couple of things we really see: if you want to make these things work, you need to make sure the solutions work in a low-tech environment.

Connectivity has mostly gone up; people are on social media, but the data is still not there, and the data infrastructure is not there. The tech industry, and we as modellers, always see data as input and output. Data should be treated as infrastructure, and we should engage farmers in that infrastructure. Only then can we achieve this.

Sara Rendtorff Smith

Thank you very much. And with this, unfortunately, we’re coming to a close on time. Maybe the speakers can be kind enough to stay a little after, if there are questions; we won’t have much time for Q&A. But let me thank you all for providing a diverse set of perspectives for this timely discussion, and thank the Ambassador of the Netherlands for framing this important conversation. Some of the key takeaways, perhaps: there is vast potential, and we heard the Indonesian perspective with very concrete examples, as well as Dejan talking about the potential for anticipatory action. We heard about the global, and even domestic, paradox of food insecurity, where there really is enough food but it may not be distributed properly. Importantly, to have impact with AI we need to make sure it is problem-driven, driven by the local context and by the farmers who need to use it. And lastly, a very important point, which is exactly core to the work we do at the OECD: to drive this adoption we also need to ensure there is trust in what is produced.

And this obviously requires a number of factors, such as explainability, transparency and so on, as well as responsible data collection. With that, let me thank the panelists for their rich inputs. Please do stick around a little for some questions in the margins. And thanks again to the Kingdom of the Netherlands for co-hosting this event with the OECD. Thank you.

Speaker 5

Thank you. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (17)
Factual Notes — Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Sara Rendtorff Smith opened the session, introducing a multi‑stakeholder panel on AI for transparent, responsible and inclusive food systems.”

The knowledge base lists Sara Rendtorff Smith as the session moderator representing the OECD, confirming her role in opening the panel [S3].

Additional Context (medium)

“The Netherlands has a strong ICT ecosystem combined with an innovative agricultural ecosystem, making it a global agro‑innovation hub anchored by firms such as ASML, NXP and Philips.”

Source S12 describes the Netherlands’ strong ICT and highly innovative agricultural ecosystems, supporting the claim of a Dutch agro-innovation hub, though it does not name specific firms [S12].

Additional Context (medium)

“AI‑enabled precision spraying can cut pesticide use by up to 30 % without yield loss.”

An autonomous spraying robot reported in S100 can reduce pesticide use by up to 95%, providing additional context that AI-driven spraying can achieve reductions even greater than the 30% cited [S100].

Additional Context (low)

“Smart irrigation can save up to 90 % of water.”

S31 discusses precision-agriculture techniques that optimise water use, confirming that AI-based irrigation can dramatically reduce water consumption, though it does not specify the 90% figure [S31].

Confirmed (medium)

“AI‑driven tools such as remote sensing, drones and predictive analytics enhance precision agriculture practices.”

Source S22 lists remote sensing, drones, and predictive analytics as AI-powered tools that improve precision agriculture, confirming the claim [S22].

External Sources (101)
S1
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Debjani Ghosh- Distinguished Fellow at NITI Aayog, former role with NASCOM
S2
Panel Discussion: 01 — -Debjani Ghosh- Distinguished Fellow, Niti Aayog (role: moderating the ministerial conversation)
S3
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — -Arwin Datumaya Wahyudi Sumari: Indonesian Air Force officer and professor at the State Polytechnic of Malang, co-invent…
S4
Knowledge Café: Youth building the digital future – WSIS+20 Review and Beyond 2025 — – **Speaker 5** – Role/expertise not specified Speaker 5: Sure. So what we talked about as a group is we discussed this…
S6
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued) – session 5 — The Chair’s instrumental role in facilitating consensus-centric discussions has been recognised with gratitude by South …
S7
AI for food systems — – **Dejan Jakovljevic**: CIO and Director, Digitalization and Informatics Division, Food and Agriculture Organization of…
S8
WSIS prepares for Geneva as momentum builds for impactful digital governance — As preparations intensify for the World Summit on the Information Society (WSIS+20) high-level event, scheduled for 7–11…
S9
Open Forum #18 Digital Cooperation for Development Ungis in Action — Dejan Jakovljevic: Yes, of course. Before I mentioned the project, first of all, thank you for inviting me also to the s…
S10
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — – Harry Verweij- Arwin Datumaya Wahyudi Sumari- Sara Rendtorff Smith – Harry Verweij- Dejan Jakovljevic- Sara Rendtorff…
S11
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — -Arun Pratihast: Senior Researcher at Wageningen University Environmental Research -Speaker 5: Role/title not mentioned
S12
https://dig.watch/event/india-ai-impact-summit-2026/transforming-agriculture_-ai-for-resilient-and-inclusive-food-systems — He’s Chief Information Officer and Director of Digital FAO and Agroinformatics Division at FAO of the United Nations, ba…
S13
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — -Sara Rendtorff Smith: Session moderator, representing the OECD -Speaker 5: Role/title not mentioned
S14
AI for Good – food and agriculture — Dongyu Qu: Excellencies, ladies, gentlemen, good morning. A year ago, we all gathered for the Previous AI for Good Summi…
S15
Building Climate-Resilient Systems with AI — But here’s what we came up with. The first one, I mean, this is a kind of bottom line, but it’s important. AI does have …
S16
WS #279 AI: Guardian for Critical Infrastructure in Developing World — 2. Establish public-private partnerships for knowledge transfer and technology access. 5. Increase collaboration and kn…
S17
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And that’s clearly something we try to do. And, of course, in addition, we need absolutely to have computer facility at …
S18
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — In conclusion, AI has the potential to transform the consumer landscape by empowering consumers and assisting regulators…
S19
Opening of the session/OEWG 2025 — Kazakhstan: Thank you for giving the floor. At the outset, Kazakhstan would like to express its sincere gratitude to y…
S20
Opening of the session — – Tailored capacity building initiatives Kazakhstan: Thank you, Chair, for giving the floor. Mr. Chair, distinguished d…
S21
WS #55 Future of Governance in Africa — Speaker 4: Moderator, excellencies, ladies and gentlemen, let me say that it is an honor today to address you. And I’…
S22
Sustainable development — AI-powered tools like remote sensing, drones, and predictive analytics can enhance precision agriculture practices. They…
S23
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — Development | Economic Sensors and drones collect real-time data, while machine learning models optimize irrigation, pe…
S24
Digital divides & Inclusion — However, the cost of internet access remains a significant barrier in some parts of Africa, notably in The Gambia where …
S25
Open Forum #5 Bridging digital divide for Inclusive Growth Under the GDC — The Minister of Lesotho emphasized the need to eliminate duplication among various digital initiatives and called for st…
S26
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S27
Main Session 2: The governance of artificial intelligence — Kakkar stressed the importance of meaningful multi-stakeholder participation and strengthening mechanisms like the Inter…
S28
Open Forum #30 High Level Review of AI Governance Including the Discussion — Legal and regulatory | Development Moving from Principles to Practice The toolkit will be an online interactive tool a…
S29
WS #123 Responsible AI in Security Governance Risks and Innovation — Legal and regulatory | Human rights principles She emphasizes that UN-sponsored platforms like UNIDIR’s RAISE and IGF p…
S30
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatoryreleaseda beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S31
WS #49 Benefit everyone from digital tech equally &amp; inclusively — – Mobile apps that provide farmers with real-time weather data and crop management advice. Ricardo Robles Pelayo: So, …
S32
AI Meets Agriculture Building Food Security and Climate Resilien — Artificial intelligence | Environmental impacts | Social and economic development He highlights the use of century‑long…
S33
The State of Digital Fragmentation (Digital Policy Alert) — The representation of developing countries in data governance is a crucial concern. It argues that existing data governa…
S34
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Adamma Isamade stresses the need for a multistakeholder approach in policymaking. She argues that policies often lack in…
S35
WS #205 Contextualising Fairness: AI Governance in Asia — Milton Mueller: I turned the mic on. I just turned it on. That helps, right? So can we get too hyper-contextualized?…
S36
Ministerial Roundtable — Despite progress in digital transformation, there are critical gaps in AI governance, with only 21% of governments world…
S37
The Global Power Shift India’s Rise in AI & Semiconductors — “So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resource…
S38
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — So I think the accountability on humans is what we have to focus on. And going back to your question, if you’re talking …
S39
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S40
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — **Judith Okonkwo** provided crucial insights into practical challenges of implementing AI technologies across different …
S41
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — And if they don’t, they’ll still make decisions, but they’re not going to be very good decisions. You know? So the secon…
S42
AI for agriculture Scaling Intelegence for food and climate resiliance — A lot of questions in the same question. So what I’ll do is I’ll just first take you through the initiatives. First of a…
S43
AI in Action: When technology serves humanity — Again, the farmers themselves remain decision-makers. They weigh the advice against their experience, their land, and th…
S44
High-Level Dialogue: The role of parliaments in shaping our digital future — AgriTech is something extremely important. We all suffer from food safety issue and this relates to everything else, so …
S45
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — The presentation demonstrates high internal coherence with a balanced perspective that acknowledges both the potential a…
S46
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — – Digital tools should be employed to improve outcomes for these at-risk groups, who often lack sufficient employment an…
S47
Pre 3: Exploring Frontier technologies for harnessing digital public good and advancing Digital Inclusion — AI systems reflect the quality and inclusiveness of their underlying data and decision-making processes. Currently, both…
S48
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Algorithms are not just applications of mathematical codes that support the digital world. They are part of a complex po…
S49
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — 1 ,000 hectares in some big island of Indonesia in order to get the safe efficiency in the next five years. And then we …
S50
Survival Tech Harnessing AI to Manage Global Climate Extremes — “It has to be a hybrid model which has to be connected with the physical systems of the various sensor fabric and the sa…
S51
AI Meets Agriculture Building Food Security and Climate Resilien — Thank you, sir, for your visionary address. You always continue to inspire us to aim higher and achieve better. And unde…
S52
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Low to moderate disagreement level. The speakers largely agree on the need for proper data foundations, leadership invol…
S53
AI 2.0 Reimagining Indian education system — High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers, representing d…
S54
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S55
What policy levers can bridge the AI divide? — ## Sector-Specific Applications **The Philippines** developed their strategy with strong presidential leadership and mu…
S56
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Abeer Alsumait: assistive technologies, but there are challenges like a very minor issue might also be a kind of we ca…
S57
WS #270 Understanding digital exclusion in AI era — – Florent: Professor of law at the University of Zurich – Mbongi Nimsimangasori: Postdoctoral researcher with the Johan…
S58
Digital divides & Inclusion — Another important issue highlighted in the analysis is the lack of accessibility and inclusion for people with disabilit…
S59
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S60
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S61
WSIS Action Line C2 Information and communication infrastructure — Data quality and governance as fundamental requirements Legal and regulatory | Human rights Regulatory Frameworks and …
S62
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — High level of consensus on fundamental principles with constructive disagreement on implementation details. This suggest…
S63
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — Cloud strategy requiring European/French infrastructure but inadequate market supply to meet public administration needs…
S64
WS #98 Towards a global, risk-adaptive AI governance framework — Sector-specific and use case-specific governance may be needed rather than one-size-fits-all approaches
S65
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — Low level of fundamental disagreement with moderate differences in implementation strategies. The speakers largely agree…
S66
Scaling AI for Billions_ Building Digital Public Infrastructure — The conversation highlighted the critical importance of building proper foundations before implementing AI capabilities,…
S68
Press Conference: Closing the AI Access Gap — An important aspect of the alliance’s work is the creation of relevant international frameworks and public-private partn…
S69
Multistakeholder Partnerships for Thriving AI Ecosystems — The panel revealed sophisticated understanding of how different stakeholders must collaborate whilst maintaining distinc…
S71
AI for Good – food and agriculture — AI-powered advisory services have reduced costs from $30 to $3 per farm with potential to reach $0.30. Partnership with …
S72
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — 1 ,000 hectares in some big island of Indonesia in order to get the safe efficiency in the next five years. And then we …
S73
Digital divides & Inclusion — In conclusion, the digital divide between the developed and developing world is a significant issue that requires attent…
S74
What is it about AI that we need to regulate? — Based on discussions across multiple IGF 2025 sessions, several fundamental assumptions about digital inclusion need cha…
S75
Open Forum #5 Bridging digital divide for Inclusive Growth Under the GDC — Key unresolved challenges include bridging funding implementation gaps, developing mechanisms for harmonizing existing n…
S76
Panel Discussion: 01 — When asked to rate global AI infrastructure progress on a scale of one to ten, Minister Patria gave it 6 out of 10, high…
S77
Regional Leaders Discuss AI-Ready Digital Infrastructure — Hamam Riza, co-chair of Indonesia’s National AI Roadmap 2030 and president of the Collaborative Research and Industrial …
S78
Huawei’s dominance in AI sparks national security debate in Indonesia — Indonesia is urgently working tosecure strategic autonomy in AIas Huawei rapidly expands its presence in the country’s c…
S79
The Global Power Shift India’s Rise in AI & Semiconductors — “So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resource…
S80
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — Development | Economic Guo advocates for strengthened collaborative mechanisms that bring together multiple stakeholder…
S81
From India to the Global South_ Advancing Social Impact with AI — -Public-Private Partnership for Scale: Emphasis on collaboration between government, industry, and academia to create em…
S82
https://dig.watch/event/india-ai-impact-summit-2026/transforming-agriculture_-ai-for-resilient-and-inclusive-food-systems — Every country is trying it out, but why is it not scaling? Why are we not solving for other problems? So again, it’s ver…
S83
Exploring Emerging PE³Ts for Data Governance with Trust | IGF 2023 Open Forum #161 — Certain barriers, such as low budgets, less technical focus in decision-making teams, and low priority given to smaller …
S84
Challenges and solutions for broadband infrastructure deployment in developing countries, rural and remote areas — Innovations to overcome deployment barriers and labour scarcities were covered, including the use of pre-connectorized o…
S85
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — **Judith Okonkwo** provided crucial insights into practical challenges of implementing AI technologies across different …
S86
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Brandon Mello introduced a sobering statistic: 95% of AI pilots never reach production deployment. The primary barriers …
S87
Main Session on Artificial Intelligence | IGF 2023 — Finally, it was suggested that an independent multi-stakeholder panel should be implemented for important technologies t…
S88
WS #102 Harmonising approaches for data free flow with trust — This discussion, moderated by Timea Suto, brought together experts from various sectors to explore the challenges and po…
S89
DPI High-Level Session — The World Summit on the Information Society (WSIS) hosted a session that brought together a diverse group of stakeholder…
S90
Panel 2 – Anticipating and Mitigating Risks Along the Global Subsea Network  — So whoever’s happy to take my question. So last year, just piggybacking off of John’s question on the panel yesterday on…
S91
High Level Dialogue with the Secretary-General — He mentions the potential of artificial intelligence as a tool for development if used equitably.
S92
Global Digital Governance &amp; Multistakeholder Cooperation for WSIS+20 — Ernst Noorman: Good morning everyone. Very much welcome to this session. First of all, my name is Ernst Noorman. I’m the…
S93
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — The tone was consistently collaborative, optimistic, and forward-looking throughout the session. Delegates maintained a …
S94
https://dig.watch/event/india-ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-morning-session-part-2 — Minister Vaishnav, Excellencies, ladies and gentlemen, let me begin by giving our thanks and expressing our sincere appr…
S95
Parallel Session A5: Achieving Sustainable and Resilient Transport and Logistics including inSIDS — Nowadays, countries face multiple simultaneous crises, such as health, environmental, and geopolitical conflicts.
S96
Opening Ceremony | GSCF 2024 — Moreover, the contributions of international bodies like the International Maritime Organization (IMO) and the United Na…
S97
Emerging Markets: Resilience, Innovation, and the Future of Global Development — Egyptian Minister Al-Mashat reported Egypt’s achievement of 5.5% growth despite regional conflicts and reduced Suez Cana…
S98
UNSC meeting: Peace, climate change and food insecurity — Climate change amplifies existing environmental, economic, social and security vulnerabilities Climate change is increa…
S99
Strategy — – Forecasted Weather Data: AI is helping the farmer to stay updated with data related to weather forecasting. The foreca…
S100
Foreword — Through the Asterix project the enterprise has developed an autonomous spraying robot, AX-1. The robot uses deep learnin…
S101
National Strategy for Artificial Intelligence — The government will also initiate a new ‘Intelligent irrigation’ pilot project using artificial intelligence to develop …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Harry Verweij
3 arguments · 143 words per minute · 989 words · 414 seconds
Argument 1
AI can boost yields, reduce inputs, and support climate adaptation
EXPLANATION
Harry highlights that digitalisation and AI in agriculture can dramatically increase productivity while lowering environmental impact. He stresses that AI tools already demonstrate higher yields, reduced food losses and help farmers meet sustainability and climate‑resilience goals.
EVIDENCE
He notes that AI offers “enormous opportunities to increase the productivity and sustainability of local food production” and to “improve nature conservation and to foster a sustainable climate resilience” [13-16]. He further states that AI solutions have “significantly increase[d] food productivity and reduce[d] food losses” and can support farmers with risk assessments, sustainable practices and trustworthy data sharing across the supply chain [23-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s role in reducing greenhouse-gas emissions and enhancing climate-resilient agriculture is highlighted in [S15]; precision-agriculture tools that optimise inputs and lower environmental impact are described in [S22] and [S23].
MAJOR DISCUSSION POINT
AI’s potential to improve agricultural productivity, sustainability, and resilience
AGREED WITH
Sara Rendtorff Smith, Arwin Datumaya Wahyudi Sumari, Dejan Jakovljevic, Arun Pratihast
DISAGREED WITH
Debjani Ghosh, Sara Rendtorff Smith
Argument 2
Collaborative partnerships, knowledge sharing, and bilateral cooperation to spread AI benefits
EXPLANATION
Harry emphasizes the importance of international cooperation, citing the Netherlands’ work with the OECD, FAO, Indonesia and India to accelerate AI adoption in agriculture. He calls for concrete partnerships that share knowledge, technology and measurable results for the benefit of all humanity.
EVIDENCE
He thanks the OECD as “the go-to organization when it comes to AI governance” and mentions cooperation with FAO, Wageningen University, India and Indonesia, highlighting bilateral and multilateral collaboration to spread AI benefits [44-48]. He also thanks India for hosting the summit and reaffirms the Netherlands’ readiness to contribute through partnerships and knowledge sharing [45-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public-private partnerships and international knowledge-sharing mechanisms are advocated in [S16]; multi-stakeholder platforms for AI governance are discussed in [S29]; the OECD AI Incidents Monitor illustrates collaborative oversight in [S30].
MAJOR DISCUSSION POINT
Governance, policy, and inclusive frameworks for responsible AI deployment
AGREED WITH
Sara Rendtorff Smith, Arwin Datumaya Wahyudi Sumari, Debjani Ghosh
Argument 3
Public‑private partnerships, capacity building, and co‑creation of tailored solutions
EXPLANATION
Harry argues that scaling AI in agriculture requires joint public‑private efforts, capacity‑building programmes and solutions customised to local challenges. He stresses that inclusive AI ecosystems and co‑working between governments, businesses and academia are essential.
EVIDENCE
He describes the Dutch ambition to enhance food security through AI, noting the need for “knowledge sharing, co-operation and collaboration, creation and capacity building so that AI solutions are locally relevant, inclusive and accessible to farmers” [30-38]. He adds that the Netherlands facilitates Dutch ICT agribusinesses to collaborate with startups in low- and middle-income countries and commits to work together on tailored solutions [39-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for PPPs and capacity-building programmes is emphasized in [S16] and [S20]; policy-toolkit support for co-creating solutions is provided by the OECD interactive toolkit described in [S28].
MAJOR DISCUSSION POINT
Practical steps and collaborative mechanisms to scale AI responsibly
AGREED WITH
Arwin Datumaya Wahyudi Sumari, Debjani Ghosh, Sara Rendtorff Smith
DISAGREED WITH
Sara Rendtorff Smith
Sara Rendtorff Smith
4 arguments · 94 words per minute · 2039 words · 1289 seconds
Argument 1
AI optimizes resource use, cuts pesticide/herbicide use, and enhances traceability
EXPLANATION
Sara outlines how AI‑enabled precision tools can reduce the amount of agro‑chemicals applied and improve supply‑chain transparency. She points to real‑world deployments that achieve substantial input savings while maintaining yields.
EVIDENCE
She cites AI-enabled precision spraying that “reduced pesticide use by up to 30 percent” without compromising yield, and computer-vision systems that “can cut herbicide used by up to half” by targeting weeds only [60]. She also notes that AI-enabled traceability, market transparency and smart logistics can “reduce losses, improve compliance, and strengthen food safety systems” [66-67].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven remote sensing, drones and predictive analytics that optimise water, fertilizer and pesticide applications are outlined in [S22]; similar optimisation of irrigation and pesticide use is noted in [S23]; mobile apps delivering traceability and real-time advice are cited in [S31].
MAJOR DISCUSSION POINT
AI’s potential to improve agricultural productivity, sustainability, and resilience
AGREED WITH
Harry Verweij, Arwin Datumaya Wahyudi Sumari, Dejan Jakovljevic, Arun Pratihast
Argument 2
Uneven adoption, digital divide, high costs, limited skills, and trust issues
EXPLANATION
Sara draws attention to the stark disparities in digital tool usage among farmers worldwide, highlighting structural barriers that hinder AI uptake. She stresses that without addressing cost, skills and trust, AI could widen existing inequalities.
EVIDENCE
She compares adoption rates, noting that “96 % of farmers are using digital tools” in Australia versus only “12 %” in Chile, illustrating a digital divide [70-71]. She then lists “high cost, limited digital skills, and lack of trust” as structural barriers that slow AI uptake [72-76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The unequal pace of digital transformation, with one-third of the world left behind, is reported in [S14]; high internet costs limiting access are documented in [S24] and [S25]; trust and skill gaps are implicit in the discussion of digital exclusion in [S14].
MAJOR DISCUSSION POINT
Barriers and challenges to AI adoption in agriculture
AGREED WITH
Dejan Jakovljevic, Harry Verweij
DISAGREED WITH
Harry Verweij
Argument 3
Need for transparent, explainable AI, interoperable data governance, and policy toolkits
EXPLANATION
Sara argues that trustworthy AI requires transparency, explainability and coherent data‑governance frameworks. She promotes the OECD’s AI policy toolkit as a practical resource for countries to develop responsible AI policies.
EVIDENCE
She points out that “farmers and regulators need transparency in how AI systems make their decisions” and that fragmented data-governance frameworks create complexity, calling for greater interoperability [73-78]. She then describes the OECD AI policy toolkit, which provides context-specific guidance and covers over 2,000 policies across 80 jurisdictions, accessible at oecd.ai [80-87].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Transparency, explainability and responsible data stewardship for farmer confidence are stressed in [S3]; the UN Security Council’s call for explainable AI appears in [S26]; the OECD AI policy toolkit providing guidance is described in [S28] and the AI Incidents Monitor in [S30].
MAJOR DISCUSSION POINT
Governance, policy, and inclusive frameworks for responsible AI deployment
AGREED WITH
Harry Verweij, Arwin Datumaya Wahyudi Sumari, Debjani Ghosh
Argument 4
International knowledge‑sharing platforms and OECD AI policy toolkit to guide implementation
EXPLANATION
Sara highlights the role of global knowledge‑sharing mechanisms, such as the OECD’s AI policy toolkit and other platforms, in supporting countries to adopt AI responsibly. She stresses that these resources help align policies, share best practices and monitor impact.
EVIDENCE
She explains that the toolkit “will provide practical, context-specific guidance to countries” and that it builds on the OECD policy navigator covering more than 2,000 policies [80-87]. She also mentions the broader work on digital governance in agriculture within GPAI and the global AI impact comments that share concrete use cases with scaling potential [88-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The OECD AI policy toolkit and its interactive database are detailed in [S28]; the AI Incidents Monitor further supports knowledge-sharing in [S30]; UN-sponsored multi-stakeholder platforms for AI governance are highlighted in [S29] and [S16].
MAJOR DISCUSSION POINT
Practical steps and collaborative mechanisms to scale AI responsibly
AGREED WITH
Harry Verweij, Arwin Datumaya Wahyudi Sumari, Debjani Ghosh
Dejan Jakovljevic
3 arguments · 140 words per minute · 738 words · 316 seconds
Argument 1
AI enables anticipatory actions for shocks and disaster response
EXPLANATION
Dejan stresses that AI can help agricultural systems anticipate and prepare for shocks such as natural disasters or conflicts. By providing early‑warning tools and decision‑support platforms, AI enables proactive rather than reactive responses.
EVIDENCE
He defines “anticipation” as the key word, describing how AI can help “anticipate the shocks to the agri-food systems” and support “anticipatory actions” through data-driven decision-making tools, situation rooms and rapid response mechanisms [127-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of “anticipation” for agri-food shocks is foregrounded in [S3]; AI’s contribution to climate-resilient systems and early-warning is discussed in [S15]; FAO’s focus on better production and nutrition through data-driven tools is mentioned in [S9].
MAJOR DISCUSSION POINT
AI’s potential to improve agricultural productivity, sustainability, and resilience
AGREED WITH
Sara Rendtorff Smith, Harry Verweij, Arwin Datumaya Wahyudi Sumari
Argument 2
Digital exclusion of farmers; need for low‑tech access such as phone‑based advice
EXPLANATION
Dejan points out that many farmers lack digital connectivity, making them vulnerable to exclusion from AI‑driven services. He highlights phone‑based advisory tools as a low‑entry solution that can reach farmers without smartphones.
EVIDENCE
He notes that “it used to be possible to exist outside of the digital ecosystem” but now “if a farmer or communities are outside of the digital ecosystem, they suddenly are outside of any ecosystem” [112-117]. He then describes a recent Indian government tool that allows farmers to receive advice via a phone call in multiple languages, lowering the entry barrier to AI services [120-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A phone-call advisory service for farmers in India is described in [S7] and reiterated in [S12]; the broader digital divide affecting one-third of the global population is noted in [S14].
MAJOR DISCUSSION POINT
Barriers and challenges to AI adoption in agriculture
AGREED WITH
Sara Rendtorff Smith, Harry Verweij
DISAGREED WITH
Arwin Datumaya Wahyudi Sumari, Harry Verweij
Argument 3
Phone‑based advisory services as low‑entry AI tools for inclusive access
EXPLANATION
Dejan reiterates the value of phone‑based advisory services as an inclusive AI application that can reach farmers lacking smartphones or internet access. Such services can deliver multilingual guidance on crops, pests and other agronomic issues.
EVIDENCE
He cites the same Indian government initiative where “farmers can, with a phone call, … get advisory in the area of agriculture” covering topics from shrimp cultivation to pest diseases, and notes that the service works in many languages [120-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Indian phone-based multilingual advisory platform exemplifies low-entry AI and is referenced in [S7] and [S12]; the need for inclusive low-tech solutions is reinforced by the digital-exclusion discussion in [S14].
MAJOR DISCUSSION POINT
Practical steps and collaborative mechanisms to scale AI responsibly
Arwin Datumaya Wahyudi Sumari
3 arguments · 109 words per minute · 1277 words · 698 seconds
Argument 1
AI predicts soil conditions, optimal crops, fertilizer, water needs, weather, and logistics
EXPLANATION
Arwin outlines a suite of AI applications for Indonesia, ranging from soil‑nutrient prediction to crop‑selection, fertilizer optimisation, intelligent farming, weather forecasting and logistics routing. These tools aim to increase yields, reduce crop failures and cut transport costs across the archipelago.
EVIDENCE
He describes AI use for “prediction of soil condition and nutrition” to guide new rice fields, for “prediction of the most appropriate food crops” per island, for “optimising fertilizer content and water volume” [165-172], for “intelligent farming” that optimises seed planting and harvest processes [174-181], for “weather dynamics” prediction to avoid crop failures [182-184], and for “optimising logistic transportation routes” to reduce operational costs between islands [187-193].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled remote sensing and predictive analytics for soil, crop selection and water optimisation are covered in [S22]; sensor-driven optimisation of irrigation, pesticide and planting schedules is detailed in [S23]; weather forecasting and logistics routing tools are mentioned in [S31].
MAJOR DISCUSSION POINT
AI’s potential to improve agricultural productivity, sustainability, and resilience
AGREED WITH
Harry Verweij, Sara Rendtorff Smith, Dejan Jakovljevic, Arun Pratihast
DISAGREED WITH
Dejan Jakovljevic, Harry Verweij
Argument 2
Infrastructure gaps, uneven AI talent distribution, and data scarcity across regions
EXPLANATION
Arwin highlights Indonesia’s geographic fragmentation and uneven digital infrastructure, which limit AI deployment. He points to disparities in telecom coverage, regional time‑zone differences and a shortage of AI talent as major constraints.
EVIDENCE
He notes that Indonesia consists of “17,000 islands” with only “36 % of land, 64 % of water” and that each region has different time zones, creating challenges for coordination [150-162]. He also mentions the “problem with unequal distribution of AI talent” and the lack of democratic AI infrastructure such as telecommunications [159-164].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
High data-cost barriers and digital-infrastructure gaps in low-income regions are highlighted in [S24] and [S25]; the unequal pace of digital transformation worldwide is reported in [S14].
MAJOR DISCUSSION POINT
Barriers and challenges to AI adoption in agriculture
Argument 3
Indonesia’s AI roadmap with seven pillars (regulation, ethics, investment, data, innovation, talent, use cases) and a multi‑stakeholder “helix” approach
EXPLANATION
Arwin presents Indonesia’s comprehensive AI strategy, structured around seven pillars and a collaborative “helix” model that brings together government, industry, academia, media and civil society. The roadmap seeks to create a trustworthy, inclusive AI ecosystem for agriculture and other sectors.
EVIDENCE
He explains that the roadmap includes pillars for “AI regulation, AI ethics, investment, AI data, AI innovation, AI talent development, and AI use case” and that it follows a “helix” approach involving multiple stakeholders, with the Ministry of Digital Information and Communication coordinating voluntary contributions [196-203].
MAJOR DISCUSSION POINT
Governance, policy, and inclusive frameworks for responsible AI deployment
AGREED WITH
Sara Rendtorff Smith, Harry Verweij, Debjani Ghosh
Arun Pratihast
2 arguments · 152 words per minute · 690 words · 271 seconds
Argument 1
AI‑driven global crop mapping and farmer‑friendly chatbots improve advisory services
EXPLANATION
Arun describes two initiatives: a global crop‑mapping effort using satellite data, and multilingual chatbots that deliver agronomic advice to smallholders. Both aim to overcome data gaps and provide actionable information in low‑tech settings.
EVIDENCE
He recounts the “World Cereal Project” launched with the European Space Agency to map global crop areas, noting challenges due to countries not sharing data [299-300]. He also details a chatbot built in local languages that uses computer vision to diagnose diseases and give advice to cocoa farmers, demonstrating a farmer-centric AI service [300-304].
MAJOR DISCUSSION POINT
AI’s potential to improve agricultural productivity, sustainability, and resilience
AGREED WITH
Harry Verweij, Sara Rendtorff Smith, Arwin Datumaya Wahyudi Sumari, Dejan Jakovljevic
Argument 2
Data scarcity, lack of trust, and scalability problems hinder impact
EXPLANATION
Arun identifies three core barriers to effective AI in agriculture: insufficient and non‑shared data, low trust from farmers in AI recommendations, and difficulties scaling solutions from pilot to widespread use.
EVIDENCE
He lists “data scarcity” and the lack of shared data as a major issue, followed by “trust” problems where farmers do not follow AI advice, and finally “scalability” challenges where technical speed does not translate into broader impact [278-291].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of trustworthy, transparent AI and interoperable data governance for farmer confidence is discussed in [S3]; trust and explainability concerns are reiterated in [S26] and [S27]; digital-divide challenges that affect scalability are noted in [S24].
MAJOR DISCUSSION POINT
Barriers and challenges to AI adoption in agriculture
Debjani Ghosh
2 arguments · 156 words per minute · 887 words · 339 seconds
Argument 1
Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization
EXPLANATION
Debjani stresses that AI projects must start with a well‑defined problem and coordinated industry involvement to move beyond pilots. She proposes sector‑specific Centers of Excellence (CoEs) to focus resources on high‑impact challenges such as cold‑chain waste.
EVIDENCE
She argues that “the biggest problem today is we are not taking the time to think through it” and that industry needs a clear problem statement and a route to market [213-258]. She specifically suggests a CoE for the cold-chain problem to ensure coordinated solutions across the country [252-258].
MAJOR DISCUSSION POINT
Governance, policy, and inclusive frameworks for responsible AI deployment
AGREED WITH
Harry Verweij, Arwin Datumaya Wahyudi Sumari, Sara Rendtorff Smith
DISAGREED WITH
Arwin Datumaya Wahyudi Sumari
Argument 2
Establishing sector‑specific centers of excellence to tackle challenges like cold‑chain waste
EXPLANATION
Debjani proposes creating dedicated CoEs that target particular agricultural bottlenecks, using the cold‑chain waste issue as an example. Such centres would bring together stakeholders to develop, test and commercialise AI solutions at scale.
EVIDENCE
She outlines the concept of a CoE focused on solving the cold-chain problem, asking how to ensure that “climate resilient crops” and “cold chain” issues are addressed through coordinated industry collaboration [252-258].
MAJOR DISCUSSION POINT
Practical steps and collaborative mechanisms to scale AI responsibly
Speaker 5
1 argument · 9 words per minute · 4 words · 26 seconds
Argument 1
Closing gratitude and acknowledgment of participants
EXPLANATION
The final speaker thanks the audience and participants for their contributions, signalling the end of the session.
EVIDENCE
He simply says “Thank you. Thank you.” at the close of the meeting [318-319].
MAJOR DISCUSSION POINT
Concluding remarks
Agreements
Agreement Points
AI can significantly increase agricultural productivity, reduce inputs, and support climate adaptation and resilience
Speakers: Harry Verweij, Sara Rendtorff Smith, Arwin Datumaya Wahyudi Sumari, Dejan Jakovljevic, Arun Pratihast
AI can boost yields, reduce inputs, and support climate adaptation
AI optimizes resource use, cuts pesticide/herbicide use, and enhances traceability
AI predicts soil conditions, optimal crops, fertilizer, water needs, weather, and logistics
AI enables anticipatory actions for shocks and disaster response
AI‑driven global crop mapping and farmer‑friendly chatbots improve advisory services
All speakers highlighted that artificial intelligence offers concrete tools (precision spraying, soil and weather prediction, early-warning systems, and global crop mapping) that can raise yields, lower the use of water, fertilizer and pesticides, and make food systems more climate-resilient [13-16][60-61][165-172][127-136][299-304].
POLICY CONTEXT (KNOWLEDGE BASE)
National initiatives such as Maharashtra’s AI-driven agriculture program illustrate policy support for productivity and climate resilience, while projects in Indonesia and broader AI for resilient food systems underscore the strategic emphasis on climate adaptation [S42][S49][S50][S51].
Inclusive AI and the need to bridge the digital divide so smallholders and disadvantaged groups can benefit
Speakers: Sara Rendtorff Smith, Dejan Jakovljevic, Harry Verweij
Uneven adoption, digital divide, high costs, limited skills, and trust issues
Digital exclusion of farmers; need for low‑tech access such as phone‑based advice
Collaborative partnerships, knowledge sharing, and bilateral cooperation to spread AI benefits
Speakers agreed that current adoption is highly uneven, with high costs, skill gaps and trust barriers limiting uptake, and that low-tech solutions (e.g., phone-based advisory) are needed to reach farmers outside digital ecosystems; international partnerships are seen as a way to close these gaps [70-71][72-76][112-117][120-124][30-38].
POLICY CONTEXT (KNOWLEDGE BASE)
Frameworks emphasizing farmer decision-making and extension worker support highlight inclusive design, and policy discussions on digital inclusion stress the need for equitable access for at-risk groups and marginalized communities [S43][S46][S47][S55][S57].
Strong governance, transparency, and ethical frameworks are essential for trustworthy AI deployment in agriculture
Speakers: Sara Rendtorff Smith, Harry Verweij, Arwin Datumaya Wahyudi Sumari, Debjani Ghosh
Need for transparent, explainable AI, interoperable data governance, and policy toolkits
Collaborative partnerships, knowledge sharing, and bilateral cooperation to spread AI benefits
Indonesia’s AI roadmap with seven pillars (regulation, ethics, investment, data, innovation, talent, use cases) and a multi‑stakeholder “helix” approach
Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization
All highlighted the necessity of clear governance structures, explainability and ethical standards, with toolkits and policy frameworks (OECD toolkit, Dutch ecosystem, Indonesian AI roadmap) and coordinated industry efforts (centers of excellence) to ensure AI is trustworthy and inclusive [73-78][80-87][44-48][196-203][252-258].
POLICY CONTEXT (KNOWLEDGE BASE)
Parliamentary engagement, ethical AI guidelines, and WSIS data-governance recommendations call for transparent, accountable AI governance structures in agri-tech [S44][S45][S61][S64][S66].
Public‑private partnerships and multi‑stakeholder collaboration are critical to scale AI solutions and build capacity
Speakers: Harry Verweij, Arwin Datumaya Wahyudi Sumari, Debjani Ghosh, Sara Rendtorff Smith
Public‑private partnerships, capacity building, and co‑creation of tailored solutions
Indonesia’s AI roadmap with a multi‑stakeholder “helix” approach
Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization
International knowledge‑sharing platforms and OECD AI policy toolkit to guide implementation
Speakers stressed that joint public-private efforts, multi-stakeholder helix models, sector-specific centers of excellence and global knowledge-sharing platforms are needed to develop, finance and scale AI tools that are locally relevant [30-38][196-203][252-258][80-87].
AI can enable anticipatory actions and early‑warning systems to mitigate shocks to agri‑food systems
Speakers: Dejan Jakovljevic, Sara Rendtorff Smith, Harry Verweij, Arwin Datumaya Wahyudi Sumari
AI enables anticipatory actions for shocks and disaster response
AI is also revolutionizing agricultural innovation itself and supporting more efficient plant breeding … early detection of climatic and biological threats
AI solutions can enhance the efficiency and resilience of food systems by supporting farmers to respond to sustainability requirements, make risk assessments
AI predicts weather dynamics to obtain the right conditions
All noted that AI-driven early-warning, risk-assessment and weather-forecasting tools can help anticipate natural disasters, conflicts or pest outbreaks, allowing proactive responses and reducing crop failures [127-136][61][24][182-184].
POLICY CONTEXT (KNOWLEDGE BASE)
Hybrid sensor-satellite models and AI-driven crop-prediction pilots are highlighted as core components of early-warning and climate-resilience strategies in agriculture [S49][S50][S51].
Similar Viewpoints
Both emphasize that international cooperation and clear governance tools (e.g., OECD AI policy toolkit) are essential to ensure AI benefits are broadly shared and trustworthy [44-48][73-78][80-87].
Speakers: Harry Verweij, Sara Rendtorff Smith
Collaborative partnerships, knowledge sharing, and bilateral cooperation to spread AI benefits
Need for transparent, explainable AI, interoperable data governance, and policy toolkits
Both point to Indonesia’s fragmented geography and limited digital infrastructure as major barriers, calling for low‑tech, inclusive AI solutions that can work across islands with scarce talent and data [112-117][120-124][150-164].
Speakers: Dejan Jakovljevic, Arwin Datumaya Wahyudi Sumari
Digital exclusion of farmers; need for low‑tech access such as phone‑based advice
Infrastructure gaps, uneven AI talent distribution, and data scarcity across regions
Both stress that without a well‑defined problem, reliable data and trust, AI pilots cannot scale; they advocate structured mechanisms (CoEs, data sharing platforms) to overcome these barriers [213-218][252-258][278-291].
Speakers: Debjani Ghosh, Arun Pratihast
Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization
Data scarcity, lack of trust, and scalability problems hinder impact
Unexpected Consensus
Both industry‑focused and research‑focused speakers converge on the need for sector‑specific centers of excellence to translate AI pilots into scalable solutions
Speakers: Debjani Ghosh, Arun Pratihast
Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization
Data scarcity, lack of trust, and scalability problems hinder impact
While Debjani proposes new CoEs to coordinate industry efforts, Arun, a researcher, also calls for structured platforms to address data, trust and scaling issues, indicating an unexpected alignment between industry and research perspectives on institutional mechanisms needed for impact [252-258][278-291].
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions on scaling AI beyond pilots and national strategies that establish specialized centers illustrate consensus on sector-specific CoEs as implementation hubs [S52][S55][S64].
Overall Assessment

There is strong consensus that AI holds great promise for improving productivity, sustainability and resilience in agriculture, but its benefits will only be realized if inclusive governance, transparent data practices, public‑private partnerships and capacity‑building are put in place. All speakers agree on the urgency of addressing the digital divide and on the need for coordinated, multi‑stakeholder frameworks.

High consensus across technical, governance and partnership dimensions, suggesting that future policy work can build on these shared foundations to design inclusive, trustworthy AI initiatives for food systems.

Differences
Different Viewpoints
Primary focus of AI interventions in agriculture – boosting productivity and climate resilience versus reducing food waste
Speakers: Harry Verweij, Debjani Ghosh, Sara Rendtorff Smith
AI can boost yields, reduce inputs, and support climate adaptation
Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization
Uneven adoption, digital divide, high costs, limited skills, and trust issues
Harry stresses AI’s role in increasing productivity, lowering environmental impact and supporting climate adaptation [13-16][23-24]. Debjani argues that the biggest problem is food waste and that AI projects should start with a clear problem definition and sector-specific centers of excellence to address issues like cold-chain waste [238-242][252-258]. Sara highlights the risk that AI could deepen existing inequalities if adoption is uneven, pointing to the digital divide and structural barriers such as high cost and limited skills [70-76]. The speakers agree AI is valuable but disagree on whether the priority should be productivity/climate benefits or waste reduction and how to structure interventions.
Preferred level of technological sophistication for delivering AI services to farmers – high‑tech data‑driven platforms versus low‑tech phone‑based advisory services
Speakers: Dejan Jakovljevic, Arwin Datumaya Wahyudi Sumari, Harry Verweij
Digital exclusion of farmers; need for low‑tech access such as phone‑based advice
AI predicts soil conditions, optimal crops, fertilizer, water needs, weather, and logistics
Public‑private partnerships, capacity building, and co‑creation of tailored solutions
Dejan stresses that many farmers lack digital connectivity and proposes phone-call advisory services in multiple languages as a low-entry AI solution [120-124]. Arwin describes a suite of sophisticated AI applications for soil-nutrient prediction, crop selection, fertilizer and water optimisation, weather forecasting and logistics routing [165-172][174-181]. Harry promotes partnerships to develop locally relevant AI solutions but focuses on more advanced ICT agribusiness collaborations [29-30][30-38]. The disagreement lies in the appropriate technological approach for reaching smallholders.
Governance strategy for AI deployment – a comprehensive national roadmap with multi‑pillar “helix” approach versus sector‑specific Centers of Excellence
Speakers: Arwin Datumaya Wahyudi Sumari, Debjani Ghosh
Indonesia’s AI roadmap with seven pillars (regulation, ethics, investment, data, innovation, talent, use cases) and a multi‑stakeholder “helix” approach
Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization
Arwin outlines Indonesia’s AI strategy built around seven pillars and a collaborative helix model involving government, industry, academia, media and civil society [196-203]. Debjani proposes creating sector-specific Centers of Excellence, such as a CoE for cold-chain waste, to align industry and ensure commercialization pathways [252-258]. Both aim for responsible AI but differ on whether a broad national framework or focused sectoral hubs are more effective.
Impact of AI on inequality – whether AI will deepen the digital divide or can be deployed inclusively through partnerships
Speakers: Sara Rendtorff Smith, Harry Verweij
Uneven adoption, digital divide, high costs, limited skills, and trust issues
Public‑private partnerships, capacity building, and co‑creation of tailored solutions
Sara warns that AI could exacerbate existing inequalities if structural barriers like high cost, limited digital skills and lack of trust are not addressed, citing the stark contrast in digital tool usage between Australia (96 %) and Chile (12 %) [70-71][72-76]. Harry counters that inclusive AI can be achieved through strong public-private partnerships, knowledge sharing and capacity building to ensure AI benefits are broadly shared [30-38][39-43]. The speakers disagree on the net effect of AI on inequality.
Unexpected Differences
Whether AI exacerbates digital exclusion or can be a tool for inclusion
Speakers: Dejan Jakovljevic, Harry Verweij
Digital exclusion of farmers; need for low‑tech access such as phone‑based advice
Public‑private partnerships, capacity building, and co‑creation of tailored solutions
Dejan argues that AI can worsen digital exclusion, stating that farmers outside the digital ecosystem are left out and that AI makes this worse [112-117]. Harry, however, expresses confidence that inclusive AI can be achieved through partnerships and capacity building, suggesting AI will bridge rather than widen gaps [30-38][39-43]. This contrast was not anticipated given the overall consensus on AI’s benefits.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on digital exclusion highlight risks of deepening divides alongside evidence that targeted policies and inclusive design can harness AI for broader societal benefit [S46][S47][S48][S57].
Overall Assessment

The participants share a common belief in AI’s potential to improve agricultural productivity, resilience and sustainability, but they diverge on priorities (productivity vs waste reduction), technological approaches (high‑tech platforms vs low‑tech phone services), governance models (national roadmap vs sector‑specific centers of excellence), and the net impact on inequality. These disagreements are moderate and revolve around implementation strategies rather than the value of AI itself.

Moderate disagreement focused on pathways and governance; implications include the need for coordinated policy frameworks that accommodate both high‑tech and low‑tech solutions, ensure inclusive governance structures, and address data, trust and capacity gaps to prevent widening digital divides.

Partial Agreements
All speakers agree that AI has a role in building more resilient and inclusive food systems, but differ on the primary pathways—whether through high‑tech precision tools, low‑tech advisory services, or comprehensive governance frameworks [13-16][23-24][55-57][60-66][70-71][112-117][196-203].
Speakers: Harry Verweij, Sara Rendtorff Smith, Dejan Jakovljevic, Arwin Datumaya Wahyudi Sumari
AI can boost yields, reduce inputs, and support climate adaptation
AI optimizes resource use, cuts pesticide/herbicide use, and enhances traceability
Digital exclusion of farmers; need for low‑tech access such as phone‑based advice
Indonesia’s AI roadmap with seven pillars (regulation, ethics, investment, data, innovation, talent, use cases) and a multi‑stakeholder helix approach
All three emphasize the critical importance of data and trust for AI adoption. Sara promotes an OECD policy toolkit for transparent AI and data governance [80-87]; Arun highlights data scarcity and trust as core barriers [278-283]; Dejan points out that lack of digital access excludes farmers from AI services [112-117]. They concur on the need for better data governance but propose different solutions.
Speakers: Sara Rendtorff Smith, Arun Pratihast, Dejan Jakovljevic
Need for transparent, explainable AI, interoperable data governance, and policy toolkits
Data scarcity, lack of trust, and scalability problems hinder impact
Digital exclusion of farmers; need for low‑tech access such as phone‑based advice
Takeaways
Key takeaways
AI can significantly increase agricultural productivity, reduce inputs (water, fertilizer, pesticides), and enhance climate resilience and food‑system traceability.
Real‑world AI pilots (precision spraying, smart irrigation, early‑warning services) have demonstrated measurable gains such as up to 90% water savings and 30% reduction in pesticide use.
Anticipatory AI tools can help predict shocks (weather, pests, conflicts) and support rapid, pre‑emptive responses in agri‑food systems.
Adoption of AI is highly uneven across regions; digital divide, high costs, limited skills, and trust deficits hinder scaling, especially for smallholders and remote communities.
Data scarcity, lack of interoperable governance, and opaque “black‑box” models undermine trust and limit the usefulness of AI solutions for farmers.
Inclusive, multi‑stakeholder governance (the “helix” model) and clear, sector‑specific regulation are essential to ensure AI is transparent, explainable, and equitable.
Public‑private partnerships, capacity‑building, and knowledge‑sharing platforms (e.g., OECD AI policy toolkit, international working groups) are critical to scale responsible AI deployment.
Tailored, low‑tech entry points such as phone‑based advisory services can broaden access for farmers lacking smartphones or internet connectivity.
Resolutions and action items
Commitment by the Netherlands to forge concrete partnerships, share knowledge and technology, and support capacity‑building for AI in low‑ and middle‑income countries.
OECD will continue to develop and promote its AI policy toolkit and digital‑governance guidance for agriculture, encouraging countries to contribute their policies.
Agreement to pursue sector‑specific Centers of Excellence (e.g., for cold‑chain waste reduction) to align industry efforts with clearly defined problem statements.
Indonesia will advance its national AI roadmap (seven‑pillar framework) and promote a multi‑stakeholder helix approach to ensure inclusive and resilient AI deployment.
Unresolved issues
How to effectively close the digital divide so that smallholder farmers in remote or low‑income regions can reliably access AI tools.
Mechanisms for ensuring trustworthy data sharing while protecting farmer ownership and privacy.
Specific financing models and incentives needed to make AI solutions affordable for SMEs and small farms.
Standardized methods for evaluating and scaling AI pilots across diverse agro‑ecological contexts.
Details on how to operationalize interoperable data governance frameworks across borders and sectors.
Suggested compromises
Balancing horizontal AI governance (overall principles, ethics, transparency) with sector‑specific regulations tailored to agriculture’s unique needs.
Adopting a multi‑helix (government, industry, academia, media, civil society) collaboration model to distribute responsibilities and avoid any single stakeholder dominating AI development.
Combining high‑tech AI innovations with low‑tech delivery channels (e.g., phone‑based advisory) to ensure inclusivity while leveraging advanced capabilities.
Thought Provoking Comments
AI can be a powerful tool to increase productivity, reduce environmental impact, and strengthen the resilience of food systems, while also supporting farmers to meet sustainability requirements and provide trustworthy data across the supply chain.
Sets a broad, optimistic framing for AI in agriculture, linking technology directly to food security, climate resilience and inclusive growth, and introduces the idea of AI‑enabled data sharing as a public good.
Established the thematic baseline for the panel, prompting other speakers to position their national or organisational experiences against this vision and to discuss concrete ways to translate the promise into practice.
Speaker: Harry Verweij (Ambassador, Kingdom of the Netherlands)
AI‑enabled precision spraying has reduced pesticide use by up to 30 % without compromising yield, and computer‑vision‑based weed detection can cut herbicide use by half. Yet adoption is highly uneven – 96 % of Australian farmers use digital tools versus just 12 % in Chile – highlighting a digital divide that could deepen existing inequalities.
Combines hard evidence of AI benefits with a stark illustration of the global digital gap, moving the conversation from possibilities to urgent equity concerns.
Shifted the tone from optimism to caution, prompting panelists to address how to bridge the divide (e.g., Dejan’s anticipatory tools, Debjani’s call for problem‑driven pilots) and to consider policy mechanisms such as the OECD AI policy toolkit.
Speaker: Sara Rendtorff Smith (OECD)
The key word for resilience is *anticipation* – we need AI‑driven anticipatory tools, decision‑making rooms and early‑warning services (like the phone‑call advisory system launched by the Indian government) so that we can act before shocks hit the agri‑food system.
Introduces ‘anticipation’ as a strategic lens, reframing resilience from reactive to proactive and highlighting a concrete, low‑tech AI service that reaches farmers without smartphones.
Created a turning point toward discussing pre‑emptive governance and service design, influencing subsequent speakers (e.g., Sumari’s focus on early‑warning and predictive models, Ghosh’s emphasis on targeting specific problems such as waste).
Speaker: Dejan Jakovljevic (FAO)
Indonesia’s AI roadmap is built on seven pillars – regulation, ethics, financing, data, innovation, talent development and use‑cases – and follows a ‘quad‑helix’ model that brings government, industry, academia, media and communities together to ensure no one is left behind.
Provides a concrete, multi‑dimensional governance framework that ties horizontal AI policy to sector‑specific needs, and stresses inclusivity, transparency and ecosystem building in a highly fragmented archipelagic context.
Expanded the discussion from high‑level benefits to the practical architecture needed for implementation, prompting other panelists to reference similar multi‑stakeholder approaches (e.g., OECD’s policy toolkit, Ghosh’s COE idea).
Speaker: Arwin Datumaya Wahyudi Sumari (Professor, Indonesia)
We often ‘throw AI at every problem’ without first defining the exact problem, leading to duplicated pilots (e.g., farmer advisory apps) that don’t scale. A more effective approach is to focus on the biggest leverage point – today I see food waste – and create sector‑specific Centres of Excellence that align industry, data, and commercialization pathways.
Challenges the prevailing hype‑driven mindset, redirects attention to problem‑driven AI, and proposes a concrete institutional mechanism (COE) to avoid fragmentation and ensure impact.
Served as a pivotal critique that reframed the conversation around prioritisation and coordination, influencing later remarks about trust, scalability, and the need for focused pilots (e.g., Arun’s discussion of data scarcity and trust).
Speaker: Debjani Ghosh (NITI Frontier Tech Hub, India)
Three persistent barriers prevent AI from reaching smallholders: data scarcity and poor sharing, lack of trust in AI recommendations, and scalability that ignores low‑tech realities. Successful projects (World Cereal mapping, language‑specific chatbots for cocoa farmers) show that solutions must be built for the grassroots environment.
Synthesises systemic challenges into three clear categories and backs them with concrete examples, highlighting the gap between high‑tech models and field‑level applicability.
Deepened the analysis by linking technical obstacles to the earlier themes of equity and anticipatory action, reinforcing the need for data ecosystems and trust mechanisms discussed by the OECD and Indonesia’s roadmap.
Speaker: Arun Pratihast (Senior Researcher, Wageningen University)
Overall Assessment

The discussion began with a broad, optimistic framing of AI’s potential, but key interventions – especially Dejan’s emphasis on ‘anticipation’, Debjani’s critique of indiscriminate AI deployment, and Arun’s articulation of data, trust and scalability barriers – redirected the conversation toward concrete, problem‑driven strategies and the governance structures needed to make AI inclusive. These turning points introduced new analytical lenses (anticipatory governance, sector‑specific COEs, multi‑helix roadmaps) and prompted participants to move from abstract benefits to actionable pathways, highlighting both the promise and the systemic challenges of deploying AI in global food systems.

Follow-up Questions
How can anticipatory AI tools and decision‑support systems be developed to predict and respond to shocks (natural disasters, conflicts, etc.) in agri‑food systems?
He emphasized the need for AI‑enabled anticipatory actions, decision‑making tools, and situation rooms to handle shocks, indicating a gap in current capabilities.
Speaker: Dejan Jakovljevic
What strategies can effectively bridge the digital divide and ensure inclusive access to AI for farmers in regions with low adoption rates (e.g., Chile vs. Australia)?
Sara highlighted uneven AI adoption across countries; Dejan stressed inclusion, pointing to a risk of deepening inequalities without targeted interventions.
Speaker: Sara Rendtorff Smith; Dejan Jakovljevic
How can fragmented data‑governance frameworks be harmonized to achieve greater interoperability for AI applications across agricultural supply chains?
She noted that fragmented governance creates complexity, suggesting a need for research on interoperable standards and policies.
Speaker: Sara Rendtorff Smith
What public‑private partnership models best scale responsible AI deployment while preventing an AI divide among emerging economies and smallholder farmers?
She discussed the importance of alignment, commercialization routes, and industry collaboration, indicating a need to define effective partnership structures.
Speaker: Debjani Ghosh
Should sector‑specific Centers of Excellence (e.g., for cold‑chain logistics or climate‑resilient crops) be established, and how would they operate to coordinate industry and research efforts?
She proposed COEs focused on concrete problems, highlighting a gap in coordinated innovation hubs.
Speaker: Debjani Ghosh
What mechanisms can address data scarcity and improve data sharing infrastructure that is farmer‑centric and supports AI model development?
He identified data scarcity and lack of shared infrastructure as major barriers to effective AI solutions for smallholders.
Speaker: Dr. Arun Pratihast
How can trust be built among smallholder farmers regarding AI advisory services, including issues of model explainability and data ownership?
He pointed out mistrust and mismatched expectations as reasons advisory tools fail, indicating a need for trust‑building research.
Speaker: Dr. Arun Pratihast
What approaches ensure that AI solutions developed at scale are truly scalable and adaptable to low‑tech, grassroots farming environments?
He highlighted scalability as a challenge, noting that technical scale does not guarantee field‑level applicability.
Speaker: Dr. Arun Pratihast
How can global, high‑resolution crop‑mapping initiatives (e.g., the World Cereal Project) be improved through better data contributions from major producing countries?
He explained that missing data from countries like India and China limits map accuracy, suggesting a need for research on data‑sharing incentives and protocols.
Speaker: Dr. Arun Pratihast
What design principles enable AI‑enabled low‑tech advisory tools (e.g., multilingual chatbots) that work effectively for smallholder farmers?
He gave examples of successful chatbot solutions for cocoa farmers, indicating a research gap in replicating such tools across crops and regions.
Speaker: Dr. Arun Pratihast
What data‑protection frameworks are needed to safeguard smallholder farmers’ data while allowing its use in AI‑driven supply‑chain platforms?
He mentioned protecting farmer data as a priority for AI solutions, highlighting a need for privacy‑focused policy research.
Speaker: Harry Verweij
What financing and investment models can sustainably support AI ecosystems in agriculture, especially for low‑ and middle‑income countries?
Both referenced the importance of financing for AI ecosystems, indicating a need to explore viable funding mechanisms.
Speaker: Arwin Sumari; Harry Verweij
How can horizontal AI governance be balanced with sector‑specific regulations to promote trustworthy AI in agriculture?
He described Indonesia’s approach of combining broad AI policies with sectoral rules, suggesting a need to study best‑practice frameworks.
Speaker: Arwin Sumari
What robust impact‑measurement methodologies can assess AI interventions (e.g., pesticide reduction, yield gains) across diverse agricultural contexts?
She cited promising evidence but implied the need for systematic evaluation metrics to validate AI benefits.
Speaker: Sara Rendtorff Smith

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

The Global Power Shift: India’s Rise in AI & Semiconductors

Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel examined how AI has shifted from a niche technology to a catalyst for economic transformation, emphasizing that genuine AI leadership demands the integration of silicon, software, systems and policy [21-34]. Jaya highlighted India’s strong engineering talent, silicon-design capabilities and rapidly growing ecosystem of system and infrastructure partners, while stressing that collaboration across nations and organisations is essential [37-42]. She framed the discussion around three strategic questions: building the intellectual foundation, deepening manufacturing and supply-chain resilience, and establishing a credible sovereign AI capability [45-48].


Vivek noted that India’s AI Mission, backed by over ₹10,000 crore, tax holidays for data-centers, and platforms like AI Coach, is creating credibility through large-scale deployments and the gradual development of domestic IP in AI and semiconductors [50-57]. He added that the country’s robust VLSI design ecosystem must evolve from relying on foreign IP to owning its own, a key step toward a trustworthy deep-tech sector [56-57]. Rahul observed that domestic demand for AI-enabled products is surging and that both government programmes (e.g., a ₹1 lakh crore AI fund) and private capital exceeding $100 billion are beginning to flow into data-center and manufacturing projects, though the breadth of investment remains uneven [66-79].


Thomas argued that India should move from merely seeking compute capacity to building sovereign capability, leveraging its unique data-residency needs and a large pool of startups to develop home-grown IP and niche supply-chain components such as co-packaged optics [89-104]. He suggested that India does not need to produce leading-edge 2 nm chips but can add value in adjacent technologies and AI-infrastructure deployment, positioning itself as a resilient partner in global supply chains [101-108]. On policy, Thomas advocated public-private partnerships like the U.S. “Genesis” model, where government de-risks large-scale research while avoiding direct subsidies, to align funding with grand-challenge problems and accelerate innovation [116-128][216-226].


Vivek stressed the need for strategic autonomy (clearly defining which technologies to indigenize and which to keep open) to balance national security with global collaboration [142-148]. He also pointed to expanding skilling programmes such as NASSCOM’s FutureSkills Prime and a shift from rote learning to creative problem-solving, arguing that massive reskilling is required to prepare the next generation for AI-driven jobs [178-188]. Rahul described India’s manufacturing path as a “vertical-stack” model where firms integrate design, fabrication and system integration, encouraging experimentation across many domains despite limited resources [153-164]. Thomas concluded that sustainability must be embedded in product design, noting AMD’s commitment to flattening the energy curve while acknowledging the need for humility and continuous correction [278-285].


The moderator wrapped up by stating that momentum alone is insufficient; coordinated sequencing, disciplined capital, institutional alignment and infrastructure depth are essential for India to realise its AI and semiconductor ambitions [198-202][255-262]. Overall, the discussion underscored that India’s AI and semiconductor future hinges on collaborative public-private effort, strategic focus on sovereign capabilities, robust talent development and sustainable execution [21-34].


Keypoints

Major discussion points


AI leadership requires a holistic, cross-disciplinary approach – true AI dominance can only be achieved when silicon, software, systems and policy are aligned; no single element is sufficient and broad collaboration is essential. [21-34]


Credibility in deep-tech hinges on large-scale, systematic investment and a balanced policy framework – India’s AI mission, data-center tax holidays, and semiconductor design strengths must be scaled up, while policy must protect strategic autonomy yet remain open to global collaboration. [50-59][131-148]


Building manufacturing depth and supply-chain resilience calls for sustained capital and focused niche capabilities – rather than trying to match the most advanced fabs, India should target areas such as optics, co-package interconnects and packaging, leveraging public-private risk-sharing to grow a robust ecosystem. [66-80][92-110][216-236]


Talent development and skilling are critical for the next-generation AI/semiconductor workforce – education must move from rote memorisation to AI-augmented, creative problem-solving, supported by programmes like FutureSkills Prime and extensive startup incubators. [178-188][255-259]


Public-private partnership models (e.g., the U.S. “Genesis” project) offer a template for India – government can de-risk strategic initiatives, fund grand-challenge research, and align academia, national labs and industry without directly subsidising private ventures. [116-128][225-230]


Overall purpose / goal of the discussion


The panel was convened to assess India’s current position and future roadmap in artificial intelligence and semiconductor technologies, identify the strategic gaps (intellectual foundation, manufacturing depth, sovereign capability), and propose coordinated actions across policy, industry, academia and capital markets that will enable India to become a credible, self-reliant AI power by the 2030 horizon.


Tone of the discussion


The conversation began with an enthusiastic, forward-looking tone, emphasizing the transformative potential of AI and India’s “poised” status ([21-34]). As the dialogue progressed, speakers adopted a more analytical and realistic tone, acknowledging existing shortcomings (limited IP, capital constraints, supply-chain fragility) and the need for disciplined execution ([50-59], [66-80], [92-110]). Toward the end, the tone shifted to a constructive, solution-oriented stance, highlighting concrete programmes, public-private partnership models, and a call to action for talent and policy makers, ending on an optimistic, motivational note about the nation’s collective journey ([178-188], [255-259]).


Speakers

Rahul Garg


Role/Title: Founder and CEO of Moglix (Mr. Rahul Garg)


Areas of Expertise: Industrial supply-chain platforms, manufacturing, industrial finance, AI infrastructure scaling


Jaya Jagadish


Role/Title: Session Moderator; veteran semiconductor industry executive with three decades of design-engineering experience (Jaya Jagadish)


Areas of Expertise: Semiconductors, AI leadership, technology strategy


Thomas Zacharia


Role/Title: Senior Vice President for Strategic Technical Partnerships and Public Policy, AMD, Inc.; former director at Oak Ridge National Laboratory (Dr. Thomas Zacharia)


Areas of Expertise: Exascale supercomputing, AI systems, semiconductor policy, public-private partnerships


Vivek Kumar Singh


Role/Title: Professor and Senior Advisor on Science and Technology, NITI Aayog (Professor Vivek Kumar Singh)


Areas of Expertise: National science & technology policy, AI strategy, semiconductor ecosystem, biomanufacturing, innovation governance


Moderator


Role/Title: Session Moderator (Moderator)


Areas of Expertise: Session facilitation


Additional speakers:


Pooja – mentioned briefly as someone who could also join the closing remarks; no specific role or expertise identified.


Subhash Suresh – referenced as former president of the U.S. National Academy of Engineering; expertise in engineering leadership and grand challenges.


Ray Kurzweil – quoted regarding longevity and AI; known as futurist and inventor, expertise in AI, futurism, and health technologies.


Medi CEO – referenced in discussion about Indian startups; name not provided, role is Chief Executive Officer of “Medi”.


Vivek Murthy – appears as a transcription error; likely refers to Vivek Kumar Singh already listed.


External source citations:


Rahul Garg – [S1]


Jaya Jagadish – [S3]


Thomas Zacharia – [S4][S5]


Vivek Kumar Singh – [S6][S7]


Moderator – [S8][S9][S10]


Full session report
Comprehensive analysis and detailed insights

The moderator opened the session with a brief overview of today’s computing stack (CPUs, GPUs, SoCs and AI engines) that underpins modern systems, and introduced the three panelists: Dr Thomas Zacharia, AMD; Prof. Vivek Kumar Singh, Senior Advisor, NITI Aayog; and Mr Rahul Garg, founder-CEO of Moglix [1-15]. After welcoming the audience, the moderator announced the start of the discussion [16-17].


Jaya Jagadish set the thematic tone, observing that artificial intelligence has moved from a niche technology to a catalyst reshaping entire economies [21-25]. She argued that genuine AI leadership requires synchronising silicon, software, systems and policy: “no one aspect can really get us there” [32-34]. Emphasising India’s readiness, she highlighted the country’s engineering talent, strong silicon-design base and a rapidly expanding ecosystem of system-level partners and manufacturers [37-39]. She then framed the panel’s inquiry around three strategic pillars: building the intellectual foundation, deepening manufacturing and supply-chain resilience, and establishing a credible sovereign AI capability [45-48].


Talent development – Prof. Vivek Kumar Singh highlighted the shift from rote, memory-based learning to AI-augmented, creative problem-solving. He cited the availability of free generative-AI tools, the NASSCOM FutureSkills Prime platform and widespread university incubators that make this “the best time to be a student” [178-188].


When asked how the next generation should be prepared for the AI-driven future, Jaya directed the question to Prof. Singh [170-172].


Manufacturing and capital – Mr Garg shifted the focus to post-COVID supply-chain shocks that have heightened political will to localise production, noting the government’s ₹1 lakh crore AI fund and private-sector commitments exceeding $100 billion for data-centres and related infrastructure [66-73]. He acknowledged that capital remains unevenly distributed but affirmed that investment is already flowing and that demand for AI-enabled products is rising sharply across the country [70-78]. He cautioned that the ability to execute at the required speed and scale still needs to be proven [79-80].


Strategic focus for manufacturing – When asked where India should concentrate its manufacturing efforts, Dr Zacharia argued that the country should move from merely seeking compute capacity to building sovereign capability [89-92]. He distinguished “sovereignty” (keeping data and applications within India) from “resilience” (developing indigenous IP and participating in the global supply chain without necessarily mastering the most advanced 2 nm nodes) [93-103]. Zacharia identified niche, high-value segments such as co-packaged optics and AI-infrastructure interconnects as realistic entry points, noting that these components are not widely available globally and that India could “stab the jib” in these areas [104-108].


He advocated public-private partnerships (PPPs) modelled on the U.S. “Genesis” program, explaining that the public sector should provide policy direction and demand signals while de-risking large-scale research through collaborative frameworks that fund compute infrastructure, software stacks and “lighthouse” problems, without directly subsidising private ventures [116-124][216-226]. Alongside Genesis, he highlighted China’s 20-year HPC-to-AI trajectory as an example of how coordinated national compute initiatives can seed long-term AI leadership [119-124][123-125].


Zacharia also pointed to AMD’s Helios project, an open-standard platform that could enable Indian firms to become leading providers of specific components, illustrating how open-standard ecosystems can create a competitive edge [210-212].


Policy perspective – Prof. Singh added a complementary view, urging a “strategic autonomy” approach: clearly delineating which technologies must be indigenised for national security and which can remain open to global collaboration [141-148]. He stressed that such a framework would protect critical components while still benefiting from international knowledge exchange.


Private-sector model – Mr Garg described the Indian private-sector approach as a “vertical-stack” model, where firms integrate design, fabrication and system integration while simultaneously developing ancillary ecosystems such as clean-room facilities, chemical suppliers and packaging verification [153-164]. He argued that experimenting across many domains (“throwing darts at hundreds of problems”) will eventually reveal the few areas where India can achieve first-mover advantage, even if the nation is currently “late to the party” in semiconductor technology [153-154][165-166].


Closing remarks – The moderator stressed that momentum alone will not secure India’s AI and semiconductor ambitions; disciplined sequencing, capital allocation, institutional alignment and deep infrastructure are essential [198-202]. He also raised the question of embedding sustainability as a core design choice rather than a trade-off [274-277].


Dr Zacharia responded that AMD designs its products with an explicit goal of flattening the energy curve, acknowledging that sustainability requires humility and continuous course-correction [278-285].


When asked what single move India must execute flawlessly, Rahul Garg emphasized that success will not hinge on a single action but on a fast-follower capability combined with a global-scale ambition. He argued that India must rapidly scale capital (≈ $10-20 bn) through coordinated public-private effort to compete with larger global pools [240-247].


Key agreements (each supported by transcript citations):


– AI leadership demands a holistic ecosystem linking silicon, software, systems and policy [32-34][116-119].


– Substantial public and private financing, preferably through PPP de-risking mechanisms, is critical [56][70-78][216-221][255-262].


– India’s large talent pool provides a fast-follower advantage that must be leveraged via coordinated action [38-39][214-215][239-240].


– Developing indigenous IP and nurturing local startups are essential for sovereign capability [84-86][98-100][56-57].


Points of divergence (with supporting citations):


Scope of manufacturing: Zacharia advocates focusing on niche supply-chain components such as co-packaged optics [104-108]; Garg argues for building mid-range fabs and a vertically integrated ecosystem that includes clean-room and packaging capabilities [153-164].


Capital mobilisation: Zacharia suggests the government de-risk projects without direct subsidies [216-221]; Garg highlights the need for massive pooled capital (potentially $10-20 bn) that may require more direct state involvement [70-78][214-218].


Openness vs. strategic autonomy: Singh calls for clear rules on strategic autonomy, delineating indigenisation priorities [141-148]; Zacharia’s Genesis model favours an open, collaborative research environment [225-229].


Strategic posture: Garg’s fast-follower narrative contrasts with Zacharia’s forward-looking supercomputing mission that seeds long-term capability rather than merely chasing existing technologies [119-124][214-218].


The panel concluded with consensus that India’s AI and semiconductor future hinges on coordinated public-private effort, strategic focus on high-value niche technologies, aggressive talent skilling, and embedding sustainability into design. Unresolved issues include defining the exact roadmap for private-capital mobilisation, specifying timelines for moving from niche participation to more advanced fab capabilities, establishing mechanisms for IP transfer from academia to industry, and creating metrics to monitor sustainability outcomes. Together, these insights outline a roadmap that combines ambitious policy, targeted investment and a skilled workforce to realise India’s AI sovereignty by the 2030 horizon.


Session transcript: complete transcript of the session
Moderator

Thank you. Thank you. …across CPUs, GPUs, SoCs, and AI engines that power cutting-edge compute systems worldwide. She brings a rare combination of deep silicon expertise, global product leadership, and national ecosystem engagement. She is deeply committed to talent development in the ecosystem as well. Please join me in welcoming Jaya, who will be moderating our session. Our first panelist is Dr. Thomas Zacharia, Senior Vice President for Strategic Technical Partnerships and Public Policy at AMD, Inc. Dr. Zacharia previously led Oak Ridge National Laboratory, where he oversaw the deployment of multiple world-leading supercomputing systems, including Frontier, the first exascale supercomputer. His career spans scientific discovery, national compute infrastructure, public policy, and global partnerships. Please welcome Dr. Thomas Zacharia.

Joining us is Professor Vivek Kumar Singh, senior advisor on science and technology at NITI Aayog. Professor Singh plays a central role in shaping India's science, technology and innovation architecture, from R&D governance to university-industry collaboration and state-level innovation ecosystems. With a background in computer science and data analytics, and experience in academic leadership at leading institutions, he bridges research depth with national policy execution. Please welcome Professor Vivek Kumar Singh. My apologies. And finally, we have Mr. Rahul Garg, founder and CEO of Moglix. Rahul has built one of India's leading industrial supply chain platforms and has expanded into manufacturing and industrial finance, navigating the realities of scale, capital and execution in India's industrial ecosystem. Please welcome Mr.

Rahul Garg. We will now be beginning the discussion. Thank you so much for joining us.

Jaya Jagadish

All right. Good afternoon, everyone. I would like to extend a very warm welcome to each one of you for this session, and thank you for taking time to be here with us. We are meeting at a moment when AI is no longer a niche technology. These conversations have become foundational, and this technology is reshaping entire economies; that's the global impact it can have. Having spent about three decades in the semiconductor industry doing design engineering, I have seen compute evolve from single-threaded processors to massively parallel AI systems. That's stupendous growth that we have seen, and a true transformation of technology. And honestly, AI is probably the most transformational technology that we will see in our lifetimes.

And true AI leadership is something there is a global contest for. Every country wants to achieve self-reliance and leadership in AI; that's the importance of the technology we are talking about. But true AI leadership happens only when silicon, software, systems, and policy all come together. No one aspect can really get us there. And that's what truly excites me about today's session. We have experts with knowledge in many of these aspects, and we will be asking questions and they'll be sharing their perspectives, which I'm sure all of us will enjoy listening to. So, coming to India: from what I see, India is truly well poised for this technology shift.

And we bring together engineering talent, silicon design strength, and a growing ecosystem of system and infrastructure partners, including manufacturing. But what truly makes this moment different is the scale and the speed at which we are moving. We do see a strong commitment, but collaboration is equally important. No one country or one organization can truly achieve the results or be successful at this; we all need to collaborate. We all need to become very aware, because this is not a simple thing. It has the potential to touch human lives and humanity. Through this panel, today I want to look at three perspectives.

First, how do we continue to build the intellectual foundation? Second, how do we build manufacturing depth and supply chain resilience through a sustained investment model? And third, how do we build a credible, sovereign AI capability? I will get to Vivek first.

Vivek Kumar Singh

Thank you, Jaya. This is a very important question. I think India has already taken a call to go in a big way into the whole deep-tech domain, and we see a lot of changes happening in terms of AI compute, AI data centers and so on. Recently, we all heard about the tax holidays for data centers that are going to be created in India. There are also platforms like AIKosh, and that's very, very important: if you want to create AI applications for India, you need AI data that is centered in India, for the context of India. When you talk about credibility, about how credible we are in this deep-tech domain comprising AI, semiconductors, biomanufacturing and other areas, what is very important is that credibility doesn't come only from announcements. What we really need to do is go at scale, and fortunately a lot of positive changes are already happening. We have the IndiaAI Mission; we all know about that: ₹10,000-plus crore over five years, a very systematic effort where all seven pillars address all the kinds of needs we have for AI. Similarly, if you look at semiconductors, we all know what is happening in fabs. We also know that India has a very strong ecosystem of VLSI and semiconductor design; unfortunately, most of that IP is not with India, but there will come a time when India also owns a lot of IP. So credibility, I think, will be very important for India, and it is coming not only as announcements but as a commitment to scaled deployments and accelerated growth.

And what we see now is something nobody could have thought of ten or even five years back. So I believe we are on track and very much into the whole realm of AI and semiconductors. A lot of push is there, the whole ecosystem is evolving, and as we move further, we are all going to work towards creating a very credible ecosystem for the overall growth of the sector.

Jaya Jagadish

Great insights. Thank you, Vivek. Now, moving to Rahul. There's clearly a growing momentum to strengthen manufacturing in India. Given your journey, you have expanded Moglix from digital marketplaces into manufacturing and industrial financing. Do you believe the Indian private sector is truly ready, financially, and has the mindset to take on the long-term investments that are needed?

Rahul Garg

So firstly, thank you for having me. The question is very pertinent because pre-COVID there was a very different environment, from both a geopolitics and a supply chain perspective; "supply chain" as a phrase really became popular in COVID times. And I take some pride in the fact that, at least as Moglix, we have been part of the supply chain journey in the country and are now seeing the manufacturing journey as well. On the specific point you raised, look at it from three aspects: will, capital, and demand. The demand in the country is clearly growing rapidly.

One of the changes that has happened is India becoming larger in terms of GDP size and consumer demand: people expecting faster and faster products, people wanting variety, and so on. Demand and discretionary spend are increasing. The one significant change post-COVID is that while demand is growing, there is also an increasing appetite to build more and more manufacturing, and to localize much of it rather than just depend on global supply chains. We have been through moments where we did not have enough mask capacity in the country, or enough oxygen concentrator capacity, and some of those shocks got both the private and the public sector realizing that there is a bare minimum of manufacturing that needs to happen in the country for it to be truly self-reliant at our population scale. So that will has been generated, and the capital is starting to flow in. On whether the capital is large enough and long-term enough: we are seeing an increasing trend. There is clearly government will, whether in terms of the ₹1 lakh crore fund we have seen, now 1.2 billion dollars for specific AI deep tech, and things like that.

But also private capital: within this week, the numbers I'm hearing are more than 100 billion dollars of commitments from private companies saying they are going to invest in data centers, localization, and so on. So the capital flow is happening today. Is it broad-based? The answer may be no. But has it started to happen, and has it started to go from maybe a few hundred crores to a few billion dollars? Yes, that is happening. Can we execute at the same speed and scale? Only time will tell.

Jaya Jagadish

Sure. No, there's definitely an increased momentum. But along with manufacturing, I'm also biased towards the design front based on my experience. I definitely want to see a lot more local startups. And as Vivek just mentioned, we don't own the IP; having our own IP is one of the key steps we need to take. So, moving on, a question for Thomas. If advanced fabs remain limited globally, where should India focus in the near future? Where can we realistically create value in the next three to seven years?

Thomas Zacharia

Thank you, Jaya, and I just want to echo the sentiments that my colleagues here on the panel have mentioned, so I'll build on that. I think the opportunity for India is to move from compute to capability, right? That's really where we need to be. And I'll pick a couple of areas. Sovereignty and resilience get intermingled, so I'm going to keep those two things separate. Sovereignty is where you are really trying to make sure that your data and your applications or use cases are resident in country and relevant to the country. And that's an area that is uniquely India's to lead, because no one else is going to do that.

It has to be done. And you already mentioned the opportunity; we were with the CEO of Medi today talking about 50,000 startups. I don't know how to get my head wrapped around 50,000 startups, so I asked him: can you tell me who the top 50 are, so that perhaps a company like AMD can partner with them and try to help them mature? So that is on the sovereignty side. On the resiliency side, the reality is that any sovereign country expects to have resiliency and create its own IP, and India should have the same aspiration given its scale of ambition and scale of population. And here, we certainly should have an ambition to go up the development cycle to the leading edge of chip design.

I think there is an opportunity to also look at being part of the supply chain for leading-edge deployment. You don't necessarily have to be at the two-nanometer scale for GPUs or CPUs. There are critical technologies in the deployment of AI infrastructure at scale where India can play a role. For instance, we know that the entire ecosystem is going to be driven to optics as the interconnect technology, co-packaged optics, and there are clear supply-chain gaps there that are not filled globally. That is something being considered today, and the leading candidates today, I would say, are the U.S., Japan, and Malaysia. But those are the kinds of niche areas where India can step in.

And that is the journey where you are really contributing to first-of-a-kind or nth-of-a-kind leading-edge technology. So that's the way I would approach it.

Jaya Jagadish

Great insights. Thank you, Thomas. Now, continuing: today, AI leadership is ultimately limited not by ambition but by access to secure, scalable computing resources. So, Thomas, continuing with you: you have led exascale-class systems and are now working on sovereign AI partnerships globally. In the U.S., programs such as Genesis and broader national compute initiatives have attempted to systematically align infrastructure, research, and industrial capacity. What lessons from these models are actually applicable to India?

Thomas Zacharia

So I think this is a great area for public-private partnership, in my view. The public part of it is a uniquely government function. Government brings both policy and the demand signal, particularly in the areas of science and innovation, critical infrastructure, whether the energy sector or national security, as well as uniquely government missions. And the opportunity here is, I mean, India has a supercomputing mission. I think there is an opportunity, and India is already thinking about this, to deploy the national supercomputing mission and national scientific infrastructure on a trajectory to reach globally leading scale. Today, look at countries like the U.S. and China. China is a particularly interesting example: China developed the intellectual ecosystem around HPC, which then translated to AI, over a period of 20 to 25 years.

It was intentional. And if you look at where AI penetration, AI adoption, and AI infrastructure reside globally, you can directly trace that to investments in supercomputing missions that built the underlying infrastructure. So I think that is a great opportunity, and plans are already there. But it's not a static view. One of the things I would encourage as we plan for the future is not to plan based on where things are, but on where things will be by the time we deploy this kind of infrastructure.

Jaya Jagadish

That’s great. A future -looking planning is what we… Thank you, Thomas. Vivek, moving to you, from a policy standpoint, how do we balance national security concerns with openness and global collaboration?

Vivek Kumar Singh

Well, it's a very tricky question, I would say. For a country like India, the kind of culture we have is that we have always believed knowledge is a common good. That is how our whole innovation ecosystem has been operating. Our universities have been creating a lot of knowledge, and researchers and R&D people have been trained with the idea that whatever you create should be for the common good. There were never efforts to productize it, to convert it into socioeconomic goods, to protect it with exclusive rights and so on.

So that was the common thing we had been doing earlier. But what is happening now is that we are in a completely different world.

And that is where our academia and R&D institutions are also being asked to change course completely. It's not enough that researchers and faculty members in universities end up with a research publication; it's very important that you productize also. Now, if you talk about the culture of innovation and the global world we are in, particularly for sectors like AI and semiconductors, I think we need strategic decision-making: what is it that we want to do? For example, there are certain sectors where the setup we are using has components that may be used in some critical deployment.

So in those cases, what we need is clarity of rules. What is it that we would like to indigenize? What is it that we would like to build on our own? And what is it that we can keep open to the rest of the world, for collaboration and so on? I would say two words are important: strategic autonomy. Autonomy where it is needed; but everywhere else, where we can collaborate with the world and contribute to collective knowledge creation, India can always play a role, and India is playing a role.

Jaya Jagadish

Great. Rahul, question to you. As AI infrastructure scales, demand patterns for chips and hardware will shift. How should Indian manufacturers position themselves early? And secondly, where are the first-mover advantages?

Rahul Garg

I think we are kind of late to the party, in some sense, in semiconductors and chips; some say two or three decades late. And there are a couple of countries that have a disproportionate advantage, not just in what is popularly known as 2 nm, with two or three companies dominating that, but in the entire ecosystem required around all of those factories, chipsets, systems and so forth. So I think the India journey will be its own unique path. One thing I've seen over the last 20 years: if we had waited for landlines to reach 10% of the population, we would not have had the mobile revolution; if we had waited for credit cards, it would not have happened, right? In this new era, manufacturers will have to find a few spots which may not be obvious given the conventional way countries and ecosystems are built. One of the good advantages of events like this is that you have a very large population of smart and talented people throwing darts at hundreds of problems simultaneously, and maybe five years later we will say, okay, these are the three things that worked. I don't think there is a single unique path today. It definitely does seem that we need to start building capabilities. In design we have capabilities, but we don't have the productization capability, so that is one capability that needs to be built. On the manufacturing side, while we are starting with some fabs in the mid zone, there is the entire ecosystem of chemical suppliers, clean-room suppliers, utility suppliers.

How do you make sure there is enough packaging and verification, and that much of that ecosystem gets developed? All of those are going to happen simultaneously, so I think the opportunity remains in all of these areas. Therefore my encouragement, at least to my management and in the way we are looking at Moglix, is: try 10 things. Do not be scared to try one or two things and fail. Conventionally, in the Western world, companies have built and scaled horizontal capabilities. In India, historically over the last 15 years, every startup and every large company has built vertical stacks of companies.

So they are integrated: they may go from chip design to manufacturing, to systems, to product. That's just how the model has evolved so far. So I think that's how it will be: vertical-stack manufacturing, where all parts of the ecosystem have to be given a shot, and maybe over time it will become horizontal.

Jaya Jagadish

That’s great. Thank you. So, you know, I do see quite a few students in the audience. So one thing that we are now facing is kind of with this technology. What is knowledge? How do we acquire knowledge? I mean, traditionally, we go to schools, universities for that. But today it’s at your fingertips. And with that advancement of AI, it’s just going to get better. You want to learn about something, you always have it on your fingertips. So what really do we need? How do we prepare the next generations to solve the problems of the future is the question. I mean, we cannot just stick around with our traditional ways of learning. And we have to scale and adapt to the newer ways.

So question for you, Vivek, how can we prepare ourselves and equip ourselves for this next phase that’s coming?

Vivek Kumar Singh

Well, I would say efforts have already started, so that's the best thing. And as you rightly said, this is the best time to be a student. If you take yourself 20 years back, you were always constrained by resources: the best you had was a lot of books, you had to go to a library, there were books you couldn't afford, and books were also not on time, so you had later editions and so on. What is happening now is that you have lots of information, information that can be customized for you specifically; you have recommender systems, retrieval-augmented generation systems, all of this with generative AI. The best part is that you have plenty of information. If you want to learn anything or acquire a skill, you always have resources, and most of the time you don't even have to pay, because there is a lot of material you can access for free.

As for programs, particularly for India, we have NASSCOM's FutureSkills Prime, which is an aggregator for a lot of online courses. Similarly, there are platforms across the world that you can use. Essentially, what we had been doing in our universities, colleges and other institutions earlier was largely memory-based learning: we were acquiring knowledge and memorizing things. Now, over a period of time, a more synthetic perspective is percolating across institutions, so students are going more into the creative aspect, where they are able to create solutions to certain problems.

And with the whole ecosystem around startups, we all know India is the third-largest startup ecosystem in the world, with a lot of support systems; most of our universities have incubators and other support structures. So this is the best time, and that is why I said this is the best time to be a student. If you have a creative idea, you will always find support, and there are a lot of skilling programs from the Government of India and many other organizations, many philanthropic supports, and even a lot of organizations with their own products offering free training to students. This is very good. But what is also important: we keep hearing that AI is going to cause disruption in jobs, because a lot of jobs in areas like software testing and customer support are gone. At the same time, these technologies are also creating new jobs, and for that you need to prepare yourself. Fortunately, the best part is that we have enough material, enough resources, and enough support systems to prepare for the new kinds of jobs that are going to come. The whole revolution we see in front of us will require massive skilling, and a bit of reskilling also.

So many of my batchmates, 25 years into their careers, now feel that they have to reskill themselves in many, many new things. Life was very good somewhere in Silicon Valley for 25 years, a lot of money, but now they feel threatened. And that is the beauty of startups and all the new ideas that are there. So I would simply end by saying that this is the best time to be a student.

Jaya Jagadish

Absolutely, totally agree. You know, I have to share this. I was conducting a panel discussion within AMD with senior execs, and one of the fun questions was: if there is a machine or an equipment that you want to invent, what would it be? And the unanimous answer was, I would love to have a machine that can make me 20 years younger, right? So, you know, you guys are extremely lucky; make use of this opportunity to the maximum. All right. As we come to the final leg of this discussion: India's opportunity in AI and semiconductors is very real, but it's also time-bound. Momentum alone will not be enough. Sequencing, capital discipline, institutional alignment and infrastructure depth truly matter.

And all these areas have to work in complete alignment with each other. So, you know, let me close the session by asking each of you one more question. First one is for Rahul.

In the global race where others are moving fast, what is the one move India must execute flawlessly to stay competitive?

Rahul Garg

I think, like many other things, it's not one move; maybe we do everything as a Bollywood dance move, right, with ten moves to everything. But one of the things that has happened, at least from my vantage in the startup ecosystem over the last 15 years, is that we have become extremely good at being fast followers. Maybe 15 years back, if there was a product or a service in the US or Europe, it would take a three-to-five-year lag to come to India; now that lag is maybe a month or 15 days. Probably for ChatGPT, within the first month, the maximum number of users were coming from India, right? So, thanks to technology, we have become extremely fast followers. The number of apps built in India might be higher than in most countries combined; the US and China might be the only ones ahead, but otherwise India would be in the top three in building apps. I think the move that needs to happen is scaling the ambition beyond India to the global platform, because most of the effort of the last 15 years has been around dominating Indian consumer businesses and applications. We need to up the game globally, and that will require a significant public-private upping of the game, because the capital pools we are competing with cannot be attracted by private players alone. If someone is raising 100 or 200 billion dollars, we need to at least start the race with 10, 15, or 20 billion, which is not possible today purely from the private side. So how do we, as government and as private players together, raise the capital bar and the global bar? That's one thing I would love to see.

Moderator

That’s a very valid statement Right Next question to Thomas Thomas, if we had to place one strategic bet that defines India’s position in AI and semiconductors by 2030 what should it be?

Thomas Zacharia

So I’m going to repeat what Rahul said there is, you know, the one I don’t know much about Bollywood dance moves but I would say one move is certainly ambitious I’m going to sort of regress back to a few previous questions since we have a few minutes, I thought I will sort of start with public -private alignment. Rahul mentioned that it is very, very hard for private sector to to to raise the kind of capital that’s being raised elsewhere in India. And that’s part of it is, you know, so one of the important things that government can do is to de -risk that enterprise. Now, I don’t believe that government should de -risk a private sector’s business venture by investing in that effort.

But there are unique places where government can de-risk through public-private partnerships that would enable this ecosystem to develop, so that additional ventures can be taken up by the private sector on their own. Because I don't think taxpayer money should be used to subsidize, although, look, there is a role. You mentioned Genesis; I did not describe Genesis, and I don't know whether those of you in the audience know what Genesis is, so I'll take a couple of minutes to discuss it as an opportunity to think about how to frame public-private partnership. Today, the United States spends a trillion dollars a year on R&D, and roughly 20 to 25 percent of that is government; the rest is private sector. If you look at R&D spend in the United States, it has been steadily growing, keeping up with inflation or slightly above, 2 to 3 percent year over year. But if you look at innovation output, it has flatlined. Part of that is because the problems are getting more and more complicated: discovering new materials, a cure for cancer. All those things are increasingly impactful for society, but also significantly challenging.

So the goal of the Genesis Project is, one, to align public and private partnership, and two, to invest government resources to bring academia, national laboratories, and the private sector together to identify what they call lighthouse problems, you can call them grand challenge problems, that are relevant and likely to move the needle across these areas. The government is then investing substantial resources in compute infrastructure and the software stack, partnering with the private sector on these important problems. Because it is being done in an open, collaborative framework, this work is, in my view, appropriate for government investment: the government is not investing directly in any particular business, but each business is able to take the fruits of this collaboration to drive innovation in its own sector.

So I think that is a really good model. And I think it was already alluded to: if you are a fast follower, if you follow anybody, it may be appropriate for a business, but as a nation, anytime you follow somebody, and if that is your ambition, you are destined at best to be number two, because there is always somebody ahead of you. So for a country with the history of India, the ambition of India, the talent of India, and now the will of India, there is nothing wrong with aspiring to be world leading in parts of this, strategically deciding where India can lead. No country is going to dominate every aspect of this ecosystem; the task is to identify strategically where one can be that leader globally.

And I would say, at least speaking for AMD as an example, we were discussing Helios and how it is based on open standards. There are many components. It may not be the GPU that you start with, but there are many components where the private sector in India can aspire to be a leading provider based on open standards, so that a business like AMD, or a public-private venture, would say: I can get a better product and better total cost of ownership if I can plug into that. And one last thing, I cannot let you get away with saying this is great only for the youngsters. Ray Kurzweil said that today, for each of us in this room, we age only eight months for every chronological year because of advances in medical care.

And that is true because of longevity: people are living longer because of better drugs, better health, better living, and so on. AI has the added advantage of providing greater solutions. So it’s not just the youngsters; there is hope for us as well.

Moderator

Absolutely. So we are all truly lucky to be here in this age of AI. That was very insightful, thank you, Thomas. So Vivek, a question for you. I was going to ask what is the one bold decision, but I’m going to change that to: what are some of the bold decisions we must take to ensure we don’t look back and regret five years from today?

Vivek Kumar Singh

Well, I think the biggest advantage that India has is, of course, a huge pool of talent, so that is something that we all need to rely on; that’s the most important thing for a country like India. And India has an inherent culture of innovation, so it’s not that we are always following, or just looking at technologies and so on. The fact is that the ecosystems we have been living in were not geared up, not situated in a context where we were creating products, so the culture of transforming that innovation into products has unfortunately not been there for a long period of time. Things are changing now, and probably what we need to do is to invest more in our youth, to invest more in skilling, and to invest more in converting the knowledge that we generate in our universities and R&D labs into actual usable products which have socio-economic impact.

So that’s the most important thing that I believe we should be looking at. Of course, we also have the advantage of scale. A lot of things that we have done, and we have proved it with the digital public infrastructure that we have created at the population scale of a country the size of India, matter a lot. If you go to any part of the world, particularly anywhere in Europe, and you identify yourself as being from India in some discussion related to IT, you will always be regarded with a lot of respect, in the sense that everybody believes that India is an IT superpower, largely because of the talent that we have.

So this is something that we should leverage, and we should do it. And it is something that we really need to invest in heavily, looking at what is coming for the next generation and providing an environment, our Prime Minister keeps talking about ease of doing business, that enables us to transform the knowledge we create into usable products.

Moderator

No, absolutely right. Talent, skilling, and ease of doing business, all of these are coming together for India. In fact, I led the committee for future skills, and I got the opportunity to work with 13 other eminent leaders from industry and academia across the board. One thing that stood out was that if we can get our skilling right, we can supply talent not just for India but globally; that’s something that’s going to be very effective if we get our skilling actions right. So thank you again. Today’s conversation was truly insightful and inspiring. We touched upon many aspects of semiconductors, AI, the ecosystem, and India’s potential as such.

And again, AI leadership will not happen by accident. It will require deliberate alignment across policy, industry, research, and infrastructure. We have many strengths that we need to keep strengthening and leveraging for the growth that we are ambitious about. What matters now is decisive execution, moving with clarity and with urgency. So it’s going to be a great journey, and I once again want to reiterate that we are truly lucky to be here in this phase. What a fantastic journey we have ahead of us; let’s be committed to that journey of learning and advancement. Thank you so much for attending this session. I appreciate your time.

Thank you. Do we have time for audience questions? We can take one or two.

Audience Member

Out of the 500 sessions here, this is the one on semiconductors. I’m very glad that you organized it. Very, very insightful, a few amazing questions and good responses. Quickly to my question: I teach AI and sustainability at IIM, and I cover the entire supply chain, starting from raw materials, going through chip design and manufacturing, the semiconductor supply chain essentially, all the way to data centers and electronics use. Sustainability is at the core of all design decisions in my class, and that’s what we are trying to teach the next generation of management talent in India. Your thoughts on having sustainability not as a trade-off but as a core design choice for every decision that is made, whether in India or any other country?

Thomas Zacharia

So it’s a great question, and I think it matters to every one of us; certainly I can speak for the company that I represent here. And I must say, since I’m in India, I am going to give a shout-out to the 10,000 AMDers in this country. AMD would not exist without you; we would not be able to do what we are doing without the contributions they make every day. So India is already very much part of a global supply chain. Sustainability is very key. We design our products with an explicit goal of flattening the energy curve, because it’s easy to say we’re going to build megawatts and gigawatts, which we may, because this is going to be fundamental infrastructure on which society progresses.

But it’s incumbent on us to ensure that we are very, very thoughtful and committed to sustainability. I would also say that we have to be humble enough to know that we are not going to get everything right. I was at a U.S. National Academy meeting where Subhash Suresh was the president at the time; we had just rolled out the grand challenges for the 21st century. And he said: if you look at it, the grand challenges of the 21st century are attempting to solve the problems created by the solutions to the grand challenges of the 20th century. So the reality is that we don’t know what we don’t know. But as long as we hold sustainability as a core goal and are humble enough to know that we are not going to get it all right, then I think we cannot stop progress.

We need to continue to move forward, knowing that we are not going to get everything right, and course-correct as we go along.

Moderator

Okay, I’m told we are out of time. Yes, we are running out of time. I really appreciate you joining us for this session, and my heartfelt thanks to our distinguished guests. As a token from MeitY’s side, I would like to give a small memento. Pooja, you can also join. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (14)
Factual Notes: Claims verified against the Diplo knowledge base (2)
Confirmed (high)

“Dr Thomas Zakaria is Senior Vice President for Strategic Technical Partnerships and Public Policy at AMD.”

The knowledge base lists Dr Thomas Zakaria as Senior Vice President for Strategic Technical Partnerships and Public Policy at AMD, confirming his role [S1].

Confirmed (medium)

“Prof. Vivek Kumar Singh said AI‑augmented learning shifts education from rote memorisation to creative problem‑solving.”

Future-Ready Education material describes a shift from regurgitation-based learning to critical thinking and creativity, supporting Singh’s description of the transition to AI-augmented, creative problem-solving [S79].

External Sources (80)
S1
The Global Power Shift India’s Rise in AI & Semiconductors — First. First one is for Rahul. In the global race where others are moving fast, what is the one move India must execute …
S2
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — First. First one is for Rahul. In the global race where others are moving fast, what is the one move India must execute …
S3
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion was framed around India’s opportunity in AI and semiconductors, with the moderator establishing that AI r…
S4
Building the AI-Ready Future From Infrastructure to Skills — – Timothy Robson- Thomas Zacharia
S5
The Global Power Shift India’s Rise in AI & Semiconductors — – Thomas Zacharia- Rahul Garg – Vivek Kumar Singh- Thomas Zacharia
S6
The Global Power Shift India’s Rise in AI & Semiconductors — -Vivek Kumar Singh (Professor): Senior advisor on science and technology at NITI Aayog; plays central role in shaping Ind…
S7
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — Joining us is Professor Vivek Kumar Singh, Senior… advisor on science and technology at NITI IO. Professor Singh plays…
S8
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S9
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S10
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S11
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S12
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biolog…
S13
Panel Discussion Data Sovereignty India AI Impact Summit — Okay, I’m quickly coming to the third question. I think you had so many things. Supply chain trust, absolutely. Today, i…
S14
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S15
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And thank you. And maybe I will introduce a few of them. Agri-Co is transforming agriculture through digital tools that…
S16
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S17
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — So my point was that, for example, geo tagging of all the assets of your, you know, right from the power generation to t…
S18
Diplomatic policy analysis — Policy analysis serves as the backbone of diplomacy’s decision-making. It equips leaders and negotiators with the eviden…
S19
Judiciary engagement — AI implementation in judicial systems has wide-ranging effects on various stakeholders including lawyers, litigants, and…
S20
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Lastly, the analysis emphasises the importance of a cross-disciplinary approach. It highlights the necessity for collabo…
S21
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S22
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — Emphasis is placed on boosting supply chain resilience and embedding sustainability as fundamental to the private sector…
S23
The Battle for Chips — In conclusion, India’s strategic approach to developing a comprehensive semiconductor ecosystem demonstrates a commitmen…
S24
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Talent development, education and future skills
S25
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Massive Workforce Development Challenge: The industry faces a critical shortage of approximately 1 million skilled work…
S26
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The government’s response through the India AI Mission has established a shared compute framework providing access to 38…
S27
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — The participant explains that India is following the same successful approach used for DPI development, where basic buil…
S28
The Global Power Shift India’s Rise in AI & Semiconductors — And again, AI leadership will not really happen by accident. It will require a deliberate alignment across policy, indus…
S29
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Jovan Kurbalija: Thank you. She’s quiet. Okay, okay. Good. Great. We heard from our excellent speakers at the very begin…
S30
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — A strategic ecosystem approach requires early use cases in areas where private sector can lead, areas where public secto…
S31
AI/Gen AI for the Global Goals — Boa-Gue mentions the African Startup Policy Framework as an example of an initiative to enable member states to develop …
S32
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S33
WS #462 Bridging the Compute Divide a Global Alliance for AI — However, other panelists emphasized the importance of local infrastructure for enabling indigenous innovation and ensuri…
S34
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “When it comes to discovery, we need to develop foundation models for proteins, RNA, cellular circuits and systems biolo…
S35
Trade Deals or Disputes? / DAVOS 2025 — 4. Investment De-risking: Ensuring stable fiscal arrangements and rule of law to encourage long-term investments. Vandi…
S36
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — -Policy and Regulatory Frameworks: Multiple panelists emphasized the critical role of government policy in reducing inve…
S37
Biology as Consumer Technology — Notably, the analysis highlights the importance of investors taking more risks, as venture funds often shy away from ris…
S38
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Moderate disagreement with significant implications. The disagreements are not fundamental conflicts but represent diffe…
S39
Day 0 Event #270 Everything in the Cloud How to Remain Digital Autonomous — This balanced approach influenced how other speakers framed their arguments, moving away from binary thinking toward mor…
S40
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — Crucially, the address underscored the importance of incorporating developing countries into the global supply chain, ad…
S41
Parallel Session D3: Supply Chain Disruptions – The Role and Response of NTFCs — In summary, the analysis accentuated TFAs as catalysts for managing and enhancing supply chain efficiency. It also under…
S42
How Investment Promotion Agencies (IPAs) and trade institutions could leverage digital tools to create sustainable supply chain partnerships’ — Cambodia has implemented the Pentagon Strategy, a new social and economic policy agenda, to combat climate change and pr…
S43
Keynote-Alexandr Wang — “That’s transformative, perhaps most especially in countries like India, where so many languages are spoken.”[11]. “That…
S44
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — Partnerships can help address the toughest challenges within a country by utilizing data-centric or artificial intellige…
S45
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — The value of cross-sector partnerships, especially during the pandemic, is emphasised. Collaborations between the public…
S46
WS #460 Building Digital Policy for Sustainable E Waste Management — Sustainability must be designed into products from the beginning rather than treated as an afterthought
S47
Empowering the Ethical Supply Chain: steps to responsible sourcing and circular economy (Lenovo) — However, there are challenges that hinder progress towards sustainability. The analysis identifies knowledge gaps in sus…
S48
Creating Eco-friendly Policy System for Emerging Technology — Decision making should be based on evidence. Her argument conveyed a positive stance towards the central role of higher…
S49
Multistakeholder Partnerships for Thriving AI Ecosystems — Not only the big players. So all those things need framework and need governance. And we have to make sure that the outc…
S50
Indias Roadmap to an AGI-Enabled Future — Researchers, founders and policy makers. At Chariot, we are proud to be one of the companies mandated to build frontier …
S51
The Global Power Shift India’s Rise in AI & Semiconductors — The panelists emphasized that true AI leadership requires alignment across four key pillars: silicon, software, systems,…
S52
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S53
WS #270 Understanding digital exclusion in AI era — The speaker stresses the need for collaboration among multiple stakeholders to address AI challenges. No single stakehol…
S54
WS #462 Bridging the Compute Divide a Global Alliance for AI — Successful collaboration requires openness, compromise, and recognition of diverse community needs rather than imposing …
S55
From KW to GW Scaling the Infrastructure of the Global AI Economy — Specific timeline and investment details for India’s semiconductor manufacturing capabilities (Semicon mission) remain u…
S56
The Battle for Chips — Additionally, India advocates for providing more opportunities, investments, and technology to countries with greater po…
S57
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Irakli Beridze (UNICRI) This comment introduced the governance perspective into the scientific discussion, emphasizing …
S58
High-Level session: Building and Financing Resilient and Sustainable Global Supply chains and the Role of the Private Sector — Businesses are encouraged to look outside the finite Caribbean market Effective collaboration, as demonstrated by the C…
S59
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Talent development, education and future skills
S60
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Massive Workforce Development Challenge: The industry faces a critical shortage of approximately 1 million skilled work…
S61
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — The panel reached consensus on the need for fundamental educational reform to prepare students for an AI-integrated futu…
S62
AI: The Great Equaliser? — While the introduction of AI technology may result in job losses in certain sectors, it also creates new job opportuniti…
S63
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The government’s response through the India AI Mission has established a shared compute framework providing access to 38…
S64
Cyber Resilience Playbook for PublicPrivate Collaboration — – Governments can completely exit the zero-day market and avoid research dedicated to finding software vulnerabilities….
S65
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — He introduces a panel of experts from different fields
S66
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Deepali Khanna from the Rockefeller Foundation opened by framing the central challenge: the digital divide is evolving i…
S67
Keynotes — At the European Dialogue on Internet Governance (EuroDIG) 2024, the imperative of multistakeholder collaboration in shap…
S68
Opening of the session — Support expressed for paragraphs 15 and 16
S69
IGF 2019 – Dynamic coalition on blockchain technologies — After Diedrich’s presentation, the moderator opened the discussion to questions from the audience. The first question wa…
S70
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — I heard from Jingdong JD. That’s the goose named with smart has doubled the last year and the the the fourth one is to i…
S71
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S72
Closing Ceremony — This argument positions artificial intelligence as a transformative force rather than merely a technological tool. It su…
S73
Closing remarks – Charting the path forward — Bouverot argues for comprehensive inclusion in AI governance discussions, extending beyond just governmental participati…
S74
Global Enterprises Show How to Scale Responsible AI — The implementation challenge extends beyond organisational commitment to practical tooling and automation. Gurnani empha…
S75
Keynote Adresses at India AI Impact Summit 2026 — Gore reinforced this assessment, noting that “India’s entry into Pax Silica isn’t just symbolic, it’s strategic, it’s es…
S76
We are the AI Generation — In her conclusion, Martin articulated that the fundamental question should not be “who can build the most powerful model…
S77
Open Forum #30 High Level Review of AI Governance Including the Discussion — Lucia Russo from the OECD emphasized three strategic pillars: moving from principles to practice, providing evidence-bas…
S78
Opening keynote — Bogdan-Martin framed the AI revolution as a pivotal moment for the current generation, calling it an opportunity to take…
S79
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — In the analysis, the speakers highlight the importance of future education being skills-oriented to prepare students for…
S80
AI (and) education: Convergences between Chinese and European pedagogical practices — **Norman Sze** (former Chair of Deloitte China) provided industry perspective on AI’s impact on professional work, notin…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
J
Jaya Jagadish
2 arguments · 141 words per minute · 1141 words · 484 seconds
Argument 1
Integrated approach: AI leadership requires coordinated silicon, software, systems, and policy (Jaya Jagadish)
EXPLANATION
Jaya stresses that true AI leadership cannot be achieved by focusing on a single element; it demands the simultaneous development of silicon hardware, software ecosystems, system integration, and supportive policy frameworks. This holistic coordination is essential for a nation to become a leader in AI.
EVIDENCE
She explained that “true AI leadership itself happens when silicon, software, systems, and policy, all of these aspects have to come together to achieve that leadership. No one aspect can really get us there” [32-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Global Power Shift report stresses that AI leadership demands deliberate alignment across policy, industry, research and infrastructure, confirming the need for a coordinated silicon-software-systems-policy ecosystem [S1].
MAJOR DISCUSSION POINT
Holistic AI ecosystem coordination
AGREED WITH
Thomas Zacharia, Moderator
Argument 2
Development of local startups and indigenous IP is essential for a sovereign AI ecosystem (Jaya Jagadish)
EXPLANATION
Jaya argues that building a sovereign AI capability depends on fostering homegrown startups and creating indigenous intellectual property rather than relying on external IP. Indigenous IP is a cornerstone for self‑reliance and credibility in the AI domain.
EVIDENCE
She noted, “I do definitely want to see lot more local startups. … having our own IPs is one of the key steps we need to take” [84-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Credibility in deep-tech is linked to owning semiconductor IP and fostering domestic capabilities, supporting the emphasis on indigenous IP and local startups [S1].
MAJOR DISCUSSION POINT
Local startup and IP development
AGREED WITH
Thomas Zacharia, Vivek Kumar Singh
V
Vivek Kumar Singh
4 arguments · 201 words per minute · 2089 words · 622 seconds
Argument 1
Massive government funding (₹10,000 crore) and a systematic AI mission create scale and credibility (Vivek Kumar Singh)
EXPLANATION
Vivek highlights that the Indian government’s AI mission, backed by more than ₹10,000 crore over five years, provides the financial muscle and systematic approach needed to build credibility at scale. This funding underpins large‑scale deployments rather than isolated announcements.
EVIDENCE
He referenced “the India AI mission … 10 000 plus crores for five years” as part of a systematic effort that addresses all AI needs [56].
MAJOR DISCUSSION POINT
Government funding for AI credibility
AGREED WITH
Thomas Zacharia, Rahul Garg, Moderator
Argument 2
Credibility stems from large‑scale deployments and genuine IP ownership, not merely announcements (Vivek Kumar Singh)
EXPLANATION
Vivek asserts that credibility in deep‑tech arises from actual large‑scale implementations and ownership of intellectual property, rather than from promotional announcements. Sustainable credibility requires tangible, scaled outcomes.
EVIDENCE
He stated that “credibility doesn’t come only from announcements… it comes as part of commitment for scaled deployments, for scaled growth” [56-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The analysis notes that credibility depends on scaling deployments and owning semiconductor IP rather than on promotional announcements [S1].
MAJOR DISCUSSION POINT
Substance over hype for credibility
AGREED WITH
Jaya Jagadish, Thomas Zacharia
Argument 3
Post‑COVID supply‑chain shocks have generated strong will and government incentives (tax holidays, ₹1.2 bn AI fund) to localise production (Vivek Kumar Singh)
EXPLANATION
Vivek points out that disruptions caused by COVID highlighted the need for domestic supply‑chain resilience, prompting policy measures such as tax holidays for data centres and a sizable AI fund to encourage localisation. This has created a political will to build manufacturing capacity within India.
EVIDENCE
He mentioned recent “tax holidays for data centers” and the broader push to localise production following pandemic-induced supply-chain shocks [53-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Post-COVID supply-chain trust issues and the push for domestic hardware and AI provenance are discussed in the Data Sovereignty panel, reflecting policy incentives such as tax holidays [S13].
MAJOR DISCUSSION POINT
Policy response to supply‑chain shocks
Argument 4
Balancing strategic autonomy for security‑sensitive sectors with open global collaboration is crucial (Vivek Kumar Singh)
EXPLANATION
Vivek emphasizes the need for India to define clear rules on which technologies should be indigenised for strategic autonomy while keeping other areas open for international collaboration. This strategic autonomy ensures security without isolating the ecosystem.
EVIDENCE
He outlined the need for “clarity of rules” on what to indigenise versus what to keep open, describing this as “strategic autonomy” [141-147].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for sophisticated frameworks to decide which components to indigenise versus keep open, i.e., strategic autonomy, is highlighted in the Global Power Shift briefing and the Data Sovereignty discussion [S1][S13].
MAJOR DISCUSSION POINT
Strategic autonomy vs openness
DISAGREED WITH
Thomas Zacharia
R
Rahul Garg
3 arguments · 172 words per minute · 1338 words · 466 seconds
Argument 1
Growing demand, political will, and rising private‑capital commitments (>$100 bn) are enabling manufacturing expansion (Rahul Garg)
EXPLANATION
Rahul observes that India’s expanding consumer demand, reinforced by political commitment and a surge in private‑sector capital—exceeding $100 billion—are driving rapid growth in manufacturing and AI infrastructure. This confluence of demand and financing is reshaping the industrial landscape.
EVIDENCE
He noted that “demand in the country clearly is growing rapidly” and cited “more than 100 billion dollar plus commitment from the private capital companies” for data-center localisation [70-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Private-sector appetite for localisation post-COVID, with commitments exceeding $100 bn, is documented in the Global Power Shift report [S1].
MAJOR DISCUSSION POINT
Demand and capital fueling manufacturing
DISAGREED WITH
Thomas Zacharia
Argument 2
Companies should adopt vertical‑stack models, develop mid‑zone fabs, and nurture ancillary ecosystems (clean‑room, packaging, chemicals) (Rahul Garg)
EXPLANATION
Rahul recommends that Indian firms pursue vertical integration—from chip design through manufacturing to system integration—while establishing mid‑range fabs and building supporting ecosystems such as clean‑room facilities, chemical suppliers, and packaging verification. This approach can accelerate capability building across the supply chain.
EVIDENCE
He described the need for “vertical-stack models” and highlighted the importance of “mid-zone fabs” and ancillary ecosystems like “clean-room suppliers, utility suppliers, packaging verification” [153-163].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recommendations for vertical integration, mid-zone fabs and supporting ecosystems such as clean-rooms and packaging are outlined in the Global Power Shift analysis [S1].
MAJOR DISCUSSION POINT
Vertical integration and ecosystem development
DISAGREED WITH
Thomas Zacharia
Argument 3
India’s large talent pool gives it a fast‑follower advantage; scaling this to a global platform requires coordinated public‑private effort (Rahul Garg)
EXPLANATION
Rahul argues that India’s abundant talent enables it to adopt new technologies quickly, as seen with rapid adoption of tools like ChatGPT. To translate this speed into global leadership, coordinated action between government and private sector is essential.
EVIDENCE
He highlighted that “we have become extreme fast followers” with India leading in early ChatGPT adoption, and stressed the need for “significant public-private” collaboration to raise capital to global levels [214-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report emphasizes that coordinated public-private action is needed to translate India’s talent and fast-follower capability into global-scale leadership [S1].
MAJOR DISCUSSION POINT
Fast‑follower talent leveraged with PPP
DISAGREED WITH
Thomas Zacharia
Thomas Zacharia
6 arguments · 136 words per minute · 1744 words · 765 seconds
Argument 1
India should target capability rather than leading‑edge fabs, focusing on niche supply‑chain areas such as co‑package optics and AI‑infrastructure components (Thomas Zacharia)
EXPLANATION
Thomas suggests that India’s realistic path is to develop capabilities in specialized, high‑value supply‑chain segments—like co‑package optics and AI‑infrastructure interconnects—rather than attempting to build leading‑edge 2 nm fabs. This niche focus can still contribute significantly to global AI deployments.
EVIDENCE
He explained that “you don’t necessarily have to be at the two nanometer scale… there are critical technology in the deployment at scale of AI infrastructure where India can play a role” and cited “co-package optics” as a niche area where India can “stab the jib” [102-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The briefing argues that India’s realistic path is to focus on strategic supply-chain participation and niche capabilities rather than pursuing leading-edge 2 nm fabs [S1].
MAJOR DISCUSSION POINT
Niche capability focus over leading‑edge fabs
DISAGREED WITH
Rahul Garg
Argument 2
Private sector must build and protect its own IP while leveraging global collaborations (Thomas Zacharia)
EXPLANATION
Thomas emphasizes that Indian companies need to generate and safeguard indigenous IP, while also seeking partnerships with global players to mature startups. This dual approach balances self‑reliance with the benefits of international collaboration.
EVIDENCE
He recounted asking the CEO of Medi about the “top 50” startups so that AMD could partner to help them mature, underscoring the need for local IP development and strategic partnerships [98-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ownership of semiconductor IP and strategic partnerships with global firms are identified as essential for Indian companies in the Global Power Shift document [S1].
MAJOR DISCUSSION POINT
IP creation and strategic partnerships
AGREED WITH
Jaya Jagadish, Vivek Kumar Singh
Argument 3
Capital is beginning to flow, though still uneven; public‑private de‑risking can broaden the base (Thomas Zacharia)
EXPLANATION
Thomas notes that while capital is entering the ecosystem, its distribution is uneven. He proposes public‑private de‑risking mechanisms to encourage broader private investment without direct subsidy, thereby expanding the funding base.
EVIDENCE
He discussed the role of public-private partnerships in de-risking ventures, stating that “government can de-risk through public-private partnerships that would enable this ecosystem to develop” [216-221].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public-private de-risking mechanisms to broaden capital flow are discussed as a way to support the emerging ecosystem [S1].
MAJOR DISCUSSION POINT
PPP de‑risking to broaden capital base
AGREED WITH
Vivek Kumar Singh, Rahul Garg, Moderator
DISAGREED WITH
Rahul Garg
Argument 4
A national supercomputing mission can seed AI infrastructure and drive long‑term innovation (Thomas Zacharia)
EXPLANATION
Thomas argues that a dedicated national supercomputing mission can provide the foundational compute resources needed for AI research and industrial applications, thereby catalyzing sustained innovation across the country.
EVIDENCE
He referenced India’s “supercomputing mission” and compared it to China’s 20-25-year intentional build-up of HPC that later powered AI adoption [119-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s supercomputing mission is cited as a catalyst for AI research and industrial applications, analogous to China’s HPC build-up, in the Global Power Shift analysis [S1].
MAJOR DISCUSSION POINT
Supercomputing mission as AI catalyst
AGREED WITH
Jaya Jagadish, Moderator
DISAGREED WITH
Rahul Garg
Argument 5
PPP frameworks like the “Genesis” project can de‑risk large‑scale R&D, focus on lighthouse problems, and align academia, labs, and industry (Thomas Zacharia)
EXPLANATION
Thomas describes the Genesis project as a model where government funds are used to bring together academia, national labs, and industry to tackle grand‑challenge problems, thereby de‑risking research while fostering open collaboration.
EVIDENCE
He explained that the Genesis Project “aligns public and private partnership, invests government resources to bring academia, national laboratories, and private sector to identify lighthouse problems” and supports compute infrastructure and software stacks [225-229].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Genesis project, a public-private partnership that aligns academia, national labs and industry around lighthouse problems, is described in the Global Power Shift report [S1].
MAJOR DISCUSSION POINT
Genesis PPP for R&D de‑risking
DISAGREED WITH
Vivek Kumar Singh
Argument 6
AMD embeds energy‑efficiency goals in product design and acknowledges the need for continual humility and course‑correction on sustainability (Thomas Zacharia)
EXPLANATION
Thomas states that AMD designs its products with explicit energy‑efficiency targets to flatten the energy curve, while recognizing that sustainability is an ongoing journey that requires humility and iterative improvement.
EVIDENCE
He said “we design our products with a goal, explicit goal of flattening the energy curve” and admitted “we are not going to get everything right” but must continue to move forward [282-289].
MAJOR DISCUSSION POINT
Sustainability embedded in product design
Moderator
1 argument · 97 words per minute · 981 words · 602 seconds
Argument 1
Aligning policy, industry, research, and infrastructure is essential to translate talent into global‑competitive products (Moderator)
EXPLANATION
The Moderator stresses that deliberate coordination among government policy, industrial capabilities, academic research, and infrastructure investment is required to turn India’s talent pool into globally competitive AI and semiconductor products.
EVIDENCE
He summarized that “AI leadership will not really happen by accident. It will require a deliberate alignment across policy, industry, research, and infrastructure” and called for “decisive execution, moving with clarity and with urgency” [255-262].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Deliberate alignment across policy, industry, research and infrastructure is identified as a prerequisite for AI leadership in the Global Power Shift briefing [S1].
MAJOR DISCUSSION POINT
Cross‑sector alignment for AI leadership
AGREED WITH
Vivek Kumar Singh, Thomas Zacharia, Rahul Garg
Agreements
Agreement Points
A coordinated, holistic ecosystem (silicon, software, systems, policy, research, infrastructure) is essential for AI leadership.
Speakers: Jaya Jagadish, Thomas Zacharia, Moderator
Integrated approach: AI leadership requires coordinated silicon, software, systems, and policy (Jaya Jagadish)
A national supercomputing mission can seed AI infrastructure and drive long‑term innovation (Thomas Zacharia)
Aligning policy, industry, research, and infrastructure is essential to translate talent into global‑competitive products (Moderator)
All three speakers stress that AI leadership cannot rely on a single pillar; it demands simultaneous development of hardware, software, system integration, supportive policy and research infrastructure [32-34][119-124][255-262].
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors policy calls for a deliberate alignment across silicon, software, systems, policy, research and infrastructure to achieve AI leadership, as articulated in the Global Power Shift report on India’s AI ambitions [S28] and reinforced by broader strategic guidance on coordinated AI ecosystems [S43].
Public‑private partnership and substantial financing are critical to scale AI and semiconductor capabilities.
Speakers: Vivek Kumar Singh, Thomas Zacharia, Rahul Garg, Moderator
Massive government funding (₹10,000 crore) and a systematic AI mission create scale and credibility (Vivek Kumar Singh)
Capital is beginning to flow, though still uneven; public‑private de‑risking can broaden the base (Thomas Zacharia)
Growing demand, political will and >$100 bn private‑capital commitments are enabling manufacturing expansion (Rahul Garg)
Aligning policy, industry, research, and infrastructure is essential to translate talent into global‑competitive products (Moderator)
The panel concurs that large‑scale funding, both public (AI mission, tax incentives) and private (over $100 billion in commitments), combined with PPP de‑risking mechanisms, is needed to build a credible AI/semiconductor ecosystem [56][216-221][214-218][255-262].
India’s abundant talent pool provides a fast‑follower advantage that must be leveraged through coordinated action.
Speakers: Jaya Jagadish, Rahul Garg, Thomas Zacharia
India is well‑poised with engineering talent, silicon design strength and ecosystem partners (Jaya Jagadish)
India’s large talent pool gives it an extreme fast‑follower advantage; scaling this globally requires public‑private effort (Rahul Garg)
India’s talent and ambition can be directed toward strategic capabilities (Thomas Zacharia)
All three highlight that India’s strong engineering and scientific talent enables rapid adoption of new technologies, but realizing global leadership will require coordinated public-private strategies [38-39][214-215][239-240].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s talent advantage is highlighted as a key strength in the Global Power Shift analysis of India’s AI rise [S28] and in discussions about bold national AI strategies that leverage human capital across many languages and domains [S43].
Developing indigenous IP and nurturing local startups are essential for a sovereign AI ecosystem.
Speakers: Jaya Jagadish, Thomas Zacharia, Vivek Kumar Singh
Development of local startups and indigenous IP is essential for a sovereign AI ecosystem (Jaya Jagadish)
Private sector must build and protect its own IP while leveraging global collaborations (Thomas Zacharia)
Credibility stems from large‑scale deployments and genuine IP ownership, not merely announcements (Vivek Kumar Singh)
The speakers agree that owning IP and fostering home-grown startups are cornerstones of credibility and sovereignty in AI and semiconductors [84-86][98-100][56-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for indigenous IP and startup ecosystems echo the emphasis on technological sovereignty and local infrastructure for innovation found in the Global Alliance for AI report [S33] and in India’s NDIA mission to build sovereign frontier models [S50].
Similar Viewpoints
Both note that COVID‑related supply‑chain disruptions created a policy push (tax holidays, AI fund) and heightened private‑capital interest in building domestic manufacturing capacity [53-55][70-73].
Speakers: Vivek Kumar Singh, Rahul Garg
Post‑COVID supply‑chain shocks have generated strong political will and incentives (tax holidays, AI fund) to localise production (Vivek Kumar Singh)
Pandemic‑induced supply‑chain disruptions have spurred demand for domestic manufacturing and capital inflows (Rahul Garg)
Both propose a realistic, incremental path that leverages niche capabilities and mid‑range manufacturing rather than chasing the most advanced process nodes [102-108][153-163].
Speakers: Thomas Zacharia, Rahul Garg
India should focus on niche, high‑value supply‑chain capabilities (co‑package optics, AI infrastructure) rather than leading‑edge fabs (Thomas Zacharia)
Companies should adopt vertical‑stack models, develop mid‑zone fabs and ancillary ecosystems (Rahul Garg)
Unexpected Consensus
Recognition that sustainability must be embedded in product design despite limited discussion elsewhere.
Speakers: Thomas Zacharia, Moderator
AMD embeds energy‑efficiency goals in product design and acknowledges the need for humility and continual improvement (Thomas Zacharia)
Moderator raises a question on making sustainability a core design choice for AI and semiconductor decisions (Moderator)
Although sustainability was not a primary focus for most panelists, both Thomas and the Moderator converge on the view that sustainability should be a foundational design principle, highlighting an unexpected alignment [282-289][274-277].
POLICY CONTEXT (KNOWLEDGE BASE)
The need to embed sustainability from the design stage is directly reflected in the guidance that sustainability should be designed into products rather than treated as an afterthought [S46].
Overall Assessment

The panel shows strong convergence on four major themes: (1) the need for a coordinated, holistic AI ecosystem; (2) the importance of sizable public and private financing through PPP mechanisms; (3) leveraging India’s deep talent pool as a fast‑follower advantage; and (4) building indigenous IP and startup ecosystems for sovereign capability. Additional nuanced agreements appear on niche supply‑chain focus and the emerging emphasis on sustainability.

High consensus across speakers on strategic direction and required enablers, suggesting a unified policy and industry roadmap is feasible. The alignment reinforces the urgency of implementing integrated funding, talent development, and IP strategies to achieve AI and semiconductor leadership.

Differences
Different Viewpoints
Strategic focus: niche supply‑chain specialization vs building vertical‑integrated mid‑zone fabs
Speakers: Thomas Zacharia, Rahul Garg
India should target capability rather than leading‑edge fabs, focusing on niche supply‑chain areas such as co‑package optics and AI‑infrastructure components (Thomas Zacharia)
Companies should adopt vertical‑stack models, develop mid‑zone fabs, and nurture ancillary ecosystems (clean‑room, packaging, chemicals) (Rahul Garg)
Thomas argues that India can contribute to AI and semiconductor ecosystems by specializing in high-value niche components (e.g., co-package optics) without pursuing leading-edge 2 nm fabs, whereas Rahul contends that India needs to build its own mid-range fabs and vertically integrate the entire supply chain, including clean-room and packaging ecosystems, to achieve capability. [102-108] vs [153-163]
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between niche supply-chain specialization and vertical integration mirrors discussions on moving developing countries up the global supply-chain ladder and promoting balanced development beyond lower-tier roles [S40][S41].
Capital mobilisation and the role of government in de‑risking versus direct large‑scale funding
Speakers: Thomas Zacharia, Rahul Garg
Capital is beginning to flow, though still uneven; public‑private de‑risking can broaden the base (Thomas Zacharia)
Growing demand, political will, and rising private‑capital commitments (>$100 bn) are enabling manufacturing expansion (Rahul Garg)
Thomas says the government should de-risk ventures through public-private partnerships without direct subsidies, while Rahul points to massive private-capital commitments already flowing and calls for coordinated public-private pools to raise billions, indicating a more direct funding role for the state. [216-221] vs [70-78] and [214-218]
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over de-risking capital versus direct funding reflect policy recommendations on investment de-risking, stable fiscal arrangements and rule-of-law mechanisms to encourage long-term investments [S35] and the broader role of government policy in reducing investment risk [S36].
Openness versus strategic autonomy in security‑sensitive technology domains
Speakers: Vivek Kumar Singh, Thomas Zacharia
Balancing strategic autonomy for security‑sensitive sectors with open global collaboration is crucial (Vivek Kumar Singh)
PPP frameworks like the “Genesis” project can de‑risk large‑scale R&D, focus on lighthouse problems, and align academia, labs, and industry (Thomas Zacharia)
Vivek stresses the need for clear rules on what technologies must be indigenised for strategic autonomy, while Thomas promotes an open, collaborative public-private model (Genesis) that de-risks research without imposing strict indigenisation boundaries. [141-147] vs [225-229]
POLICY CONTEXT (KNOWLEDGE BASE)
The openness versus strategic autonomy dilemma parallels the European tech sovereignty discourse, where regulatory philosophy and the balance between openness and autonomy are contested [S38].
Speed‑driven fast‑follower model versus long‑term capability‑building via a supercomputing mission
Speakers: Rahul Garg, Thomas Zacharia
India’s large talent pool gives it a fast‑follower advantage; scaling this to a global platform requires coordinated public‑private effort (Rahul Garg)
A national supercomputing mission can seed AI infrastructure and drive long‑term innovation (Thomas Zacharia)
Rahul emphasizes leveraging India’s talent to quickly adopt and build AI applications as a fast follower, whereas Thomas advocates building sovereign capability through a dedicated supercomputing mission that underpins long-term AI innovation, suggesting a more measured, capability-first approach. [214-218] vs [119-124]
POLICY CONTEXT (KNOWLEDGE BASE)
The contrast between a fast-follower approach and a long-term supercomputing capability aligns with India’s NDIA supercomputing and sovereign frontier-model mission, which emphasizes building deep, long-term compute capacity [S50].
Unexpected Differences
Perception of being ‘late to the party’ versus confidence in immediate niche contributions
Speakers: Rahul Garg, Thomas Zacharia
So I think we are kind of late to the party in some sense… (Rahul Garg)
You don’t necessarily have to be at the two nanometer scale… there are critical technology in the deployment at scale of AI infrastructure where India can play a role (Thomas Zacharia)
Rahul frames India as significantly behind the global semiconductor race, implying a need to catch up, while Thomas downplays the need for leading-edge fabs and asserts that India can immediately add value through niche components, making the contrast between a ‘catch-up’ narrative and a ‘ready-to-contribute’ stance unexpected. [153-154] vs [102-108]
Degree of openness in public‑private research collaborations
Speakers: Vivek Kumar Singh, Thomas Zacharia
Balancing strategic autonomy for security‑sensitive sectors with open global collaboration is crucial (Vivek Kumar Singh)
The Genesis Project aligns public and private partnership, invests government resources to bring academia, national laboratories, and private sector to identify lighthouse problems… (Thomas Zacharia)
Vivek calls for clear limits on what should be indigenised, suggesting a more guarded approach, whereas Thomas promotes an open, collaborative PPP model (Genesis) that encourages shared problem-solving without strict indigenisation rules, an unexpected tension between protectionist and open-innovation philosophies. [141-147] vs [225-229]
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on openness in public-private research echo multistakeholder partnership models that promote open-source development and shared datasets, as highlighted in the multistakeholder AI ecosystem framework [S49].
Overall Assessment

The panel displayed broad consensus on the importance of talent, coordinated ecosystems, and public‑private collaboration, but diverged sharply on strategic focus (niche vs vertical integration), the mechanism for capital mobilisation (de‑risking PPP versus large pooled funding), the balance between strategic autonomy and open collaboration, and whether India should pursue a fast‑follower catch‑up path or a longer‑term capability‑building route.

Moderate to high: while the participants share common goals (AI leadership, sustainability, talent development), their prescriptions differ substantially, indicating that policy design will need to reconcile competing priorities and balance fast‑follower ambitions with strategic, niche‑focused investments.

Partial Agreements
All three agree that AI leadership cannot be achieved in isolation and requires coordinated action across technology, policy, and public‑private mechanisms, even though they differ on the exact balance between openness, strategic autonomy, and partnership models. [32-34] vs [116-119] vs [141-147]
Speakers: Jaya Jagadish, Thomas Zacharia, Vivek Kumar Singh
Integrated approach: AI leadership requires coordinated silicon, software, systems, and policy (Jaya Jagadish)
So I think this is a great area for public‑private partnership, in my view. (Thomas Zacharia)
Balancing strategic autonomy for security‑sensitive sectors with open global collaboration is crucial (Vivek Kumar Singh)
The speakers concur that India’s abundant talent is a core asset that must be nurtured and leveraged through education, startup ecosystems, and coordinated policy‑industry‑research frameworks, even though they propose different mechanisms (startup IP, fast‑follower scaling, strategic autonomy, broad alignment). [38-40] vs [214-218] vs [141-147] vs [255-262]
Speakers: Jaya Jagadish, Rahul Garg, Vivek Kumar Singh, Moderator
Development of local startups and indigenous IP is essential for a sovereign AI ecosystem (Jaya Jagadish)
India’s large talent pool gives it a fast‑follower advantage; scaling this to a global platform requires coordinated public‑private effort (Rahul Garg)
Balancing strategic autonomy for security‑sensitive sectors with open global collaboration is crucial (Vivek Kumar Singh)
Aligning policy, industry, research, and infrastructure is essential to translate talent into global‑competitive products (Moderator)
Takeaways
Key takeaways
AI leadership requires a coordinated ecosystem of silicon, software, systems and policy; no single element is sufficient.
India’s AI mission and large government funding (≈₹10,000 crore) provide scale and credibility, but true credibility comes from large‑scale deployments and indigenous IP ownership.
Manufacturing depth can be built by focusing on niche, high‑value supply‑chain segments (e.g., co‑package optics, AI‑infrastructure components) rather than trying to own the most advanced fabs immediately.
Public‑private partnerships (e.g., the “Genesis” model) are essential to de‑risk large R&D programmes, align academia, national labs and industry, and address lighthouse problems.
Balancing strategic autonomy for security‑sensitive domains with open global collaboration is critical for sustainable growth.
India’s large talent pool and emerging fast‑follower capability can be leveraged through aggressive skilling, free AI‑driven learning platforms, and university incubators.
Sustainability must be embedded as a core design principle in semiconductor and AI hardware, with humility and continuous course‑correction.
Resolutions and action items
Encourage public‑private de‑risking mechanisms for deep‑tech R&D (e.g., Genesis‑style projects) to broaden private capital participation.
Prioritize development of mid‑zone fabs and ancillary ecosystems (clean‑room, packaging, chemicals) while targeting niche supply‑chain opportunities such as co‑package optics.
Accelerate IP creation and protection by supporting local startups and facilitating pathways from research to productisation.
Scale existing government skilling initiatives (e.g., NASSCOM’s FutureSkills Prime) and promote AI‑driven learning resources to up‑skill the workforce for emerging roles.
Formulate a strategic autonomy framework that delineates sectors requiring sovereign capability versus those open to global collaboration.
Integrate energy‑efficiency and sustainability targets into semiconductor product design and AI infrastructure planning.
Unresolved issues
How to ensure broad‑based, long‑term private capital flows that match the scale of global competitors.
Specific timeline and roadmap for moving from niche supply‑chain participation to more advanced fab capabilities.
Concrete mechanisms for IP transfer and commercialization from academia to industry.
Detailed policies for balancing national security concerns with openness in collaborative research.
Metrics and governance structures to monitor progress on sustainability goals within the semiconductor supply chain.
Suggested compromises
Adopt a strategic autonomy approach: retain sovereign control over security‑critical technologies while remaining open to collaboration on non‑sensitive domains.
Use public‑private partnerships to de‑risk projects without direct government subsidies to private firms, sharing risk and reward.
Encourage vertical‑stack business models initially, with a gradual transition toward horizontal ecosystem participation as capabilities mature.
Thought Provoking Comments
AI leadership is not achieved by a single pillar; silicon, software, systems, and policy must all come together. No one aspect can get us there alone.
Sets a holistic framework for the entire discussion, emphasizing interdisciplinary collaboration rather than siloed efforts.
Guided the panel to address each domain (design, manufacturing, policy, talent) and prompted subsequent speakers to position their insights within this integrated view.
Speaker: Jaya Jagadish
We don’t need to chase the 2 nm GPU/CPU node. India can create value in niche areas like co‑packaged optics and other critical interconnect technologies that are not globally abundant.
Redirects the conversation from the daunting goal of leading‑edge fabs to realistic, high‑impact niches where India can compete now.
Shifted the focus from a ‘catch‑up’ narrative to a ‘strategic specialization’ narrative, leading the panel to discuss supply‑chain gaps and opportunities beyond traditional chip scaling.
Speaker: Thomas Zacharia
The US and China built their AI capabilities on long‑term supercomputing missions. India’s supercomputing mission should be planned not where we are today, but where the ecosystem will be when the infrastructure is deployed.
Draws a clear lesson from other nations, linking HPC investment to future AI leadership and emphasizing forward‑looking planning.
Prompted Vivek and others to consider policy timelines and the importance of aligning research, infrastructure, and industrial capacity well ahead of deployment.
Speaker: Thomas Zacharia
We need strategic autonomy: decide which components we must indigenize for security, and where we can stay open to global collaboration.
Addresses the tension between national security and openness, offering a nuanced policy approach rather than an all‑or‑nothing stance.
Steered the discussion toward concrete policy mechanisms, influencing later remarks about public‑private partnerships and the need for clear rules on indigenization.
Speaker: Vivek Kumar Singh
India has become an extreme fast‑follower; the lag from global product launch to Indian adoption is now weeks. The next move is to scale ambition beyond the domestic market and mobilize billions of dollars of public‑private capital.
Highlights a unique competitive advantage (speed of adoption) while identifying the missing piece—global scale and capital depth.
Reoriented the conversation from capability building to financing and market expansion, leading Thomas to discuss de‑risking mechanisms and large‑scale public‑private funding.
Speaker: Rahul Garg
The Genesis Project is a model where government invests in compute infrastructure, software stacks, and lighthouse problems in an open, collaborative framework, de‑risking the ecosystem without directly subsidising private ventures.
Introduces a concrete, actionable framework for public‑private partnership that balances risk, innovation, and market independence.
Provided a tangible policy proposal that other panelists referenced when talking about funding, talent pipelines, and scaling ambition, moving the dialogue from abstract ideas to a potential implementation plan.
Speaker: Thomas Zacharia
Sustainability should be a core design goal, not a trade‑off. We must be humble, acknowledge we won’t get everything right, and continuously course‑correct as we flatten the energy curve of our products.
Elevates the discussion to include environmental responsibility as integral to AI and semiconductor strategy, linking technical decisions to broader societal impact.
Added a new dimension to the conversation, prompting participants to consider long‑term ecological implications alongside economic and strategic goals.
Speaker: Thomas Zacharia (in response to audience question)
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved it from a broad, aspirational overview to a focused, actionable roadmap. Jaya’s integrative framing set the stage, while Thomas’s emphasis on niche technological strengths and forward‑looking supercomputing strategy redirected attention to realistic competitive edges. Vivek’s call for strategic autonomy and Thomas’s Genesis partnership model provided concrete policy pathways, and Rahul’s insight on fast‑following coupled with the need for massive public‑private capital highlighted the financing challenge. Finally, the sustainability comment broadened the agenda to include environmental stewardship. Together, these comments created turning points that deepened analysis, introduced new topics, and aligned the panel around specific, implementable ideas.

Follow-up Questions
What concrete steps should India take to build its intellectual foundation for AI and semiconductor leadership?
Establishing a strong knowledge base is essential for long‑term competitiveness and informs education, research, and talent policies.
Speaker: Jaya Jagadish
What specific manufacturing depth and supply‑chain resilience investments are needed for AI hardware in India?
Identifying targeted investments will help create a robust domestic ecosystem capable of scaling AI compute resources.
Speaker: Jaya Jagadish
What actions are required to achieve a credible sovereign AI capability for India?
Sovereignty over data and AI services is a strategic priority; clear actions are needed to translate policy into practice.
Speaker: Jaya Jagadish
How effective are the recent tax holidays for data centers and the AI Coach platform in accelerating AI adoption?
Assessing policy impact will reveal whether incentives are driving the intended scale of AI infrastructure and applications.
Speaker: Vivek Kumar Singh
What is the detailed roadmap and milestones for the India AI Mission (₹10,000 crore over five years) across its seven pillars?
A transparent timeline will enable tracking progress and aligning public‑private efforts.
Speaker: Vivek Kumar Singh
What strategies can India adopt to acquire and retain semiconductor IP ownership to enhance credibility?
IP ownership is critical for a self‑reliant semiconductor ecosystem and for attracting global partnerships.
Speaker: Vivek Kumar Singh
What are the sources, terms, and timelines of the reported $100 billion+ private capital commitments for AI deep‑tech and data centers in India?
Understanding the financing landscape is vital to gauge whether capital is sufficient and sustainable for long‑term projects.
Speaker: Rahul Garg
Which niche technology segments (e.g., co‑packaged optics, interconnects) offer realistic value‑creation opportunities for India in AI infrastructure supply chains?
Targeting specific high‑impact areas can allow India to contribute without needing to master leading‑edge process nodes.
Speaker: Thomas Zacharia
What is the feasibility of India developing a supply chain for advanced interconnect optics, including potential partnerships with the US, Japan, and Malaysia?
Exploring partnership models will clarify how India can fill global gaps and build domestic capability.
Speaker: Thomas Zacharia
What is the structure, funding model, and implementation plan for the ‘Genesis Project’ public‑private partnership?
A clear description will help replicate successful R&D collaboration frameworks and attract stakeholder buy‑in.
Speaker: Thomas Zacharia
How should India balance strategic autonomy with openness and global collaboration in AI and semiconductor sectors?
Finding the right equilibrium is crucial for national security while still benefiting from international innovation.
Speaker: Vivek Kumar Singh
Should Indian semiconductor manufacturers pursue vertical‑stack integration or evolve toward horizontal ecosystem models, and what are the trade‑offs?
Understanding optimal industry structure will guide investment decisions and partnership strategies.
Speaker: Rahul Garg
What mechanisms can help Indian startups transition from domestic focus to global platforms, especially regarding capital and market access?
Scaling globally is necessary for India to be a top AI/app developer; identifying pathways will inform policy and investor support.
Speaker: Rahul Garg
How can massive skilling and reskilling programs (e.g., NASSCOM’s Future Skill Prime) be measured for effectiveness in meeting future AI job demands?
Evaluating program outcomes ensures that talent pipelines align with industry needs and reduce skill gaps.
Speaker: Vivek Kumar Singh
How can sustainability be embedded as a core design principle in semiconductor and AI hardware development, with clear metrics and accountability?
Integrating sustainability is essential to meet climate goals while maintaining competitive technology development.
Speaker: Audience (moderator)
What data is needed to assess the environmental impact of AI compute infrastructure in India and what mitigation pathways should be pursued?
Quantifying impact will enable targeted policies and industry practices for greener AI deployment.
Speaker: Audience (moderator)
What public‑private de‑risking mechanisms can be employed without direct subsidies to stimulate private sector investment in AI and semiconductor R&D?
Designing effective de‑risking tools will attract private capital while preserving fiscal responsibility.
Speaker: Thomas Zacharia
Which ‘lighthouse’ or grand‑challenge problems should be prioritized under India’s Genesis‑style initiative to maximize AI and semiconductor breakthroughs?
Focusing on high‑impact challenges ensures efficient use of resources and accelerates breakthrough innovations.
Speaker: Thomas Zacharia
How do recent ease‑of‑doing‑business reforms affect the translation of university research into marketable products in India?
Assessing policy impact will reveal bottlenecks and opportunities for improving the innovation pipeline.
Speaker: Vivek Kumar Singh
What are the requirements and timeline for India to establish a national supercomputing mission comparable to those in the US and China?
A clear plan is needed to build world‑class compute infrastructure that underpins AI research and industrial applications.
Speaker: Thomas Zacharia

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

The Role of Government and Innovators in Citizen-Centric AI


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel, comprising leaders from Mistral AI, DeepL, the Barcelona Supercomputing Center and the European Commission, discussed how large language models and related AI infrastructure can be leveraged to transform public services in India and the EU [6-10][12-13]. Arthur Mensch explained that generative AI primarily adds value by automating fragmented, knowledge-intensive processes such as procurement and report writing, thereby addressing talent shortages and legacy IT challenges in government administrations [21-24]. He illustrated this with a project for France Travail that uses AI to match job seekers with employers, showing how a horizontal platform can be built around concrete use cases [25-26]. Jarek Kutylowski emphasized that multilingualism is a strength rather than a problem, and that AI-driven translation and real-time spoken-language tools can enable citizens to interact with public offices in many languages, though the complexity of legislation and service design remains high [29-34]. Matteo Valero described the European AI Factory concept, which co-locates hardware, software and skilled personnel to provide free AI services, and highlighted the role of EuroHPC supercomputers in supplying the compute power needed for such public-sector applications [50-53][60-63]. He noted that delivering personalized, fast information to citizens is a key success metric for AI-enabled public services [65-68]. Roberto Viola introduced the “Solow paradox,” observing that past IT investments often failed to raise productivity because new systems ran in parallel with old ones, but AI can create entirely new processes that break this pattern [92-98][100-107]. He argued that without empowering public-sector staff and redesigning bureaucratic workflows, AI adoption will not yield productivity gains, stressing the need for reskilling and a shift from individual to collective delegation [112-118][127-138]. 
Viola further suggested that policy must be aligned with digital transformation, allowing AI agents to replace or augment traditional bureaucrats and enabling citizen-centric services such as digital identity and automated attestations [186-194][197-202]. The speakers agreed that strong public-private partnerships, open-source models and joint European-Indian alliances are essential to scale AI infrastructure and foster innovation in both regions [204][205-207][209-215]. They also highlighted the importance of early education and training to create a generation of developers who naturally integrate AI into their workflows, as demonstrated by Mistral’s Vibe tool being quickly adopted by younger coders [149-152][158-161]. Overall, the discussion concluded that while technical capacity and multilingual AI tools are available, realizing their public-sector impact will require coordinated policy, organizational redesign, and sustained collaboration between Europe and India [186-194][204-207][209-215]. The panel thus underscored that the future of AI in government hinges on joint investment, reskilling and innovative governance models to turn AI potential into measurable public benefits [217-226].


Keypoints


Major discussion points


AI can boost public-sector efficiency by automating fragmented, knowledge-intensive processes.


Arthur Mensch described the “AI for citizens” programme, emphasizing automation of tasks such as procurement and job-matching to relieve talent pressure and legacy-IT issues [21-26].


Multilingualism is both a challenge and an opportunity for AI-driven public services.


Jarek Kutylowski highlighted how multilingual societies (e.g., India, Canada, Switzerland) need real-time written and spoken translation, and how frontier language models can bridge the communication gap, though legislation translation adds complexity [29-36].


European supercomputing capacity and the “AI Factory” concept provide the hardware-software backbone for large-scale AI deployment.


Matteo Valero traced the evolution from early supercomputers to EuroHPC, then to AI factories and gigafactories that co-locate compute power, skills, and technology-transfer expertise to serve administrations and SMEs [50-63].


Adoption faces a productivity paradox and requires reskilling, organisational redesign, and supportive policy.


Roberto Viola cited the “Solow paradox” – investment in IT/AI often yields little productivity unless processes are re-engineered – and stressed that bureaucrats must be empowered, while Arthur Mensch detailed the need to move from individual-productivity gains to collective-process automation and extensive reskilling of staff [92-118][127-146].


International public-private partnerships are essential to scale AI in the Global South.


The opening remarks called for deeper India-EU collaboration [2]; later speakers reiterated the value of joint research, shared infrastructure, and alliances (e.g., Barcelona Supercomputing Center with Indian institutes) to accelerate open-source, multilingual AI solutions [204][205-207].


Overall purpose / goal


The panel was convened to explore how large language models and related AI technologies can be responsibly and effectively integrated into public-sector operations, especially in India and the broader Global South, by building technical capacity, addressing multilingual and productivity challenges, and fostering cross-regional collaboration between governments, academia, and industry.


Overall tone


The discussion began with a formal and optimistic tone, emphasizing partnership and opportunity. As speakers delved into practical obstacles – multilingual complexity, the productivity paradox, and the need for massive reskilling – the tone shifted to candid and reflective, acknowledging systemic inertia. The conversation concluded on a hopeful and inspirational note, stressing collective responsibility and the potential for multiple future pathways shaped by joint action.


Speakers

Arthur Mensch


– Areas of expertise: Generative AI, large language models, AI applications for public sector


– Role/Title: Co-founder and Chief Executive Officer of Mistral AI


Matteo Valero


– Areas of expertise: Computer architecture, high-performance computing, AI-enabled supercomputing


– Role/Title: Professor of Computer Architecture at the Technical University of Catalonia; Founding Director of the Barcelona Supercomputing Center


Lucilla Sioli


– Areas of expertise: AI policy, public-sector digital transformation, moderation of high-level panels


– Role/Title: Panel moderator/host; senior role at the European Commission (referred to as “my boss” by Roberto Viola) [S6]


Jarek Kutylowski


– Areas of expertise: AI-driven translation, multilingual language technologies, AI agents for public services


– Role/Title: Founder and Chief Executive Officer of DeepL [S7][S8]


Roberto Viola


– Areas of expertise: Digital policy, AI strategy, public-sector AI adoption, supercomputing infrastructure


– Role/Title: Director General of DigiConnect; Director General for Digital Policies at the European Commission [S9]


Speaker 1


– Areas of expertise: (not specified)


– Role/Title: Opening speaker / moderator introducing the panel (no specific title mentioned)


Additional speakers:


(none identified beyond the list above)


Full session report: Comprehensive analysis and detailed insights

Opening & Panel Introduction – The session opened with a call for stronger India-EU cooperation to build AI capacity for the Global South [1-2][5-12]. After a brief welcome and a group photograph, moderator Lucilla Sioli introduced a distinguished panel: Vice-President and Secretary-General Krishnan (Ministry of Electronics & Information Technology, India), Arthur Mensch (co-founder & CEO, Mistral AI), Jarek Kutylowski (founder & CEO, DeepL), Prof. Matteo Valero (Barcelona Supercomputing Centre), and Roberto Viola (Director-General, DigiConnect, European Commission) [7-13][15-17].


Mistral – “AI for Citizens” – Mensch described the “AI for Citizens” programme, whose first pillar is efficiency: using generative AI to automate fragmented, knowledge-intensive public-sector processes such as procurement, report writing and legacy-IT integration [21-24]. He gave the France Travail pilot as an example, where AI-driven matching of job-seekers and employers reduced manual effort [25-26].


Multilingualism & Translation – Kutylowski framed multilingualism as a “beautiful” societal feature, citing India, Canada and Switzerland. He explained that frontier language models can provide real-time written and spoken translation, enabling citizens to converse with office staff in their own language, while noting challenges such as translating legislation [29-36].


European Compute Backbone – Valero traced supercomputing from Seymour Cray’s first machine to today’s EuroHPC network, which now hosts six of the world’s top-15 supercomputers [50-53]. He introduced the AI Factory and the emerging gigafactory concept: co-located hubs of compute, software and AI talent that offer free services to citizens and SMEs, aiming for personalised, fast public services [60-67][82-90].


Policy & the Solow Paradox – Viola invoked the Solow paradox – the observation that past IT investments often failed to raise productivity because new digital tools ran in parallel with legacy processes [92-98][100-107]. He argued AI can break this paradox only by creating new man-machine workflows rather than merely digitising existing ones [112-118]. He warned against a “digital bureaucracy” and called for empowering public-sector staff and organisational redesign [186-194][197-202].


Mistral’s Vibe & Delegation – Mensch explained that Vibe extends beyond chatbots to delegate whole processes (e.g., procurement) by designing end-to-end automation that removes human bottlenecks [127-129]. He stressed the need to re-organise staff so that former “menial” workers become managers of AI-operated workflows, and highlighted a cultural shift toward strong delegation, noting that younger developers adopt AI-assisted coding quickly while mid-career engineers need substantial reskilling [149-161].


Additional Points from the Transcript


Destination Earth – Viola mentioned the climate digital twin “Destination Earth”, a free-to-test AI model that simulates past and future climate at high resolution [92-98].


Free tools – Both Mistral and DeepL offer free web-based demos that citizens can try [21-24][29-35].


Matteo’s brief comment – When asked about the main applications of the compute backbone, Valero replied succinctly: “teach the young people to understand the problem to propose solution” [??].


Closing Remarks – Mensch highlighted the importance of public-research partnerships to accelerate innovation [204-206]. Valero stressed AI’s dual-use nature and advocated an EU-India alliance leveraging EuroHPC and Barcelona’s ties with Indian institutes [205-207]. Kutylowski called for strong business-government collaboration to bring AI value to the public sector [209-215]. Viola concluded that there is no single predetermined future for AI; the thousands of summit participants will collectively write the next chapter [217-226].


Key take-aways


– AI can markedly improve public-sector efficiency when whole processes are automated and staff are trained as delegators [127-138].


– Frontier language models can turn multilingualism into an asset, enabling real-time written and spoken translation for citizens [29-36].


– Europe’s EuroHPC-backed AI factories provide the compute backbone needed for large-scale public AI, and offering these services for free lowers entry barriers [50-53][60-67][82-90].


– The Solow paradox warns that mere digitisation does not guarantee productivity gains without organisational redesign [92-118].


– Large-scale reskilling programmes are required to develop a generation of civil servants comfortable with AI-augmented workflows [149-161].


– Public-private partnerships and an EU-India alliance are viewed as essential accelerators [2][204-207].


– Open-source, freely accessible AI tools (Mistral, DeepL, Destination Earth) are strategic assets for governments with limited budgets [21-26][29-35][92-98].


Session transcript: Complete transcript of the session
Speaker 1

precisely this, how do we sort of build capacity in order for this technology to be applied significantly better. And in the days to come, I would really love to see a day when India and the EU collaborate much more closely to make this happen, not just in India, but all over the global south. Thank you very much for having me. Thank you very much. Don’t go away, because now I’m going to call the panel. We have a distinguished panel today, but we would like to take a picture first. So if I can invite Vice President and Secretary Krishnan to stand here, and then I invite Arthur Mensch. He’s the co-founder and CEO of Mistral AI.

He’s the CEO of Mistral AI, if you can just stand next to the Secretary, which is a European company developing large language models, but also Jarek Kutylowski, who is the founder and CEO of a German company called DeepL, which works on language technologies. Matteo Valero, who is a professor of computer architecture at Technical University of Catalonia and the founding director of the Barcelona Supercomputing Center. And from the European Commission, I’m pleased to announce Roberto Viola. He’s the director general of DigiConnect. And he plays a pivotal role. He’s the director general for our digital policies. Okay, so as I said, it’s a very distinguished panel from the European Union. And I would like to thank all of you for being here to participate.

I’ll start with Arthur from Mistral. I repeat that he comes from Mistral, which is a European company and one of the main large language model developers. In your opinion, how can LLMs or general-purpose models in general reshape the public sector? And as a developer, how do you work with governments to apply it in the public sector?

Arthur Mensch

I’m the co-founder of a company called Mistral, and we effectively train language models and perception models, and we then use them to create applications for businesses and for states. Typically the model is never enough to actually provide value for the states we work with. We have a program called AI for Citizens that has multiple pillars, but when we work with states the first thing we work on is efficiency. What generative AI allows you to do is to delegate tasks in general and to automate certain processes that can be fairly complex, that can be fragmented, that can involve multiple people, that can involve multiple tools, that can deal with IT legacy. And so a state is not different, an administration is not different from an enterprise in that respect: they have IT problems, they have processes that are sometimes inefficient, they have pressure on talent because there are a lot of people that are actually retiring. So knowledge is a very big problem, and management of that knowledge is a very big problem.

The kind of things we do is related to that. So we deploy our horizontal platform and we create use cases. We work backward from use cases that are around procurement, that are around writing reports. The part that is visible to the citizens themselves is building public services on top of artificial intelligence. And so one example is we worked with France Travail, which is an employment agency in France, to actually help with the matching of employers and of people seeking jobs. And oftentimes people would just connect and they’re looking

Lucilla Sioli

Thanks a lot. I now turn to Jarek, founder of DeepL, which has been a very important part of the project. Jarek has been working since 2017 on AI-based translation tools. And so there is a lot of linguistic diversity in India as well as in the European Union, and so how can AI language models help to overcome this multilingualism issue? I say issue, of course we consider it also a benefit, but in administration it can sometimes be a challenge.

Jarek Kutylowski

I would definitely try to not characterize it as an issue. I think it’s something that’s actually pretty beautiful about a lot of the countries that are so multilingual, and there are a lot of differences in how deeply multilingualism is embedded in different countries and in different societies. I think here in India everybody understands it extremely well, but it’s not the only country in the world. There are countries like Canada, there are countries like Switzerland, with whom we’re working a lot in the public sector, that have this intrinsic necessity of being able to connect to their citizens in very many languages, and where partially that communication is even embedded as a part of their constitution. And here, those countries have been struggling over the years, maybe as you have indicated, on how to actually make this happen.

And AI and those kinds of frontier models that we build and the applications on top of them that are specifically tailored to bridging this communications gap, they help a lot. Nowadays, not only in written language, but also in spoken language, enabling real-time conversations maybe with citizens in a setting when they come up into an office and want to get… certain service done. So a lot of options there, but also a lot of complexity as those use cases that governments have really differ very, very much based on what you’re doing. It’s another challenge to translate legislation into different languages. It’s another challenge if you want to enable those real-time conversations with citizens. Quite a lot of exciting problems to solve.

Lucilla Sioli

Thank you very much. Now I turn to Matteo Valero. You are also a professor, but you’re also the director of the Barcelona Supercomputing Center. So can you maybe explain, you’re also in an AI factory, what the AI factory does and how it can help the transformation of the public sector and of SMEs?

Matteo Valero

Thank you, Lucila. Good afternoon to everyone. It’s my pleasure to be in India once more. Sorry? Sure. My pleasure to be here. You have an incredible country, believe me. So, thank you for inviting me. And I am going to start 50 years ago when Seymour Cray produced the first supercomputer, no? And this supercomputer increased the speed from 10 to 10 until now, okay? With this computer, we did simulation and we produced better results in science and engineering. In Europe, every country was alone until, thanks to Roberto Viola, we created the EuroHPC. And then, because we had the EuroHPC, we have now a reasonable amount of power in the supercomputers. So, if you look at the top 500, probably out of the first 15, we have 6 in Europe.

And we do science. We do science and this is very good. So now, because of the data, because of the computers, and especially because of the research of these guys and many others, the AI is invading us. It’s changing any activity we have. In my field, I am changing the way we do high-performance processors. In the supercomputing center, let me tell you that we are 1,400, and we have 500 people doing hardware and software, using or designing in topics related with AI. There is no question that now the data, the computers, and the algorithms are dominating the world. So what we could do in Europe, we have the supercomputers, but we need to devote more energy in order to…

get the AI distributed around all activities. So the idea of the European Commission was to create the factories and now the gigafactories. The AI Factory is a platform, an AI platform of hardware and software, but, as important as that, it is co-located where there are people with the skills in AI, people with experience in transferring technology to the society. So the idea of this AI factory, where the service is free and the people are free, is to connect as much as we can with the society to make a better world. This is the target for us. Obviously, there are many, many possible contributions, and one of them is the administration, and obviously how we can make the citizen happy.

If we make the citizen happy, we are successful, okay? And we can make the citizen happy if we provide them with personalized information, accurate and fast. After that, a second question. I will give an example, but I think… So the target for the AI factories and the gigafactories is the same, but competing with the data centers. Because I forgot one thing: what Europe could do is either use the platforms from outside with this data, or create our own platform and use our platform with this data. I think this is the right way to go.

Lucilla Sioli

Thanks a lot. So we have talked about what the models can do, the computing capacity that is made available. Now, Roberto, I would like to ask you, since you have reflected on and designed all of this, how would you now… Mention your words. I’m your boss. Yes, he is. How would you help now facilitate the uptake of AI by the public sector? Because if we have the models, we have the compute capacity or we are building it, and we’re also building more access and more availability of data sets. But how can we make sure that the public sector actually uses AI?

Roberto Viola

Thank you. Thank you, Lucille. Good afternoon, everyone. It’s really, for me, a pleasure to be here and to be together with Lucille, with the three crown jewels of Europe, which are all very much representing what is for us giving out to citizens, society, innovation. Because you can test Mistral on the web for free. You can use DeepL for free on the web and test it and enjoy it. Translation from all Indian languages to European languages, I dare to say, yes. You can test Destination Earth. Destination Earth is the most sophisticated climate digital twin of the world, an AI digital twin. You can replay the climate of the past into the future. You can zoom in in certain areas.

You can have a resolution of 200, 100 meters for extreme weather events, because there are already two twins which are running: the twin of the climate and the twin of extreme weather events. Again, for free on the web. So this is the first point I want to make. There’s an economist, maybe you know the name, Mr. Solow, who expressed with numbers and, I mean, evidence a paradox: the more people invest in IT and software and other infrastructure, the less the productivity. Actually, there’s no… There’s no productivity gain in doing that. So it’s called the Solow paradox, because it’s a paradox: you as a user, me as a user, experience a much better user experience to have a public administration which is more digitized, or a hospital where everything is digital as well, or a doctor which is savvy because it has an AI co-pilot.

But in terms of productivity gain, according to the Solow paradox and the numbers that he has, in a compelling way, put in front of us, there’s no productivity gain. So many economists… and of course, whoever in this room solves this paradox is up for the Nobel Prize in Economics. So, I mean, the challenge is open. So I’m not going to solve it, but I’ll try to answer the question of Lucy. The reason why many have observed this is because normally IT, and that now includes also AI, overlaps what exists. And of course then it becomes very intuitive. Imagine a hospital, I mean, having all the doctors and nurses and everyone still in the traditional process, so doing a bit of paperwork as they do, but also doing it digitally.

And having two systems running in parallel, of course, I mean, you imagine that the productivity doesn’t move much. Now we have seen some changes during COVID. Why? Because people, I mean, were secluded and they were forced, I mean, to use only digital. So in certain areas, sadly, I mean, you saw the productivity was in a way more linearly linked with the use of technology. The European Investment Bank has published an econometric study that shows AI has a productivity increase of 4%. Which is not the stellar numbers, I mean, some of the vendors around say, but compared to the Solow paradox it’s not zero. And I think this is a sign that with AI, especially agentic AI, you see the change.

So you don’t see the overlap anymore, one process with the other, but you could see that there’s a new process, a new way of man-machine interacting and working. But Arthur, before, said something which is the key of all of this. Because if people in the public sector are not empowered, they don’t understand it, they are not part of this change, the change will not lead to any productivity gain. Because you can have the most expensive and sophisticated AI software of the world, probably absolutely not needed by the private sector, because it is better to have bespoke models, open source, that serve the purpose. But even if you have the most sophisticated one, if you have someone that refuses to embrace the technology, or in any case you have an organization, a process, that is not ready, not fit for it, then there’s no productivity gain.

Maybe as a citizen you can see the wonders, but in reality, I mean, the old system becomes two times more expensive, and the adoption rate is low. And this is really the real challenge of artificial intelligence, as paradoxical as it can be. I think we can proudly say that, as I see it in India and I see it now in Europe, we are developing an ecosystem which is really brilliant, self-reliant, sufficient in terms of good companies producing open source, producing language technology, producing advanced algorithms. We have supercomputing centers offering capacity. All of this, I mean, goes in a completely different model compared to other models, and it’s all fine. But now, I mean, we really need to work with the people and with the public administration and to make sure that we

Lucilla Sioli

Okay. So, Arthur, if I were to ask you, how do you get the AI accepted by the citizens and also the public administration? What kind of tools? You already provide, of course, the chatbot Le Chat. But what are other tools that you think will be easily accepted?

Arthur Mensch

Well, we’ve turned Le Chat into something that we call Vibe, actually, which is a product where we can delegate tasks. We can delegate tasks fully and delegate workflows. The challenge, and the reason why you don’t see productivity gains when you deploy chatbots in enterprise, is that basically you’re focusing on an individual productivity gain. That’s the case in enterprise, but it’s the same in administration. If someone can actually write a mail faster, it’s not actually changing the way your business is being run. Where things start to change is if you look at a full process, let’s say procurement, for instance, which typically entails multiple touch points with multiple people, and you ask the agent to actually run the process itself. So you move from an individual productivity endeavor to a collective productivity endeavor, and you move from equipping ICs, so individual contributors, to equipping managers. The same way a manager will sometimes delegate to a human, it can sometimes delegate to an AI process. And there are two big challenges associated to that, and they need to be solved through product but also through human interaction, I would say.

The first is that you need the process automation. We run them and we design them: we bring our engineers in, they work with subject-matter experts, they design and write the code using our coding model, and then they deploy the code that is going to run the automation, that’s going to ask questions, that’s going to interact with the tools. The way we design them is to try and get the humans out of the way, because the problem is that the process only brings productivity gains if you’re not bottlenecked by the humans themselves, if you’re not interrupting them all the time. A good example is coding. If you want to code faster with AI, you need to give them tasks and then disappear, and then you come back maybe one hour later and the task is done.

If the thing comes and nags you five minutes later, maybe you’re doing something else, and so it’s not progressing as fast as it should. You need humans to get out of the way of the AI automation if you want this automation to work. And the second thing that goes with it is that once you’ve done the automation, you need to rethink the organization, because once you’ve automated your procurement process, the people that were running the analysis of the procurement suddenly need to do something else. And that takes some thought around how you’re reorganizing people, how you’re reskilling people, how you’re turning individual contributors into people that will effectively manage AI-operated processes.

And so enterprises need to turn people that were used to doing menial work into people that delegate that work. And as every manager in the room knows, it’s actually fairly hard to learn how to delegate and to move from being an individual contributor to being a delegator. Because the only way AI actually brings you productivity gains is through strong delegation and long execution, every one of us needs to become a strong delegator. And that takes some training. We are not trained to be delegators at school in Europe, I would say, at least in France. That’s something we will need to learn, and we’ll need to learn it early on.

And we need to reskill the people that need to learn a new way of doing things, a new way of working, and to rewire the brain. I’ll stop with a very good example: we have our coding tool called Mistral Vibe. What we see is that if you take very young developers, they use it very quickly. They learn how to use it, they’re excited; the way they work is inherently wired to using AI because they are 23, and everything they’ve ever done in coding is through AI. Then you have the very senior people, who are like 35 years old, or 33, my age. And those are still much better than the agents themselves.

So they know how to design the architectures. They can give some very precise guidance on how the agents need to rewire the code or bring in a new feature. The problem is the people in between. The people in between got very attached to writing code, and now they need to rewire: they were very good at writing code, but they need to become something else. That’s where reskilling is very, very important. It applies to software engineering, but it will apply to everybody working on knowledge. Those are the three years ahead of us, and we need to work together to make the transition as smooth as possible.

Lucilla Sioli

Thanks. And indeed, Jarek, you have developed AI agents. So when you think of applying them to the public administration, with all the caveats we just heard, what do you think the acceptance is going to be like?

Jarek Kutylowski

Yeah, I think it’s not only about the individuals, how people reskill, adopt AI and learn to use it; it’s also about organizations really rethinking the way they work. Thinking about workflows, thinking about processes, whether that’s general-purpose agentic workflows or something that has language at its core, and rethinking how we do these things. We’ve spent a couple of decades improving those processes, and parts of AI, especially in language processes, have already been put in place quite significantly over the last years. But we haven’t yet rethought the whole processes. Do we need that human review step anymore in a particular use case?

Or is it enough to use AI? We have organizations that are translating R&D documentation for drug discovery and submitting it to the local regulators, purely translated by AI with the appropriate guardrails. We have organizations that are translating plane maintenance records and using them as the source of truth. So there is a lot of potential in using AI, but you have to think a little out of the box and really forget the old ways of doing things. The same holds for agentic AI, and I think even more so, because the potential is even bigger. It’s both for the public sector and for businesses: a big redesign of how work really gets done.
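
A guardrailed translation pipeline of the kind mentioned, fully automatic output with deterministic checks and escalation to a human only on failure, might look like the following sketch. The `translate` function, the glossary, and the checks are illustrative stand-ins, not DeepL’s API.

```python
# Illustrative glossary-based stand-in for a machine-translation call;
# a real pipeline would call a translation model or API here.
GLOSSARY = {"maintenance record": "Wartungsprotokoll"}

def translate(text: str) -> str:
    out = text
    for src, tgt in GLOSSARY.items():
        out = out.replace(src, tgt)
    return out

def translate_with_guardrails(text: str) -> tuple[str, bool]:
    """Return (translation, needs_human_review)."""
    result = translate(text)
    # Guardrail 1: the output must not be empty.
    needs_review = not result.strip()
    # Guardrail 2: mandated terminology in the source must survive.
    for src, tgt in GLOSSARY.items():
        if src in text and tgt not in result:
            needs_review = True
    return result, needs_review

translation, needs_review = translate_with_guardrails(
    "Attach the maintenance record to the filing."
)
print(translation)   # -> Attach the Wartungsprotokoll to the filing.
print(needs_review)  # -> False
```

The design point is that the human review step becomes conditional: most documents pass the checks and flow straight through, and only flagged ones interrupt a person.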

And the bigger the organization, the bigger the inertia. The public sector tends to be the largest organization in any country, so the challenge is even bigger there.

Lucilla Sioli

Thanks. And, Matteo, the Barcelona Supercomputing Center is quite specialized in applications for the public sector, for health care. So what do you see as the main applications being developed on the basis of demand?

Matteo Valero

teach young people to understand the problem and to propose solutions. Thank you.

Lucilla Sioli

So, Roberto, you heard the challenges in terms of acceptance and implementation in the public sector, where sometimes the skills are not very strong. So how do you think policy can really help to enable this transformation?

Roberto Viola

I think policy needs to be tuned to the transformation. As I was trying to say before, if you invent a digital bureaucracy, it’s still a bureaucracy: it’s digital, but it’s a bureaucracy, and then you have a bureaucrat and a digital, AI-agent bureaucrat. It would be very simple for the geniuses on this panel to produce an AI bureaucrat, and I’m sure AI can do bureaucracy even better than us, much better than us. Regulation-generating bots, yes, that would be super useful. Or regulation-correcting bots, that would be good. So you see, I think we on the legislation side also need to be disruptors and look at things with completely different eyes.

And for this, let me say that one thing that is strikingly similar between Europe and India is this idea of believing in the public stack: the idea that you can actually manage your identity, your attributes, your capacity to sign, to timestamp, and to exchange these attributes in an open-source and open model. This is for people and for businesses. In this way, the state is in your hand; you have the bureaucracy under control because the bureaucracy is you. So if you reverse the logic, from the citizen going to an office, which is what you referred to, to the office going to the citizen, with all sorts of nice agents, push notifications and attestations, then you re-engineer the state.
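
The signing and timestamping of citizen-held attributes described here can be illustrated with a minimal, hypothetical attestation sketch. A real public stack would use public-key signatures; the shared-secret HMAC below is only a stand-in kept for brevity.

```python
import hashlib
import hmac
import json

# Placeholder shared secret; a real stack would use the issuer's
# private key and a standard public-key signature scheme instead.
ISSUER_KEY = b"demo-issuer-key"

def issue_attestation(subject: str, attribute: str) -> dict:
    """Issue a signed, timestamped attribute the citizen holds themselves."""
    payload = {"subject": subject, "attribute": attribute,
               "timestamp": 1700000000}  # fixed here for reproducibility
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(ISSUER_KEY, body,
                                    hashlib.sha256).hexdigest()
    return payload

def verify_attestation(att: dict) -> bool:
    """Any office holding the verification key can check it offline."""
    att = dict(att)                 # do not mutate the caller's copy
    sig = att.pop("signature")
    body = json.dumps(att, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

att = issue_attestation("citizen-42", "resident")
print(verify_attestation(att))  # -> True

tampered = dict(att, attribute="admin")
print(verify_attestation(tampered))  # -> False
```

Because the attestation carries its own signature and timestamp, the citizen can present it to any service without the service calling back to the issuing office, which is the "office goes to the citizen" reversal.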

So my point is, if we dare, and I dare to say that in India you are daring and in Europe we are daring, you can actually redesign the paradigm. And if you do that, then creativity is really at work, because there can be many different agents and many different ideas on how to improve processes. Thanks.

Lucilla Sioli

Now we have only very little time left, but before leaving, since we have these four geniuses, I would like to ask you for maybe one very last thought on innovation in the public sector and how you can contribute.

Arthur Mensch

I think public research is very important. In particular, I think partnerships between private companies and public efforts are something that works, because doing research takes some infrastructure, infrastructure takes some capital, and so I think that’s the way we can accelerate together.

Matteo Valero

I would say that AI is a dual-use technology, and we need to look for the good use. In this direction I think we can do a lot in Europe because, as Roberto said, we have the infrastructure; then we need to invest a little more and define common projects. Don’t forget that if you look at the power at the national level, between the States and China they have more than 80% of the compute, more than 80% of the people and more than 80% of the investment. So one possibility would be an alliance with India: I think it would be very good to have an alliance between Europe and India on this topic. We at BSC have an alliance with C-DAC, with their supercomputing centre, and with the Indian Institute of Science in Bengaluru.

And, also financed by the Commission, we have a very good project that we are very happy to collaborate with you on. Thank you.

Lucilla Sioli

Now, time is up, but I would like to hear, very shortly, a last thought from Roberto and Jarek.

Jarek Kutylowski

Yeah, I think we can build, and we will build, from the commercial side, from the business side, amazing products that drive a lot of value creation in the AI space. I think that’s clear. And we’re going to try to do that in a way, of course, that really delights our users and our customers. But there is a lot of work the public sector can do in terms of bringing the importance of adopting this technology to the broad base of the population. From all of the conversations that we had here, I think both the German and the other European governments, and the European and Indian governments, do understand that, but we should not underestimate this challenge.

And I think there needs to be a very strong partnership between businesses and the public sector on driving that. Thanks.

Lucilla Sioli

Thank you.

Roberto Viola

I am one of the few that has been at all three summits: Bletchley, this one, and last year in Paris. And of course the size already gives you an idea of how things have changed. In the discussion room at Bletchley Park we were 20, including the leaders. That gives you an idea. Now, the point is, I’m so happy to be here because what I always thought is a little bit true: there is not one future for AI and technology, and it is not written. It is not written. The thousands and thousands of people that participated in the summit this year will write the future. So much for those that tell you there’s only one way, only one scale,

and that the rest of the world should watch, applaud and, I mean, adapt to it. This summit, and the application of AI in public service, what India is doing and what Europe is trying to do, shows there are many futures. And as I was trying to say before, the future is in our hands.

Lucilla Sioli

Thanks a lot. And with these very intelligent and smart sentences, it remains for me to thank the speakers, and thanks a lot for your participation. Thank you.

Related Resources: knowledge base sources related to the discussion topics (21)
Factual Notes: claims verified against the Diplo knowledge base (5)
Additional Context (confidence: medium)

“The session opened with a call for stronger India‑EU cooperation to build AI capacity for the Global South”

The knowledge base highlights a general emphasis on enhanced international cooperation and Global South partnerships in the opening remarks, but does not specify India-EU ties [S94] and [S92].

Confirmed (confidence: high)

“Moderator Lucilla Sioli introduced a distinguished panel: Vice‑President and Secretary‑General Krishnan (Ministry of Electronics & Information Technology, India), Arthur Mensch (CEO, Mistral AI), Jarek Kutylowski (founder & CEO, DeepL), Prof. Matteo Valero (Barcelona Supercomputing Centre), and Roberto Viola (Director‑General, DigiConnect, European Commission)”

The roles of Krishnan, Mensch, Kutylowski, Valero and Viola are corroborated by the knowledge base entries that list them with the same titles and organisations [S99], [S2], and [S100].

Confirmed (confidence: medium)

“Mistral’s “AI for Citizens” programme’s first pillar is efficiency: using generative AI to automate fragmented, knowledge‑intensive public‑sector processes such as procurement, report writing and legacy‑IT integration”

The knowledge base notes that governments are exploring generative AI to modernise public-sector infrastructure and processes, matching the described efficiency pillar [S21].

Confirmed (confidence: high)

“Frontier language models can provide real‑time written and spoken translation, enabling citizens to converse with office staff in their own language, while noting challenges such as translating legislation”

Meta’s SeamlessM4T model demonstrates real-time multilingual translation capabilities, confirming the feasibility of such frontier models [S106]; the EU’s multilingual policy provides additional context for the legislative translation challenge [S31].

Additional Context (confidence: low)

“EuroHPC network now hosts six of the world’s top‑15 supercomputers”

The knowledge base confirms that EuroHPC has multiple sites across Europe and is expanding its high-performance computing ecosystem, but does not provide a ranking that places six of its machines in the global top-15 [S108] and [S109].

External Sources (109)
S1
State of Play: AI Governance / DAVOS 2025 — – Arthur Mensch: Co-founder and Chief Executive Officer, Mistral Arthur Mensch: I’m suggesting that this is the direct…
S2
The Role of Government and Innovators in Citizen-Centric AI — – Arthur Mensch- Jarek Kutylowski – Arthur Mensch- Roberto Viola
S3
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — – Arthur Mensch- Ambassador Philip Tigo – Arthur Mensch- James Manyika- Abhishek Singh
S4
https://dig.watch/event/india-ai-impact-summit-2026/the-role-of-government-and-innovators-in-citizen-centric-ai — He’s the CEO of Mistral AI, if you can just stand next to the Secretary, which is a European company developing large la…
S5
S6
The Role of Government and Innovators in Citizen-Centric AI — -Lucilla Sioli: Panel moderator/host; appears to be in a senior role at the European Commission (Roberto Viola refers to…
S7
The Role of Government and Innovators in Citizen-Centric AI — – Arthur Mensch- Jarek Kutylowski
S8
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — – Jarek Kutylowski envisioned enhanced global collaboration that transcends language and geographic barriers And Dr. Ja…
S9
The Role of Government and Innovators in Citizen-Centric AI — He’s the CEO of Mistral AI, if you can just stand next to the Secretary, which is a European company developing large la…
S10
European Cybersecurity Competence Center (ECCC) launches, bolstering EU’s Cyber Shield — The European Cybersecurity Competence Center (ECCC) was officiallyinauguratedin Bucharest, Romania, marking the establis…
S11
https://dig.watch/event/india-ai-impact-summit-2026/the-role-of-government-and-innovators-in-citizen-centric-ai — He’s the CEO of Mistral AI, if you can just stand next to the Secretary, which is a European company developing large la…
S12
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S13
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S14
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S15
Opening of the session — Benefits from emerging technologies must be equally enjoyed. Capacity building is essential for political and instituti…
S16
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/4/OEWG 2025 — Ecuador: Thank you, Chairman. Ecuador supports the statement made by Argentina on behalf of a group of countries on ca…
S17
Closure of the session — Guatemala: Thank you, Chairman. Guatemala is grateful for the efforts and the work done by the Chair to present the e…
S18
Global South pushes for digital inclusion — At the2025 Internet Governance Forumin Lillestrøm, Norway, global leaders, youth delegates, and digital policymakers con…
S19
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Andrea Jacobs: So that’s a very, very good question. And, you know, I’ve heard a lot of unpacking from different regions…
S20
Global South Solidarities for Global Digital Governance | IGF 2023 Networking Session #110 — Building regional solidarities and conversations is deemed necessary for promoting collaboration. Organisations from var…
S21
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Armando José Manzueta-Peña:Well, thank you, Luca, for the presentation. I’m more than thrilled to be present here and to…
S22
Agentic AI gains ground as GenAI maturity grows in public sector — Public sector organisations around the world are rapidly moving beyondexperimentation with generative AI (GenAI), with u…
S23
Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29 — Generative AI has emerged as a powerful tool with the potential to revolutionize various sectors. The analysis reveals s…
S24
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Augmentation approach is superior to pure automation Augmentation vs. Automation Strategies Economic | Future of work …
S25
The Intelligent Coworker: AI’s Evolution in the Workplace — – Christoph Schweizer- Kate Kallot Lores emphasizes that meaningful AI impact requires fundamental process redesign rat…
S26
Open Forum #33 Building an International AI Cooperation Ecosystem — – Qi Xiaoxia- Dai Wei- Ricardo Pelayo Development | Economic | Capacity development Innovation Ecosystems and Practica…
S27
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And gigafactories is the next step. Second, data. Lithuania is among Europe’s leaders in open and high -value public dat…
S28
Building Population-Scale Digital Public Infrastructure for AI — In other words, by 2030, all of us together across the world will develop these pathways to diffuse the use of AI in a p…
S29
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And I have a deep belief that the entrepreneurial ecosystem in India is going to deliver some incredible global leaders …
S30
Driving Indias AI Future Growth Innovation and Impact — The discussion revealed sophisticated understanding of AI development challenges and opportunities, with remarkable cons…
S31
Multilingualism — AI techniques such as NLP can be used to analyse text written in different languages. By processing and analysing multil…
S32
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — The expansion of language support remains an ongoing challenge and opportunity. Currently, Bhashini is being enhanced to…
S33
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — The topic of AI regulation is currently being discussed in the context of its application. The argument put forth is tha…
S34
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Effective AI governance requires frameworks that are aligned with existing global agreements to avoid creating a patchwo…
S35
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Sarim Aziz: Thanks, Brandon. So yeah, I think in our conversations with government, we do see with our open-source app…
S36
Governments, Rewired / Davos 2025 — Blair suggests that artificial intelligence and digital technologies have the potential to revolutionize various aspects…
S37
A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23 — Panelist:Well, very excellent interaction. Thank you very much for bringing this up. Well, first to say that what matter…
S38
How can the UN ensure the impartiality of its AI platforms? — This moment presents both a challenge and an opportunity. By committing to an open, transparent, and inclusive AI framew…
S39
Open Forum #36 Challenges & Opportunities for a Multilingual Internet — Audience: the International Telecommunication Union or the ITU. Thank you for this very important discussion. Multilin…
S40
AI as critical infrastructure for continuity in public services — Thank you very much. Standards are a very important pillar of building trust. Another is inclusive governance. Changatai…
S41
WS #119 AI for Multilingual Inclusion — – Encouraging learning and use of multiple languages – Ensuring public services support multiple languages Audience: …
S42
EU expands support for AI startups with access to supercomputers — The European Union is stepping up itsbacking of homegrown AI startups by allowing them access to the bloc’s supercompute…
S43
EU advances ambitious gigafactory programme for AI leadership — The Councilhas agreedon a significant amendment to the EuroHPC Joint Undertaking regulation, aiming to establish AI giga…
S44
Europe prepares formal call for AI Gigafactory projects — The European Commission is collaborating with the EU capitals to narrow the list ofproposals for large AI training hubs,…
S45
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups torecalibrateworkforce policies. Survey data indicates that 33% of founders antici…
S46
Business Engagement Session: Sustainable Leadership in the Digital Age – Shaping the Future of Business — Reyansh identifies a common organizational challenge where leaders disagree about implementing emerging technologies lik…
S47
AI productivity gap reveals critical enterprise adoption challenges — AIcontinuesto generate expectations of broad economic transformation, particularly in productivity and employment. Howev…
S48
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — Helani Galpaya,:Thank you for joining us today. We have about 23 people in the room and 22 people online, so I think cer…
S49
WS #100 Integrating the Global South in Global AI Governance — AUDIENCE: I think beyond skills programs and helping developers and people working in those industries in the click co…
S50
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Sarim Aziz: At the risk of contradicting Matisse, but just to say yes, I mean, that’s one option. But I think the ans…
S51
OpenAI backs policy push for Europe’s AI uptake — OpenAI and Allied for Startups havereleasedHacktivate AI, a set of 20 ideas to speed up AI adoption across Europe ahead …
S52
Global Perspectives on Openness and Trust in AI — And then exclusive partnerships and the systems being opaque. So those were the things identified in the market study. A…
S53
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Participant: See, when you look at AI or when you look at digital public infrastructure solutions, one thing that one sh…
S54
The Role of Government and Innovators in Citizen-Centric AI — I think policy needs to be tuned with the transformation. So in a way, as I was trying to say also before, if you invent…
S55
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Policy needs to be at a principle level because if it becomes too detailed, it becomes hard to maintain, especially with…
S56
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Harmonization of policies across the region was identified as a critical goal to enable seamless transactions and integr…
S57
Open Forum #33 Building an International AI Cooperation Ecosystem — **Professor Dai Li Na** from the Shanghai Academy of Social Sciences presented a comprehensive case study of Shanghai’s …
S58
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S59
Multilingualism — AI techniques such as NLP can be used to analyse text written in different languages. By processing and analysing multil…
S60
Democratizing AI: Open foundations and shared resources for global impact — ### Multilingual Capabilities and Technical Features The model incorporates over 1,000 languages, including Swiss minor…
S61
WS #119 AI for Multilingual Inclusion — Promoting Language Equity and Inclusion Public services should provide materials and support in multiple languages to p…
S62
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Amish points out that most global AI models operate in English, making Indian‑language capability crucial for the countr…
S63
Don’t waste the crisis: How AI can help reinvent International Geneva — For instance, creating an AI app takes a day or less, preparing a dataset for a functional app takes a month, and fully …
S64
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Lack of infrastructure, skills, compu…
S65
AI That Empowers Safety Growth and Social Inclusion in Action — Despite sophisticated frameworks and governance structures, significant implementation challenges remain. The gap betwee…
S66
AI as critical infrastructure for continuity in public services — Data silos emerged as a primary barrier, with organizations struggling to integrate data across different systems and de…
S67
What policy levers can bridge the AI divide? — A central theme throughout the discussion was that meaningful AI implementation cannot occur without addressing basic co…
S68
The Foundation of AI Democratizing Compute Data Infrastructure — High level of consensus across diverse stakeholders (academic, government, civil society, private sector, international …
S69
Policy Network on Artificial Intelligence | IGF 2023 — This education should be accessible to all, regardless of their age or background. Additionally, the panel discussion sh…
S70
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Economic and Labor Market Impact Examples of relieving employees from 4-hour internet searches and policy drafting, add…
S71
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Funding and Policy Mechanisms In 99% of UN member states, the public sector is still the biggest single buyer, making p…
S72
How AI Drives Innovation and Economic Growth — I agree that there is huge potential in health. and education. I think we’ll see big improvements in that, but the risk …
S73
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Armando José Manzueta-Peña:Well, thank you, Luca, for the presentation. I’m more than thrilled to be present here and to…
S74
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Sarim Aziz: Thanks, Brandon. So yeah, I think in our conversations with government, we do see with our open-source app…
S75
National Strategy for Artificial Intelligence — Artificial intelligence in the public sector can contribute to: Such assessments related to the use of AI in public adm…
S76
1. Introduction — – 7) Citizens have control over their data : Development of tools which allow people to have bet -ter control over and p…
S77
STRATEGIE NATIONALE DE L’INTELLIGENCE ARTIFICIELLE — Confrontée à des inefficacités structurelles et un accès limité aux services numériques en zones reculées. L’IA peut aut…
S78
How Multilingual AI Bridges the Gap to Inclusive Access — And so learning from reality, learning from the real workflows of how people use models. And I think that’s important, t…
S79
The Role of Government and Innovators in Citizen-Centric AI — Linguistic diversity in both India and the EU can be a challenge in administration despite being a benefit Lucilla fram…
S80
AI as critical infrastructure for continuity in public services — And this tool is quite complex. It’s not only covering the governments and compliance area, but as well as helping the o…
S81
WS #119 AI for Multilingual Inclusion — Promoting Language Equity and Inclusion Public services should provide materials and support in multiple languages to p…
S82
EU invests €8 billion in supercomputers to boost AI industry — The European Union (EU) is making efforts to boost Europe’s AI industry by leveraging its fleet of supercomputers throug…
S83
Europe prepares formal call for AI Gigafactory projects — The European Commission is collaborating with the EU capitals to narrow the list ofproposals for large AI training hubs,…
S84
European tech strategy advances with Germany’s new AI factory — Germany has launched one of Europe’slargest AI factoriesto boost EU-wide sovereign AI capacity. Deutsche Telekom unveile…
S85
EU expands support for AI startups with access to supercomputers — The European Union is stepping up itsbacking of homegrown AI startups by allowing them access to the bloc’s supercompute…
S86
AI productivity gap reveals critical enterprise adoption challenges — AIcontinuesto generate expectations of broad economic transformation, particularly in productivity and employment. Howev…
S87
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups torecalibrateworkforce policies. Survey data indicates that 33% of founders antici…
S88
Responsible AI in India Leadership Ethics & Global Impact — “One size doesn’t fit all”[111]. “So as you said, one size doesn’t fit all”[112]. “We face challenges”[109]. “There are…
S89
Towards a Reskilling Revolution — companies within the next 4 years | Share of companies surveyed | | |————————————|—…
S90
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — Helani Galpaya,:Thank you for joining us today. We have about 23 people in the room and 22 people online, so I think cer…
S91
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development Innovation Ecosystems and Practical Implementation The speaker argues th…
S92
Building Scalable AI Through Global South Partnerships — Africa needs collaboration to leverage its 1.4 billion population scale rather than viewing individual countries separat…
S93
From India to the Global South_ Advancing Social Impact with AI — Public-private partnerships are essential, requiring industry to move beyond closed hiring networks and engage with educ…
S94
Opening of the session — Need for enhanced international cooperation and capacity building
S95
WS #82 A Global South perspective on AI governance — AUDIENCE: Thank you for the wonderful thought provoking conversation. I wanted to ask, I only attended half of the ses…
S96
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S97
OPENING SESSION | IGF 2023 — Ema Arisa:So good morning, ladies and gentlemen. My name is Arisa Emma. Emma is my family name. And so I am Associate Pr…
S98
Open Forum #30 High Level Review of AI Governance Including the Discussion — – **Abhishek Singh** – Under-Secretary from the Indian Ministry of Electronics and Information Technology Yoichi Iida: …
S99
Empowering India &amp; the Global South Through AI Literacy — -Shri S. Krishnan: Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India
S100
High-Level Session 1: Navigating the Misinformation Maze: Strategic Cooperation For A Trusted Digital Future — – Pearse O’Donohue: Director for the Future Networks Directorate of DigiConnect European Commission Barbara Carfagna: H…
S101
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — In the document and then in our trainings, we have four pillars. They’re all linked. The first pillar is context-based a…
S102
Conversation: 01 — Krishnan outlined the Trump administration’s three-pillar strategy developed over 13 months. The first pillar focuses on…
S103
LinkedIn unveils AI-driven features to enhance job hunting and recruitment — LinkedIn is using AI to streamline the job hunting process, aiming to alleviate the task of job searching for its users. T…
S104
Day 0 Event #184 From Compliance to Excellence in Digital Governments — Axel Domeyer: Yeah, that’s a great question. I think in some sense, I mean, like the good things in life, right, they…
S105
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Shri Sushil Pal: Thank you, Professor Jalasi, and thank you, UNESCO, for inviting me here. I must commend UNESCO on the r…
S106
Meta unveils AI translator model for real-time multilingual communication — Meta Platforms, the parent company of Facebook, has introduced an AI model named SeamlessM4T that can translate and trans…
S107
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Ryan Budish: I’m coming from Boston, Massachusetts, where it is quite late at night. So I’m going to try not to speak to…
S108
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Georges Olivier Reymond: Do you mind if I start first? You want to start? OK. I would like to highlight a great init…
S109
Six countries selected to host future European quantum computers — The European High Performance Computing Joint Undertaking (EuroHPC JU) has announced the selection of six sites across th…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
2 arguments · 105 words per minute · 308 words · 175 seconds
Argument 1
Building technical and institutional capacity is essential for the effective deployment of emerging technologies.
EXPLANATION
Speaker 1 stresses that without sufficient capacity, new technologies cannot be applied at scale. He calls for concerted efforts to develop the skills, infrastructure and organisational readiness needed for large‑scale adoption.
EVIDENCE
He explicitly asks how to “build capacity in order for this technology to be applied significantly better” and follows with a wish to see stronger collaboration between India and the EU to achieve this goal [1][2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity building is highlighted as essential in session openings and agenda discussions, e.g., [S15] and [S16].
MAJOR DISCUSSION POINT
Capacity building for technology adoption
AGREED WITH
Arthur Mensch, Matteo Valero, Roberto Viola, Jarek Kutylowski
Argument 2
India and the European Union should deepen cooperation to promote digital transformation across the Global South.
EXPLANATION
The speaker envisions a partnership that goes beyond bilateral ties, extending the benefits of digital innovation to other developing regions. He sees joint action as a way to share expertise, resources and best practices.
EVIDENCE
He states his desire to “see a day when India and the EU collaborate much more closely to make this happen, not just in India, but all over the global south” [2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for Global South digital inclusion and regional cooperation are echoed in the IGF discussions on digital inclusion and solidarity, such as [S18] and [S20].
MAJOR DISCUSSION POINT
India‑EU collaboration for global digital development
Arthur Mensch
3 arguments · 163 words per minute · 1176 words · 431 seconds
Argument 1
Generative AI can boost public‑sector efficiency by automating fragmented, knowledge‑intensive processes.
EXPLANATION
Mensch explains that many administrative tasks are spread across multiple tools and people, creating inefficiencies. AI‑driven automation can consolidate these steps, reduce manual effort and address talent shortages caused by retirements.
EVIDENCE
He describes how “generative AI allows you to delegate tasks… to automate certain processes that can be fairly complex, fragmented, involve multiple people and tools” and notes the problems of legacy IT, talent pressure and knowledge management [21][22][23][24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Generative AI’s role in modernising public infrastructure and boosting efficiency is noted in discussions on AI compatibility with data protection and agentic AI adoption [S21][S22][S23].
MAJOR DISCUSSION POINT
AI‑driven efficiency in public administration
AGREED WITH
Roberto Viola, Jarek Kutylowski, Matteo Valero
Argument 2
Realising AI‑driven productivity gains requires process automation, organisational redesign and large‑scale reskilling of civil servants.
EXPLANATION
Mensch argues that simply deploying chatbots does not improve overall productivity; the whole workflow must be reengineered so that humans are removed from bottlenecks and staff are trained to become delegators of AI‑run processes.
EVIDENCE
He outlines the need to “design… process automation… bring engineers… work with subject matter experts… deploy code that runs the automation” and then “rethink the organization… turn individual contributors into people that will effectively manage AI-operated processes” while stressing the need for training and reskilling [127][130][136][141][145][148][160][161].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for process redesign and large-scale reskilling is emphasized in reports on augmentation vs. automation and AI workplace evolution [S24][S25][S2].
MAJOR DISCUSSION POINT
Organisational change and reskilling for AI adoption
AGREED WITH
Roberto Viola, Jarek Kutylowski, Matteo Valero
DISAGREED WITH
Roberto Viola
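Mensch’s argument — that gains come from automating whole processes while keeping humans only where judgment is needed — can be illustrated with a minimal routing sketch. The class names, confidence scores and threshold below are invented for illustration and do not reflect any system described on the panel.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    case_id: str
    confidence: float  # confidence of the automated model, 0.0-1.0

@dataclass
class Workflow:
    """Illustrative delegation pipeline: high-confidence cases are fully
    automated, the rest fall back to a human-review queue."""
    threshold: float = 0.9
    automated: list = field(default_factory=list)
    human_queue: list = field(default_factory=list)

    def route(self, case: Case) -> str:
        if case.confidence >= self.threshold:
            self.automated.append(case.case_id)   # AI runs the whole process
            return "automated"
        self.human_queue.append(case.case_id)     # a civil servant reviews
        return "human-review"

wf = Workflow()
decisions = [wf.route(Case(f"c{i}", conf))
             for i, conf in enumerate([0.97, 0.55, 0.92, 0.88])]
print(decisions)            # path taken by each case
print(len(wf.human_queue))  # residual human workload
```

In this framing, staff become delegators: they no longer touch every case, only the queue the automation cannot confidently clear.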
Argument 3
Public‑private research partnerships are crucial to accelerate AI innovation for societal benefit.
EXPLANATION
Mensch highlights that joint research leverages both private‑sector infrastructure and public‑sector funding, creating a faster path to impactful AI solutions.
EVIDENCE
In his closing remark he states, “public research is very important… partnerships between private companies and public efforts… accelerate together” [204].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
International AI cooperation forums stress that public-private partnerships are essential for accelerating AI development [S26][S30].
MAJOR DISCUSSION POINT
Public‑private research collaboration
AGREED WITH
Speaker 1, Matteo Valero, Roberto Viola, Jarek Kutylowski
Matteo Valero
3 arguments · 139 words per minute · 685 words · 294 seconds
Argument 1
Europe’s EuroHPC supercomputing infrastructure provides the computational backbone needed for AI‑driven scientific and public‑sector innovation.
EXPLANATION
Valero explains that the creation of EuroHPC, coordinated by Roberto Viola, has given Europe a substantial share of the world’s top supercomputers, enabling large‑scale simulations and AI research that can be applied to public services.
EVIDENCE
He notes that “thanks to Roberto Viola, we created the EuroHPC… we have 6 of the top-500 supercomputers in Europe” and that this capacity underpins AI advances [50][51][52][53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
EuroHPC’s contribution to Europe’s top supercomputers and AI capacity is documented in the role of government and innovators overview [S2].
MAJOR DISCUSSION POINT
Supercomputing as a foundation for AI
Argument 2
The AI Factory and Gigafactory concepts aim to deliver free, citizen‑centric AI services that personalize information and improve public‑sector interactions.
EXPLANATION
Valero describes AI factories as co‑located hardware‑software platforms staffed by AI experts, offering open, free services to citizens such as personalized, accurate, fast information, thereby enhancing public satisfaction.
EVIDENCE
He outlines that “the AI Factory is a platform… the service is free, the people is free… to connect as much as we can with the society to make a better world” and links this to citizen happiness through personalized information [61][62][63][64][65][66][67].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of AI gigafactories and free public AI services is discussed in the AI Impact Summit and digital public infrastructure reports [S27][S28][S2].
MAJOR DISCUSSION POINT
AI factories delivering free public services
Argument 3
Investing further in AI‑focused infrastructure and fostering Europe‑India alliances will amplify the impact of AI on society.
EXPLANATION
Valero calls for increased funding for AI factories and suggests strategic partnerships with India to leverage complementary strengths, citing existing collaborations with Indian research institutes.
EVIDENCE
He mentions the need to “invest a little more… define common projects” and references alliances with Indian institutions such as SIDAC and the Institute of Science in Bengaluru [205][206].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaborations with Indian institutes and the growing Indian AI ecosystem are highlighted in the same overview and related Indian AI growth discussions [S2][S29][S30].
MAJOR DISCUSSION POINT
Strategic investment and international AI partnerships
AGREED WITH
Speaker 1, Arthur Mensch, Roberto Viola, Jarek Kutylowski
Lucilla Sioli
2 arguments · 128 words per minute · 506 words · 236 seconds
Argument 1
Multilingualism in India and the EU presents both a challenge and an opportunity that AI language models can help address.
EXPLANATION
Sioli points out the linguistic diversity of the regions and frames it as a potential barrier for administration, while also recognising its cultural value, suggesting that AI translation tools could bridge the gap.
EVIDENCE
She introduces the issue by saying, “there is a lot of linguistic diversity… how can the AI language models help to overcome this multilingualism issue” [27][28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI techniques for multilingual analysis and translation, as well as India’s Bhashini language expansion, illustrate the opportunity to address linguistic diversity [S31][S2][S32].
MAJOR DISCUSSION POINT
AI for multilingual inclusion
AGREED WITH
Jarek Kutylowski, Arthur Mensch
Argument 2
Policy frameworks must be aligned with AI‑driven transformation to avoid creating a merely “digital bureaucracy”.
EXPLANATION
Sioli asks how policy can facilitate AI uptake in the public sector, emphasizing that without supportive regulation the digital layer will simply replicate existing bureaucratic inefficiencies.
EVIDENCE
She poses the question, “how would you help now facilitate the uptake of AI?… how can policy really help to enable this transformation?” [184][185].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy frameworks tailored to AI use and avoiding digital bureaucracy are advocated in governance and sandbox discussions [S33][S34][S2].
MAJOR DISCUSSION POINT
Policy alignment for AI adoption
AGREED WITH
Roberto Viola
DISAGREED WITH
Roberto Viola
Jarek Kutylowski
2 arguments · 148 words per minute · 703 words · 284 seconds
Argument 1
Multilingualism is a strength, and AI can be used to bridge language gaps in public services through real‑time translation and speech interfaces.
EXPLANATION
Kutylowski argues that multilingual societies benefit from AI‑enabled translation, which can support both written and spoken interactions between citizens and government agencies.
EVIDENCE
He states, “it’s something that’s actually pretty beautiful… we work with countries… that have intrinsic necessity of being able to connect to their citizens in many languages” and describes AI enabling “real-time conversations” and translating legislation [29][30][31][32][33][34][35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Real-time translation and speech interfaces for multilingual societies are covered in AI multilingual capabilities literature [S31][S2][S32].
MAJOR DISCUSSION POINT
AI‑enabled multilingual public services
AGREED WITH
Lucilla Sioli, Arthur Mensch
DISAGREED WITH
Arthur Mensch
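The real-time translation pattern Kutylowski describes — detect the citizen’s language, pivot into the agency’s working language, and remember the source language so the reply can be sent back in it — can be sketched in outline. The stub dictionary stands in for a real machine-translation model or API (e.g. DeepL); the detection heuristic and sample phrases are invented placeholders.

```python
PIVOT = "en"  # the agency's single working language

STUB_TRANSLATIONS = {  # tiny stand-in for a machine-translation model
    ("de", "en"): {"Wo ist mein Bescheid?": "Where is my decision notice?"},
    ("fr", "en"): {"Où est ma demande ?": "Where is my application?"},
}

def detect_language(text: str) -> str:
    # Placeholder heuristic; real systems use a language-ID model.
    if "Wo" in text or "mein" in text:
        return "de"
    if "Où" in text or "demande" in text:
        return "fr"
    return "en"

def translate(text: str, src: str, dst: str) -> str:
    if src == dst:
        return text
    return STUB_TRANSLATIONS.get((src, dst), {}).get(text, text)

def handle_query(text: str) -> tuple[str, str]:
    """Detect the language, translate into the pivot language, and
    return the detected language so the answer can be sent back in it."""
    src = detect_language(text)
    return src, translate(text, src, PIVOT)

src, working_text = handle_query("Wo ist mein Bescheid?")
print(src, "->", working_text)
```

The same skeleton extends to speech by putting speech-to-text in front of `detect_language` and text-to-speech after the reply is translated back.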
Argument 2
Public sector organisations must redesign workflows and reduce human‑review steps to fully exploit AI potential, but inertia makes this difficult.
EXPLANATION
Kutylowski highlights that many existing processes are built around human oversight; adopting AI requires rethinking these workflows, eliminating unnecessary steps, and fostering strong public‑private partnerships.
EVIDENCE
He notes the need to ask “do we need that human review step anymore?” and gives examples of AI-driven translation for drug-discovery documentation and maintenance records, while warning about the large inertia in big public organisations [165][166][167][168][169][170][171][172][173][174][175][176][177][178][179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on the need for process redesign and challenges of organizational inertia support this view [S24][S25][S22].
MAJOR DISCUSSION POINT
Organisational redesign for AI integration
AGREED WITH
Arthur Mensch, Roberto Viola, Matteo Valero
Roberto Viola
3 arguments · 136 words per minute · 1332 words · 587 seconds
Argument 1
A suite of free AI tools (Mistral, DPL, Destination Earth) is already available for public use, offering capabilities from language translation to climate digital twins.
EXPLANATION
Viola lists several open‑access AI services that citizens and governments can test online, emphasizing their potential to support public‑sector functions such as translation and climate modelling.
EVIDENCE
He mentions that “you can test the Mistral on the web for free”, “you can use DPL for free”, and describes “Destination Earth” as a sophisticated climate digital twin that is also free [82][83][84][85][86][87][88][89][90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-access AI services such as Mistral and Destination Earth are mentioned as free tools for public use in the citizen-centric AI overview and AI impact discussions [S2][S27].
MAJOR DISCUSSION POINT
Open‑access AI resources for the public sector
AGREED WITH
Arthur Mensch, Matteo Valero
Argument 2
The “Solow paradox” shows that IT and AI investments alone do not automatically raise productivity; processes must be reengineered and staff empowered to realise gains.
EXPLANATION
Viola cites the Solow paradox to argue that without redesigning workflows and ensuring public‑sector workers understand and adopt AI, investments will merely duplicate existing systems and fail to deliver efficiency.
EVIDENCE
He explains that “the more people invest in IT… the less the productivity” and that productivity only improves when AI creates new processes rather than overlapping with legacy ones, referencing the pandemic as a moment when digital use did boost productivity [92][93][94][95][96][97][98][99][100][101][102][103][104][105][106][107][108][109][110][111][112][113][114][115][116][117][118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Solow paradox and its implications for productivity are discussed in the extensive citation series on IT investment outcomes [S92].
MAJOR DISCUSSION POINT
Need for process redesign to overcome productivity paradox
DISAGREED WITH
Arthur Mensch
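Viola’s Solow-paradox point can be made concrete with a toy calculation: if IT spend merely digitises the same multi-step workflow, throughput is unchanged; removing a redundant step is what moves the number. All figures below are invented for illustration.

```python
def throughput(cases_per_step: float, steps: int) -> float:
    """Cases an office completes per day when every case passes through
    `steps` sequential stages, each handling `cases_per_step` per day."""
    return cases_per_step / steps

legacy = throughput(cases_per_step=120, steps=4)      # paper process
digitised = throughput(cases_per_step=120, steps=4)   # same steps, now on screens
redesigned = throughput(cases_per_step=120, steps=3)  # one review step removed

print(legacy, digitised, redesigned)  # 30.0 30.0 40.0
```

The digitised variant costs more but delivers the same 30 cases a day; only the redesigned process lifts output, which is the panel’s argument against overlaying AI on legacy workflows.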
Argument 3
Policy must be tuned to the AI transformation, otherwise digital bureaucracy will simply replicate existing inefficiencies; AI‑driven bureaucrats could be a solution.
EXPLANATION
Viola argues that merely digitising bureaucracy does not solve its structural problems; legislation should enable disruptive AI agents that can perform regulatory and administrative tasks more efficiently.
EVIDENCE
He states, “if you invent a digital bureaucracy, it’s a bureaucracy… it would be very simple for the geniuses in this panel to produce an AI bureaucrat” and suggests regulation-generating and regulation-correcting bots as useful tools [186][187][188][189][190][191][192][193][194].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for policy to be aligned with AI transformation and to enable AI-driven bureaucrats appear in governance and policy tuning discussions [S33][S34][S2].
MAJOR DISCUSSION POINT
Regulatory innovation for AI‑enabled governance
Agreements
Agreement Points
AI adoption in the public sector requires organisational redesign, process automation and large‑scale reskilling of civil servants.
Speakers: Arthur Mensch, Roberto Viola, Jarek Kutylowski, Matteo Valero
Generative AI can boost public‑sector efficiency by automating fragmented, knowledge‑intensive processes. Realising AI‑driven productivity gains requires process automation, organisational redesign and large‑scale reskilling of civil servants. Public sector organisations must redesign workflows and reduce human‑review steps to fully exploit AI potential, but inertia makes this difficult. Investing further in AI‑focused infrastructure and fostering Europe‑India alliances will amplify the impact of AI on society.
All four speakers stress that merely deploying AI tools is insufficient; the whole workflow must be reengineered, bottlenecks removed and staff retrained to become delegators of AI-run processes; otherwise productivity gains will not materialise [21][127-148][112-118][60-65].
POLICY CONTEXT (KNOWLEDGE BASE)
Emphasises the need for deep organisational change and skill development, echoing findings that meaningful AI impact hinges on organisational transformation and addressing skill gaps [S63][S64][S66][S70].
Strong public‑private and inter‑regional collaborations are essential to accelerate AI innovation and its societal impact.
Speakers: Speaker 1, Arthur Mensch, Matteo Valero, Roberto Viola, Jarek Kutylowski
Building technical and institutional capacity is essential for the effective deployment of emerging technologies. Public‑private research partnerships are crucial to accelerate AI innovation for societal benefit. Investing further in AI‑focused infrastructure and fostering Europe‑India alliances will amplify the impact of AI on society. We need to work with the people and with the public administration and to make sure that we … (collaboration between ecosystem actors). There needs to be a very strong partnership between… businesses and the public sector on driving that.
The panel repeatedly highlights the need for joint action – between India and the EU, between private firms and public research bodies, and between businesses and governments – to build capacity, share resources and scale AI solutions [2][204][205-206][119-122][214].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with calls for policy harmonisation across regions and multi-stakeholder ecosystems to enable seamless AI integration [S56][S57][S58].
AI language models can help overcome multilingual challenges and turn linguistic diversity into an asset for public services.
Speakers: Lucilla Sioli, Jarek Kutylowski, Arthur Mensch
Multilingualism in India and the EU presents both a challenge and an opportunity that AI language models can help address. Multilingualism is a strength, and AI can be used to bridge language gaps in public services through real‑time translation and speech interfaces. We have a program called AI for citizens that have multiple pillars… (including language‑related use cases).
All three speakers acknowledge that the many languages spoken across India and Europe are a hurdle for administration but also a cultural strength, and that AI translation and real-time conversation tools can bridge the gap [27-28][29-35][21].
POLICY CONTEXT (KNOWLEDGE BASE)
Supported by research showing NLP can process multilingual data and promote language equity in public services [S59][S60][S61][S62].
Policy frameworks must be tuned to AI‑driven transformation to avoid merely digitising existing bureaucratic inefficiencies.
Speakers: Lucilla Sioli, Roberto Viola
Policy frameworks must be aligned with AI‑driven transformation to avoid creating a merely “digital bureaucracy”. Policy must be tuned to the transformation; otherwise digital bureaucracy will simply replicate existing inefficiencies.
Both the moderator and the EU director stress that legislation should enable disruptive AI agents rather than just overlaying old processes with digital tools [184-185][186-191].
POLICY CONTEXT (KNOWLEDGE BASE)
Mirrors recommendations that policy should be principle-based yet adaptable to prevent digital bureaucracy and ensure effective AI governance [S54][S55][S58][S64].
Providing free or open‑access AI tools accelerates public‑sector uptake and citizen empowerment.
Speakers: Roberto Viola, Arthur Mensch, Matteo Valero
A suite of free AI tools (Mistral, DPL, Destination Earth) is already available for public use, offering capabilities from language translation to climate digital twins. We have turned Le Chat into something that we call Vibe, a product where we can delegate tasks. The AI Factory is a platform… the service is free, the people is free… to connect as much as we can with the society to make a better world.
The speakers highlight that offering AI services at no cost – whether language models, workflow assistants, or large-scale climate twins – lowers barriers for governments and citizens, fostering faster adoption [82-90][125-129][61-67].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects concerns about access and the push for open-source AI to democratise capabilities and empower citizens [S52][S60][S68].
Similar Viewpoints
Both emphasise that capacity building – through infrastructure, supercomputing resources and strategic EU‑India partnerships – is a prerequisite for large‑scale AI deployment [2][60-65].
Speakers: Speaker 1, Matteo Valero
Building technical and institutional capacity is essential for the effective deployment of emerging technologies. Investing further in AI‑focused infrastructure and fostering Europe‑India alliances will amplify the impact of AI on society.
Both argue that without redesigning workflows and up‑skilling staff, AI investments will not translate into productivity gains [127-148][112-118].
Speakers: Arthur Mensch, Roberto Viola
Realising AI‑driven productivity gains requires process automation, organisational redesign and large‑scale reskilling of civil servants. The “Solow paradox” shows that IT and AI investments alone do not automatically raise productivity; processes must be reengineered and staff empowered.
Both recognise linguistic diversity as a key issue for public administration and see AI translation as a solution [27-28][29-35].
Speakers: Jarek Kutylowski, Lucilla Sioli
Multilingualism is a strength, and AI can be used to bridge language gaps in public services through real‑time translation and speech interfaces. Multilingualism in India and the EU presents both a challenge and an opportunity that AI language models can help address.
Both explicitly advocate for joint research between private firms and public institutions to speed up impactful AI solutions [204][119-122].
Speakers: Roberto Viola, Arthur Mensch
Public‑private research partnerships are crucial to accelerate AI innovation for societal benefit.
Unexpected Consensus
All speakers, including those focused on technical infrastructure (Matteo Valero) and those on policy (Roberto Viola), agree that AI services should be offered free to citizens.
Speakers: Matteo Valero, Roberto Viola, Arthur Mensch
The AI Factory is a platform… the service is free, the people is free… A suite of free AI tools (Mistral, DPL, Destination Earth) is already available for public use… We have turned Le Chat into something that we call Vibe…
It is surprising that a supercomputing director and a policy chief both stress free public access, aligning commercial product strategy with public-good provision [61-67][82-90][125-129].
Both the moderator (Lucilla Sioli) and the EU director (Roberto Viola) converge on the idea that simply digitising bureaucracy reproduces old inefficiencies, a point usually raised by technologists rather than moderators.
Speakers: Lucilla Sioli, Roberto Viola
Policy frameworks must be aligned with AI‑driven transformation to avoid creating a merely “digital bureaucracy”. Policy must be tuned with the transformation; otherwise digital bureaucracy will simply replicate existing inefficiencies.
The convergence of a moderator’s policy question with a senior official’s strategic stance shows an unexpected alignment on the need for transformative regulation [184-185][186-191].
POLICY CONTEXT (KNOWLEDGE BASE)
Reinforces the argument that digitising bureaucracy without transformation merely replicates existing inefficiencies, as highlighted in policy discussions on digital bureaucracy [S54].
Overall Assessment

The panel shows a high degree of consensus that AI can transform the public sector only if it is backed by capacity building, organisational redesign, extensive reskilling, and strong public‑private / inter‑regional collaboration. Multilingualism is viewed as both a challenge and an opportunity, and free/open AI services are repeatedly highlighted as a catalyst for adoption. Policy must evolve beyond mere digitisation to enable AI‑driven bureaucratic functions.

Strong consensus across technical, policy and business perspectives, indicating that future initiatives should prioritize coordinated capacity development, open‑access AI tools, and regulatory frameworks that support process redesign rather than simple digital overlay.

Differences
Different Viewpoints
What is the primary lever to achieve AI‑driven productivity in the public sector
Speakers: Arthur Mensch, Roberto Viola
Realising AI‑driven productivity gains requires process automation, organisational redesign and large‑scale reskilling of civil servants. The “Solow paradox” shows that IT and AI investments alone do not automatically raise productivity; processes must be reengineered and staff empowered to realise gains.
Arthur argues that productivity comes from delegating whole processes to AI, building automation pipelines and reskilling staff to become delegators [127-148][141-148][160-161]. Roberto stresses that without a fundamental redesign of organisational structures and supportive policy, even the most sophisticated AI will not raise productivity, citing the Solow paradox and the need to avoid overlapping legacy systems [92-118].
POLICY CONTEXT (KNOWLEDGE BASE)
Procurement and regulatory levers are identified as key mechanisms to drive AI productivity in the public sector [S71][S67][S70].
How policy should shape AI adoption in the public sector
Speakers: Lucilla Sioli, Roberto Viola
Policy frameworks must be aligned with AI‑driven transformation to avoid creating a merely “digital bureaucracy”. Policy must be tuned to the transformation; AI‑driven bureaucrats and regulation‑generating bots could replace traditional digital bureaucracy.
Lucilla asks for policy that enables AI uptake while preventing a simple digitised copy of existing bureaucracy [184-185]. Roberto proposes a more radical approach: designing AI-driven bureaucrats and regulation-generating bots, arguing that legislation should actively create disruptive AI agents [186-194].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for principle-level, adaptable policy frameworks that align with AI transformation and address governance challenges [S54][S55][S58][S64].
Priority of multilingual AI solutions versus generic AI delegation tools
Speakers: Jarek Kutylowski, Arthur Mensch
Multilingualism is a strength, and AI can be used to bridge language gaps in public services through real‑time translation and speech interfaces. Generative AI can boost public‑sector efficiency by automating fragmented, knowledge‑intensive processes; focus is on task delegation rather than language‑specific services.
Jarek highlights AI’s role in translating legislation and enabling real-time multilingual conversations as a core public-service need [29-35]. Arthur concentrates on a generic delegation platform (Vibe) that automates processes without explicit emphasis on multilingual capabilities, suggesting a different priority [127-129].
POLICY CONTEXT (KNOWLEDGE BASE)
Highlights the strategic importance of multilingual AI for inclusion, as emphasized in multilingual AI research and policy briefs [S59][S60][S61].
Unexpected Differences
Availability of free AI tools versus the need for deeper organisational integration
Speakers: Roberto Viola, Arthur Mensch
A suite of free AI tools (Mistral, DPL, Destination Earth) is already available for public use, offering capabilities from language translation to climate digital twins. The challenge and the reason why you don’t see productivity gains when you deploy chatbots in enterprise is that basically you’re focusing on an individual productivity gain; true gains require full process automation and organisational redesign.
Roberto emphasizes that free, publicly accessible AI services are sufficient for immediate public-sector use [82-90]. Arthur counters that simply providing tools (e.g., chatbots) does not deliver productivity unless they are embedded in re-engineered processes and accompanied by reskilling, indicating a mismatch between tool availability and effective adoption [127-129]. This contrast was not anticipated given the shared goal of expanding AI use.
POLICY CONTEXT (KNOWLEDGE BASE)
Tension between open access and the necessity for organisational change and data integration is noted in discussions on AI access and integration challenges [S52][S63][S66].
Overall Assessment

The panel largely shares a common vision that AI can transform public services, but they diverge on the primary mechanisms to achieve this: Arthur stresses process automation and human reskilling; Roberto highlights the need for structural policy redesign and warns of the Solow productivity paradox; Lucilla and Roberto differ on the role of policy versus AI‑driven bureaucrats; Jarek prioritises multilingual AI capabilities while Arthur focuses on generic delegation tools. These disagreements are more about emphasis and implementation pathways than about the end goal.

Moderate – while there is broad consensus on the desirability of AI‑enabled public services, the speakers disagree on the most effective levers (reskilling vs policy redesign vs multilingual focus) and on how quickly free tools can be leveraged. The implications are that coordinated strategies will need to reconcile these perspectives to avoid fragmented efforts and to ensure that capacity‑building, policy, and technology development are aligned.

Partial Agreements
All four speakers agree that capacity – whether technical (supercomputers), organisational (reskilling), or tool‑based (free AI services) – is a prerequisite for successful AI adoption, but they differ on where the priority investment should lie: collaborative capacity‑building between India and the EU (Speaker 1) [1-2]; large‑scale process automation and human‑skill transformation (Arthur) [141-148]; expanding high‑performance computing infrastructure (Matteo) [50-53]; and making free AI tools publicly available (Roberto) [82-90].
Speakers: Speaker 1, Arthur Mensch, Matteo Valero, Roberto Viola
Building technical and institutional capacity is essential for the effective deployment of emerging technologies. Realising AI‑driven productivity gains requires process automation, organisational redesign and large‑scale reskilling of civil servants. Europe’s EuroHPC supercomputing infrastructure provides the computational backbone needed for AI‑driven scientific and public‑sector innovation. A suite of free AI tools (Mistral, DPL, Destination Earth) is already available for public use, offering capabilities from language translation to climate digital twins.
Both speakers stress the importance of partnerships between public institutions and private/foreign actors to scale AI, but Arthur focuses on research collaborations to accelerate innovation [204], while Matteo emphasizes joint investment projects and strategic alliances with Indian institutes [205-207].
Speakers: Arthur Mensch, Matteo Valero
Public‑private research partnerships are crucial to accelerate AI innovation for societal benefit. Investing further in AI‑focused infrastructure and fostering Europe‑India alliances will amplify the impact of AI on society.
Takeaways
Key takeaways
AI can significantly improve public‑sector efficiency by automating fragmented, multi‑step processes, but real productivity gains require redesigning workflows and delegating tasks to AI agents.
Multilingualism in India and the EU can be addressed with frontier language models and translation tools (e.g., DeepL), enabling real‑time written and spoken interactions with citizens.
Europe’s EuroHPC supercomputers, AI factories, and gigafactories provide the compute backbone needed for large‑scale public‑sector AI applications; making these resources openly available lowers entry barriers.
The Solow (productivity) paradox highlights that simply investing in IT/AI does not automatically raise productivity; alignment of policy, organisational change, and skill development is essential.
Reskilling the public‑sector workforce to become effective delegators of AI tasks is critical; current education and training systems do not adequately prepare staff for AI‑augmented roles.
Public‑private partnerships and international collaboration (especially EU‑India alliances) are seen as key accelerators for building capacity, sharing infrastructure, and co‑creating AI solutions.
Open‑source and freely accessible models (Mistral, Destination Earth, DeepL) are strategic assets for governments that lack large budgets but need advanced AI capabilities.
Resolutions and action items
– Propose a formal EU‑India collaboration framework to share AI infrastructure, supercomputing capacity, and joint research projects (suggested by Speaker 1, Matteo Valero, and Roberto Viola).
– Encourage AI firms (Mistral, DeepL) to expand free‑to‑test offerings for public‑sector use cases and to co‑design pilots with government agencies (Arthur Mensch, Jarek Kutylowski).
– Develop a reskilling programme focused on AI delegation and workflow redesign for public‑sector employees, leveraging existing academic and industry expertise (Arthur Mensch).
– Create policy guidelines that prevent “digital bureaucracy” by mandating process redesign alongside AI deployment, and that support the use of AI‑generated regulatory bots (Roberto Viola).
– Set up joint AI‑factory / gigafactory hubs that co‑locate compute resources, AI talent, and technology‑transfer teams to serve both European and Global‑South administrations (Matteo Valero).
– Launch pilot projects in specific domains such as procurement, employment matching (e.g., France Travail), and multilingual citizen services to demonstrate measurable efficiency gains.
Unresolved issues
– How to quantitatively measure productivity gains from AI in the public sector and resolve the Solow paradox in practice.
– Specific governance mechanisms for AI‑driven bureaucratic agents, including accountability, transparency, and legal liability.
– The extent and timeline for reskilling large public‑sector workforces; no concrete curriculum or funding plan was detailed.
– Data privacy and sovereignty concerns when using open‑source models and cross‑border AI infrastructure, especially for sensitive citizen data.
– Integration challenges between legacy IT systems and new AI platforms; detailed migration pathways were not defined.
– Long‑term sustainability and financing models for AI factories and supercomputing resources in the Global South.
Suggested compromises
– Combine AI automation with selective human oversight rather than full replacement, allowing gradual transition and maintaining trust (Arthur Mensch, Jarek Kutylowski).
– Leverage open‑source, freely available AI models to lower cost barriers while still encouraging private‑sector innovation and customisation (Roberto Viola).
– Adopt a phased approach: start with low‑risk, high‑impact pilots (e.g., translation services, procurement assistance) before scaling to more complex regulatory or citizen‑interaction processes.
– Encourage joint public‑private research initiatives that share infrastructure costs and expertise, balancing public interest with commercial viability (Arthur Mensch, Matteo Valero).
Thought Provoking Comments
The model itself is never enough to provide value for the state; we start by working backward from concrete use‑cases such as procurement or job‑matching and focus on efficiency gains through automation.
Highlights a pragmatic, use‑case‑first approach rather than a technology‑first mindset, emphasizing that AI must solve real administrative pain points to be useful.
Set the agenda for the discussion by steering it toward concrete public‑sector applications, prompting other panelists to frame their contributions (e.g., multilingual translation, supercomputing resources) around specific problems rather than abstract capabilities.
Speaker: Arthur Mensch
Multilingualism should not be seen as a problem but as a beautiful feature of societies; AI can bridge the communication gap with real‑time spoken and written translation, even for complex tasks like translating legislation.
Reframes a perceived obstacle (language diversity) as an opportunity, expanding the conversation to include citizen engagement and the nuanced challenges of legal translation.
Shifted the tone from viewing language diversity as a barrier to exploring AI‑driven solutions, leading the moderator to ask about practical implementations and prompting Roberto to mention open‑source language tools.
Speaker: Jarek Kutylowski
The AI Factory is a free, co‑located platform of hardware, software and skilled people that connects directly with society; it aims to make AI services and expertise openly available to citizens and administrations.
Introduces the concept of an “AI factory” and “gigafactory” as a public‑good infrastructure model, linking supercomputing capacity with societal impact and open access.
Provided a concrete European infrastructure model that other speakers referenced when discussing capacity building, influencing Roberto’s later remarks on policy and ecosystem self‑reliance.
Speaker: Matteo Valero
The Solow paradox shows that more IT investment often yields no productivity gain because new tools overlap existing processes; AI can break this paradox by creating entirely new man‑machine workflows rather than just digitising the old ones.
Challenges the conventional belief that digitalisation automatically improves productivity, introducing a nuanced economic perspective and the need for fundamentally new processes.
Created a turning point that moved the discussion from technology deployment to organizational redesign, prompting Arthur to elaborate on delegation and reskilling as essential for real productivity gains.
Speaker: Roberto Viola
Productivity gains only appear when AI is used to delegate whole processes, not just to help an individual write a faster email; this requires rethinking organization, training managers to become delegators, and reskilling staff.
Deepens the conversation about human factors, emphasizing that AI’s value lies in collective workflow automation and cultural change, not in isolated efficiency tweaks.
Led to a deeper exploration of workforce transformation, with Jarek and Roberto echoing the need for organizational redesign and policy support, and highlighted the importance of training and delegation skills.
Speaker: Arthur Mensch
We must redesign the bureaucracy itself—turn the citizen‑to‑office model on its head so the state reaches out to citizens via AI agents, push notifications, and attestations, effectively making the citizen the manager of the bureaucracy.
Proposes a radical re‑engineering of public administration, moving beyond incremental AI adoption to a paradigm shift in how services are delivered and governed.
Served as a concluding visionary statement that broadened the discussion from technical implementation to systemic policy innovation, inspiring final remarks about multi‑future scenarios and the need for Europe‑India partnership.
Speaker: Roberto Viola
AI is a dual‑use technology; Europe should invest more, define common projects, and forge an alliance with India, leveraging existing supercomputing collaborations to drive joint innovation.
Links strategic geopolitical collaboration with technical capacity, emphasizing the importance of international alliances for AI development and deployment.
Reinforced the opening call for EU‑India cooperation, giving the discussion a concrete policy direction and prompting the moderator’s final question about future collaboration.
Speaker: Matteo Valero
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a high‑level enthusiasm about AI to a nuanced roadmap for public‑sector transformation. Arthur Mensch’s use‑case focus and delegation framework grounded the conversation in practical implementation, while Jarek Kutylowski reframed multilingualism as an opportunity, expanding the scope to citizen engagement. Matteo Valero introduced the AI factory model, providing a tangible infrastructure backbone, and Roberto Viola’s reference to the Solow paradox forced the panel to confront the limits of mere digitisation, steering the dialogue toward organizational redesign and policy innovation. Subsequent comments on reskilling, bureaucratic re‑engineering, and EU‑India partnership built on these foundations, culminating in a shared vision of multiple possible futures for AI in governance. Collectively, these thought‑provoking remarks redirected the tone from speculative optimism to actionable strategy, highlighting the intertwined roles of technology, workforce, and policy.

Follow-up Questions
How can capacity be built for AI technologies to be applied more effectively, particularly through India‑EU collaboration?
Establishing joint frameworks and resources is crucial for scaling AI adoption in the Global South and ensuring equitable benefits.
Speaker: Speaker 1
What approaches can AI language models use to overcome multilingualism challenges in public administration?
Multilingual societies need reliable translation and real‑time communication tools to ensure inclusive citizen services.
Speaker: Lucilla Sioli
What is the exact role of the AI Factory (and gigafactory) and how can it transform the public sector and SMEs?
Understanding the operational model, governance, and service delivery of AI Factories is essential for scaling AI infrastructure and fostering innovation.
Speaker: Lucilla Sioli, Matteo Valero
How should policy be designed to facilitate AI uptake in the public sector and ensure real productivity gains?
Policy must address bureaucratic inertia, data sharing, and regulatory frameworks to translate AI capabilities into measurable outcomes.
Speaker: Lucilla Sioli, Roberto Viola
What tools and strategies can increase citizen and public‑administration acceptance of AI beyond simple chatbots?
Identifying user‑friendly, trustworthy interfaces and demonstrating clear value is key to broad adoption and trust.
Speaker: Lucilla Sioli, Arthur Mensch
How likely is the acceptance of AI agents in public administration given existing challenges, and what factors influence it?
Assessing organizational readiness, workflow redesign, and cultural factors will determine successful integration of AI agents.
Speaker: Lucilla Sioli, Jarek Kutylowski
Which public‑sector applications (e.g., health‑care, climate modelling) are most in demand for AI development, and how should they be prioritized?
Demand‑driven use cases ensure resources are allocated to areas with the highest societal impact.
Speaker: Lucilla Sioli, Matteo Valero
How can the ‘Solow paradox’ be investigated to accurately measure AI‑driven productivity improvements in the public sector?
Empirical studies are needed to separate AI benefits from overlapping legacy processes and validate economic impact.
Speaker: Roberto Viola
What reskilling programs are required to turn public‑sector employees into effective delegators and managers of AI‑automated processes?
Workforce transformation is essential; without proper training, AI tools will not deliver collective productivity gains.
Speaker: Arthur Mensch
How can AI be used to redesign bureaucratic processes, including the creation of regulation‑generating or correcting bots?
Automating regulatory tasks could dramatically increase efficiency, but requires careful design to maintain accountability.
Speaker: Roberto Viola
What steps are needed to establish a Europe‑India AI alliance covering supercomputing resources, joint research projects, and shared standards?
A strategic partnership would pool expertise, infrastructure, and data, accelerating innovation for both regions.
Speaker: Matteo Valero, Roberto Viola
How can open‑source, open‑model digital identity frameworks be developed to give citizens control over their attributes and interactions with the state?
Secure, citizen‑centric identity solutions are foundational for trustworthy AI services and decentralized public services.
Speaker: Roberto Viola
What are the best practices for implementing AI‑driven procurement processes that shift from individual to collective productivity gains?
Procurement is a high‑impact area; successful automation can serve as a model for broader public‑sector transformation.
Speaker: Arthur Mensch
How can climate digital twins like Destination Earth be scaled and integrated into public‑sector decision‑making?
Leveraging high‑resolution climate simulations can improve policy planning, but requires accessible platforms and validation.
Speaker: Roberto Viola

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Scaling Enterprise-Grade Responsible AI Across the Global South


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel at the AI Impact Summit examined how India and the Global South can adopt trustworthy AI, focusing on guardrails, regulation, and sovereign models [9-10]. Sunita introduced the theme of connecting India with the global south and asked the chief AI officer about trust frameworks [9].


Babak emphasized that AI systems need balanced safeguards, combining human-in-the-loop and agentic oversight, while noting the lack of standards for third-party agent identities and the risk of both over- and under-regulation, especially as sovereign LLM initiatives emerge in India [11-24][28-32]. He warned that continuous reasoning can lead to trivial mistakes and that redundancy and uncertainty assessment are needed to decide when to involve humans [11][16-18].


Anupam highlighted that models trained on clean data perform poorly on the noisy, multilingual data typical of the Global South, proposing synthetic data generation, federated learning and privacy-aware techniques to improve robustness [41-44]. Amod argued that responsible AI must start with sustainable data-center design, using liquid cooling and energy-per-token KPIs to make AI infrastructure environmentally sound [50-54].


Tanvi argued that true sovereignty means building domain-specific LLMs that keep data and cognition under local control, enabling ROI for regulated sectors [64-88]. She noted that controlling model outputs and avoiding hallucinations is essential for banking, healthcare and other regulated industries [76-80]. Balaji described Flipkart’s fairness strategy, which relies on high-quality data, strict access controls, encryption, and transparent bot interactions that default to opt-out for users [100-112][113-118][129-135]. He explained that Flipkart uses a mixture-of-experts architecture, routing generic queries to large foundation models and domain-specific SLMs for localized pricing and recommendations [225-238].


Babak later recommended creating publicly available processing capacity and sovereign sandboxes to let academia, startups and regulators experiment safely, avoiding both regulatory overreach and neglect [145-165]. Participants pointed to India’s proactive steps such as provisioning 60,000 GPUs and adopting modular, future-proof data-center designs to support AI scaling [166-170][177-184]. The summit’s scale and cross-sector collaboration were praised as evidence that India is uniquely positioned to lead trustworthy AI deployment across the Global South [252-259][306-312].


Keypoints


Major discussion points


Guardrails and trust frameworks for AI deployments – Babak emphasized the need for balanced safeguards, including human-in-the-loop/on-the-loop, uncertainty assessment, and mechanisms to verify the identity of third-party agents, while warning against both over- and under-regulation [11-18][20-24][28-32].


Technical and data challenges unique to the Global South – Anupam highlighted that models trained on clean data fail on noisy, multilingual, and intermittent-compute environments; he advocated synthetic data generation, noise-robust training, federated learning, and privacy-preserving model merging to make AI trustworthy in such settings [34-43][44].


Sustainable AI infrastructure as a foundation for responsible AI – Amod described how responsible AI starts with data-center design: liquid cooling, modular and flexible architectures, and clear energy-per-token KPIs to ensure scalability, reliability, and low environmental impact [50-54][177-185].


Sovereignty, domain-specific models, and ROI – Tanvi argued that true AI sovereignty comes from building locally-controlled, domain-specific LLMs (SLMs) that avoid reliance on foreign “frontier” models, thereby satisfying regulatory model-risk requirements and delivering measurable ROI for enterprises and governments [64-71][74-88][85-89].


Operationalizing AI at Internet scale (e-commerce) – Balaji explained how Flipkart ensures fairness, data quality, and security through access controls, mixture-of-experts architectures, and an agentic orchestration framework that routes tasks to either large foundation models or specialized SLMs, while maintaining transparency about bot interactions [98-108][110-118][127-136][225-236].


Overall purpose / goal of the discussion


The panel aimed to explore how India and the broader Global South can adopt and scale AI responsibly: establishing effective guardrails, addressing data and infrastructure constraints, fostering AI sovereignty, and translating research into practical, high-impact applications across sectors such as finance, healthcare, and e-commerce.


Overall tone and its evolution


The conversation began with a cautiously optimistic tone, acknowledging AI hype and the risks of both mistrust and over-regulation. As speakers moved into technical details, the tone shifted to pragmatic problem-solving, offering concrete methods (synthetic data, modular data-centers, domain-specific models). Toward the end, the tone became enthusiastic and celebratory, highlighting India’s progress, the scale of the summit, and a collective sense of pride and momentum for future AI initiatives.


Speakers

Sunita Mohanty


– Role/Title: Managing Director, Primus Partners


– Areas of Expertise: AI strategy, responsible AI, AI policy, conference moderation


Babak Hodjat


– Role/Title: Chief AI Officer, Cognizant (as referenced in the opening)


– Areas of Expertise: AI safety, guardrails, agentic systems, AI governance, regulatory frameworks


– Citations: [S4]


Tanvi Singh


– Role/Title: AI Transformation Leader, involved with sovereign LLM initiatives (e.g., Vatican, New York City) – partner at ECTA


– Areas of Expertise: Sovereign large language models, domain-specific AI, AI for education, AI governance, AI-driven personalization


– Citations: [S7], [S8], [S9]


Anupam Chattopadhyay


– Role/Title: Researcher/Academic (focus on deep-fake detection, synthetic data, federated learning)


– Areas of Expertise: Computer vision, deep-fake detection, synthetic data generation, federated learning, AI security, AI ethics


– Citations: [S10], [S11], [S12]


Balaji Thiagarajan


– Role/Title: Senior Executive, Flipkart (lead for AI/ML and marketplace trust)


– Areas of Expertise: Large-scale consumer AI, fairness, personalization, data security, marketplace trust, agentic orchestration frameworks


– Citations: [S13], [S14], [S15]


Amod Kabade


– Role/Title: Leader at SubMod (AI infrastructure and data-center solutions)


– Areas of Expertise: Sustainable AI infrastructure, liquid cooling, modular data-center design, energy-efficient AI compute, AI-factory architecture


– Citations: [S1], [S2]


Additional speakers:


Mr. Farnovi – Mentioned briefly by Sunita; no role or expertise detailed.


Nandan Nilekani – Cited by Balaji in an example; no role or expertise detailed.


Amol – Referred to by Sunita (likely a mis-address to Amod Kabade); no separate speaker role.


Full session report: comprehensive analysis and detailed insights

Babak Hodjat – Guardrails & Public Compute


Sunita Mohanty opened the closing session of the AI Impact Summit, thanking the audience, panelists and organisers, and noting that the summit had begun with its inaugural session on the 16th and was now concluding, with the aim of “connecting India and the global south” on trustworthy AI [1-4][5-10]. She then asked Babak Hodjat, Chief AI Officer at Cognizant, about the guard-rails and trust frameworks organisations are building for mission-critical AI in sectors such as banking and healthcare [9-10].


Babak answered that AI’s promise and its risks are both real, so balanced safeguards are essential; neither blind trust nor total scepticism is acceptable [11]. He described a suite of techniques: human-in-the-loop or human-on-the-loop oversight, agents that monitor each other, and explicit uncertainty assessment that triggers human intervention [15-18]. He warned that continuous reasoning can accumulate trivial errors after many steps, a failure mode that must be mitigated through redundancy and error-correction [11-13]. Regarding the emerging multi-agent ecosystem, he noted the lack of robust standards for verifying third-party agent identity, a gap that Google’s A2A work is beginning to address [20-26]. He concluded with a call for balanced regulation, cautioning against both over- and under-regulation as India pursues sovereign large-language models (LLMs) [28-32].
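
The uncertainty-gated escalation described above can be sketched as a simple routing rule. Everything below (the self-reported confidence field, the 0.85 threshold, the routing labels) is a hypothetical illustration of the general technique, not Cognizant's actual framework:

```python
# Minimal sketch of uncertainty-gated human-in-the-loop routing:
# an agent's output carries its own certainty estimate, and outputs
# below a chosen threshold are escalated to a human reviewer.
from dataclasses import dataclass


@dataclass
class AgentOutput:
    answer: str
    confidence: float  # agent's own certainty estimate in [0, 1]


def route(output: AgentOutput, threshold: float = 0.85) -> str:
    """Accept confident outputs automatically; escalate uncertain ones."""
    if output.confidence >= threshold:
        return "auto-accept"
    return "escalate-to-human"


# A confident answer passes; a low-confidence one goes to a reviewer.
print(route(AgentOutput("Approve transaction", 0.95)))  # auto-accept
print(route(AgentOutput("Approve transaction", 0.40)))  # escalate-to-human
```

In practice the certainty signal could come from ensemble disagreement or a second "checker" agent rather than the model's own estimate; the routing logic stays the same.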


Anupam Chattopadhyay – Synthetic Data & Federated Learning


Anupam highlighted that many AI models are trained on clean, well-curated data but perform poorly on the noisy, multilingual, intermittently-connected environments typical of the Global South. Using a deep-fake detection project, he showed accuracy collapse when the model encountered noisy audio or images from diverse regions [41-42]. To address this, his team creates synthetic datasets with tunable noise, scrapes additional data from the web, and builds an automatic fact-checking pipeline that cross-references news from trusted sources [41-42]. Because high-performance compute is scarce, they employ federated learning and mixture-of-experts techniques to merge proprietary models while preserving data and model privacy [42-44].
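
The "tunable noise" idea can be sketched as a small augmentation function. The additive white-Gaussian noise model and the SNR parameterisation below are illustrative assumptions, not the research group's actual pipeline:

```python
# Sketch of tunable-noise augmentation for robustness training:
# corrupt a clean signal with Gaussian noise at a chosen signal-to-noise
# ratio (SNR); sweeping the SNR yields a synthetic dataset spanning
# clean to heavily degraded conditions.
import math
import random


def add_noise(signal, snr_db, rng=None):
    """Return the signal plus white Gaussian noise at the given SNR (dB)."""
    rng = rng or random.Random(0)
    power = sum(x * x for x in signal) / len(signal)
    noise_power = power / (10 ** (snr_db / 10))
    sigma = math.sqrt(noise_power)
    return [x + rng.gauss(0.0, sigma) for x in signal]


# A clean 440 Hz tone at 8 kHz sampling, corrupted at three SNR levels;
# lower SNR means heavier corruption.
clean = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(800)]
for snr in (30, 10, 0):
    noisy = add_noise(clean, snr)
    print(snr, len(noisy))
```

A detector trained on such sweeps sees the same content under many corruption levels, which is one way to address the accuracy collapse on noisy field data that the paragraph describes.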


Amod Kabade – Sustainable & Modular Data-Centre Design


Amod advocated liquid-cooling technologies to reduce cooling overheads and suggested quantitative KPIs such as “energy-per-token” or “water-per-token” to incentivise efficient operation [50-54]. He further stressed that data-centre designs need to be modular, flexible and future-proof, allowing easy integration of newer AI chips that generate more heat and demand higher density [177-184]. By treating the data-centre as a set of interchangeable modules (electrical, mechanical and IT), organisations can achieve long-term reliability and lower carbon footprints [185].
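
An "energy-per-token" KPI of the kind suggested here is straightforward arithmetic; the facility figures in this sketch are invented for illustration and are not SubMod's or any data centre's real numbers:

```python
# Back-of-the-envelope energy-per-token KPI: facility energy consumed
# (kWh) divided by tokens served, reported in watt-hours per token.

def energy_per_token(facility_kwh: float, tokens_served: float) -> float:
    """Watt-hours consumed per generated token."""
    if tokens_served <= 0:
        raise ValueError("tokens_served must be positive")
    return facility_kwh * 1000 / tokens_served  # convert kWh to Wh


# Hypothetical month: 1.2 GWh of facility energy, 400 billion tokens.
kpi = energy_per_token(1_200_000, 400e9)
print(f"{kpi:.4f} Wh/token")  # 0.0030 Wh/token
```

Because the metric uses total facility energy (compute plus cooling), design choices like liquid cooling show up directly in the number, which is what makes it usable as an operational incentive.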


Tanvi Singh – AI Sovereignty & Domain-Specific Models


Tanvi explained that true AI sovereignty means building domain-specific LLMs trained on locally owned data in native languages, eliminating translation bottlenecks and giving organisations control over the cognition that regulators audit [64-71][74-80][85-89]. She linked this to the Model Risk Management framework used in banking, arguing that without such control the technology cannot pass regulatory scrutiny [76-78]. She illustrated the approach with concrete collaborations: trust-building work with the Vatican’s literature and a hyper-personalised education pilot in New York City [200-208]. Deploying “Domain Specific Models” that are not tied to foreign ecosystems, she argued, enables measurable ROI while respecting data-locality and compliance [81-88][89].


Balaji Thiagarajan – Responsible AI at Scale & Agentic Orchestration


Balaji described Flipkart’s operationalisation of responsible AI at Internet scale. He said fairness spans pricing, product quality and after-sales service, all of which depend on high-quality data and strict access-control policies [100-108][110-112]. Data in motion is protected through encryption, and the modelling layer uses a mixture-of-experts architecture: large foundation models handle generic intent detection, while smaller, region-specific SLMs provide precise pricing and catalogue generation for Indian demographics [225-238][119-124]. He also detailed the agentic orchestration framework that routes each query either to a generic LLM or to a region-specific SLM based on the task, ensuring optimal performance and compliance [260-268]. To maintain user trust, Flipkart’s customer-service agents are presented as co-pilots with a default opt-out disclosure; users must explicitly opt in to interact with a bot, a practice Balaji said is essential for transparency and compliance [127-135][136-137].
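
The routing idea above can be sketched as a toy dispatcher: a cheap intent check decides whether a query goes to a large generic model or a domain-specific small model. The keywords and model names are invented for illustration and are not Flipkart's implementation:

```python
# Toy sketch of LLM/SLM query routing: domain-flavoured queries are
# dispatched to a (hypothetical) regional small language model, while
# everything else falls through to a generic foundation model.

DOMAIN_TERMS = ("price", "pricing", "discount", "delivery", "refund")


def classify_intent(query: str) -> str:
    """Crude keyword-based intent check standing in for a real classifier."""
    q = query.lower()
    return "domain" if any(term in q for term in DOMAIN_TERMS) else "generic"


def route_query(query: str) -> str:
    """Return the (hypothetical) model this query would be dispatched to."""
    if classify_intent(query) == "domain":
        return "regional-slm"
    return "foundation-llm"


print(route_query("What is the delivery time to Pune?"))  # regional-slm
print(route_query("Tell me about this phone's camera"))   # foundation-llm
```

A production router would replace the keyword check with a learned classifier or the foundation model's own intent detection, but the dispatch structure is the same.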


Policy Levers & Public Compute


When asked about concrete policy levers, Babak proposed two complementary measures. First, a publicly available processing-capacity platform that would democratise access to compute resources currently concentrated in a few large firms [145-152]. Second, a sovereign sandbox where startups, academia, regulators and entrepreneurs can safely experiment with agentic systems and co-develop appropriate regulations [155-165]. He stressed that the government’s role should be to nurture an ecosystem rather than to build every AI stack itself [156-158][162-164]. Sunita noted that India has already begun to materialise this vision by provisioning 60,000 GPUs to states and institutions, enabling the creation of open-source sovereign LLMs [166-170].


AI-in-a-Box Question


Sunita also raised a forward-looking question about an “AI-in-a-box” modular infrastructure that could enable synthetic-data creation for students and researchers across the Global South, highlighting its potential to democratise access to AI tools [90-93].


Consensus Points


All speakers agreed that balanced guard-rails (human-in-the-loop oversight, uncertainty quantification and clear user disclosure) are indispensable to avoid both blind reliance and excessive rubber-stamping [11-18][33-34][127-135]. There was broad agreement on the need for publicly accessible compute resources and sandbox environments to democratise innovation [145-165][166-170], and on the importance of sustainable, modular data-centre designs with measurable energy KPIs [50-54][177-185].


Complementary Approaches


The panel highlighted two complementary approaches: Babak emphasized publicly available compute resources and sandbox environments [145-152][155-165], while Tanvi stressed building sovereign, domain-specific models that reduce dependence on external LLMs [80-87].


Actionable Take-aways


– Implement guard-rails that combine human oversight with automated uncertainty checks.


– Develop standards for agentic identity verification.


– Establish public compute platforms and sovereign sandboxes.


– Build synthetic data pipelines with tunable noise and federated-learning workflows for heterogeneous, multilingual data.


– Accelerate domain-specific sovereign LLMs for low-resource languages.


– Adopt liquid-cooling, modular data-centre designs and KPI-based incentives.


– Enforce high-quality data, strict access controls, encryption and transparent bot disclosures in large-scale e-commerce platforms.


These suggestions reflect the collective recommendations of the speakers [145-165][166-170][177-185][41-44][80-89][100-108][110-118][127-135].


Conclusion & Future Work


The summit showcased India’s unique position, grounded in a strong service-based IT ecosystem, a proactive AI mission that has already distributed massive GPU resources, and a vibrant startup community, to lead responsible AI deployment across the Global South. The panel indicated that future work should focus on translating these consensus points into concrete policies, standards and public-private partnerships that can sustain the momentum generated over the past week [300-312].


Session transcript: complete transcript of the session
Sunita Mohanty

Thank you very much. Thank you everyone, and thank you to our esteemed panelists and everyone who’s come here braving the traffic. I know it’s the fag end of the AI Impact Summit and, as people were saying, they’ve heard so much about AI this week that they could decompress for the next one month and not hear about it. However, we can’t wish it away because it’s a very significant part of our life. I’m Sunita Mohanty, Managing Director at Primus Partners, and it’s a pleasure being here. We started with the inaugural session on the 16th and we are ending with a session today. So it’s a very significant moment for us to be here today. So I’m going to quickly ask; we have a really good set of panelists here, so I’m going to start with you, Babak.

So we’ve been talking a lot, we’ve been attending a lot of sessions, people are talking about what is real in AI and today’s topic is about how do we connect India and the global south and what are some of the guardrails we can build here and especially as the chief AI officer at Cognizant, you’re seeing how AI is really impacting real life and enterprises are really moving to delivery architectures and operating models in AI in mission critical infrastructures like banking and healthcare. So from your point of view, what are you seeing as the guardrails and the trust frameworks that organizations are creating to make sure that these are safe and what would your advice be for India and the global south, what kind of frameworks should be adopted?

Babak Hodjat

Yeah, AI is real, and both the promise and the risk are real, and so guardrails are needed. We can’t fall off either ledge: trusting AI blindly, or mistrusting it to the point where we’re debilitated by basically having, you know, a human rubber-stamp every single step, or the other way, basically thinking that it’s, you know, some magic pixie dust that you just pour over your organization and then turn it on and it’s AI-enabled. So guardrails are important, and there are different ways; there’s no panacea to ensure the safety as well as the reliability of these systems. One of the biggest risks is this notion that, because the AI systems respond and reason very well after one or two reasoning steps, we can allow them to just continuously reason; they do make mistakes, even very trivial mistakes, after several hundred reasoning steps.

So, we’ve been here before, for example, with telecommunications, where a bit might flip when a truck is driving down the road. And so we know how to error-correct through redundancy and through other means. We know how to engineer systems that are reliable. And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agents in the loop and on the loop: checking other agents’ work, assessing uncertainty in an agent’s output, and deciding not to take its output at face value, basically taking the output, as well as the agent’s own measure of certainty in it, as a measure of whether or not we bring a human in.

So these are just some techniques, but there are a multitude of techniques that can be used. There is also increasingly this issue of agentic identity. When you’re building a system fully in-house for your own use, then you pretty much have control over the agents. You know which agent is talking to which agent, and they’re all built in-house. But increasingly we’re moving into a world where you have agents from third parties, maybe another business, maybe your consumer represented by an agent in a B2C setting, coming in and talking to your agents. How do you assess, how do you determine the identity of this agent? We don’t really have very well-established standards for that just yet.

I know our friends at Google are working on that in A2A, and there are other standards coming out, but it’s still not well-established. So there are risks external to your agentic systems as well. So I just listed a whole bunch of different areas. When it comes to India, I know that there’s talk about, for example, building these systems within India, like sovereign LLMs to back the agentic systems. Regulation does play a part. Again, there’s a risk of over-regulating versus under-regulating. I think it’s important, again, not to fall off the ledge from one side or the other. I have opinions on that too but I’ve realized I’m talking too long so I should

Sunita Mohanty

Thank you, Babak, really good points. I know you have a very difficult job at Cognizant, but the two things you mentioned stand out: keeping agents and humans in the loop. Throughout this week we have also heard a lot of people talk about having humans at the center of everything you build, which is amazing. This morning we also heard about regulation versus innovation: the US is at the point of innovation, Europe is at the point of regulation, and where do India and the Global South stand? With that, I’ll move to the academician’s point of view. Anupam, to you next. Much of responsible AI research still assumes that data is clean and that there is stable infrastructure, but that’s not true of the Global South, because we operate here with very heterogeneous data, intermittent compute access and multilingual environments. So, from a research perspective, what are some of the new technical directions, hardware-aware AI, robust architectures, evaluation models, that are needed to make AI really trustworthy?

Anupam Chattopadhyay

I think this is a very important question. We do see the scale and pace of innovation; it is moving at such a high rate that it’s not always easy to take a back seat and treat research as a standalone component that matures and then goes to industry. People are releasing tools, and things are going out of hand very quickly. For that reason, what we have figured out is that it’s always good to keep the research very grounded and to test the waters against some real-world scenario. One example I’ll pick up here is a spin-off from our research group that works on deepfake detection. There we are facing exactly these problems: the models we begin with, when we start training them, show very poor results when we test subjects from the Global South, whether for images or for audio, or when we put the audio in circumstances where there is a lot of noise.

That is because they were trained and tested on very clean data, and under a noisy atmosphere the accuracy of the detection falls. It’s a huge concern, because people are not always educated about this either; there is already a digital barrier, and on top of that an AI barrier is coming up, which makes people fall prey to a lot of cyber scams very easily. That’s the bad side of AI that we are observing, and we are trying to defend against it. The technologies we are trying to bring in include, first, creating synthetic datasets, so we have tunable noise addition on top of the data, and then collecting as much data as possible, say by scraping the internet. But for deepfakes this brings a different problem: you see a video or an image, and from a human point of view it’s not even discernible whether it’s a deepfake or not; it looks so original. So we had to create a separate automatic fact-checker, which looks at whether there is a news item linking that image to something, and whether the news comes from a trustworthy source. Only then do we call it an original image; otherwise it is flagged as fake.
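The “tunable noise addition” Anupam mentions could look roughly like the sketch below: degrading clean training samples at a controllable signal-to-noise ratio so a detector does not fail the moment it meets noisy real-world data. The additive-Gaussian noise model and the `snr_db` parameter are assumptions made for illustration, not a description of his group’s actual pipeline.

```python
# Sketch: augment clean audio/image samples with noise at a chosen SNR (dB).
import numpy as np

def add_noise(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add white Gaussian noise so the result has roughly the requested SNR."""
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

clean = np.sin(np.linspace(0, 8 * np.pi, 1000))  # stand-in for a clean sample
noisy = add_noise(clean, snr_db=10)              # degraded training copy
```

Training on a mix of `clean` and `noisy` copies at several SNR levels is one standard way to make detection accuracy degrade gracefully instead of collapsing.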

So that is the data collection issue. But when it comes to the implementation aspects, of course not everyone has access to high-performance computing, and we have to cut the data or the models back to the bare minimum. There we resort to techniques such as a mixture of experts: there are different models with different deepfake-detection capabilities, and we put them together. Sometimes the models are proprietary and we want to take one from a particular vendor and merge them, or an organization has its own contextual model but doesn’t want to share the model as it is. For that we have techniques like federated learning, for merging the models while still guaranteeing that their training data and their models will never be leaked.

So it’s a privacy-aware way of building this up. We do have all the technologies and tools; that was just a short glimpse of them.
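The federated merging Anupam alludes to is often done with federated averaging (FedAvg): each party shares only model weights, never its training data, and a coordinator merges them by a weighted average. The toy sketch below uses plain lists of floats as “weights” purely for illustration.

```python
# Toy FedAvg sketch: merge per-client model weights without sharing any data.
def fedavg(client_weights, client_sizes):
    """Average client weight vectors, weighted by each client's dataset size."""
    total = sum(client_sizes)
    merged = [0.0] * len(client_weights[0])
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * (size / total)
    return merged

# Two organisations with different data volumes merge their local models.
merged = fedavg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[100, 300])
print(merged)  # [2.5, 3.5]
```

Real deployments add secure aggregation or differential privacy on top, since even raw weights can leak information about the training data.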

Sunita Mohanty

Thank you, Anupam. One of the things we are always discussing is that there is not enough data to train the models, and that’s why there was a lot of emphasis during this week on getting language models in so that there is enough Indic-language data, as well as data from across APAC. The other thing is synthetic data, which is very important for keeping the data clean, and one of the conversations we were also having is how to enable the creation of synthetic data in countries like India and the Global South by creating “AI in a box”: a very modular infrastructure that is available to students and researchers in a very small, minimal environment, so they can create some of this data.

So thank you. With that, Amod, I wanted to speak to you about AI infrastructure, given that you are in that business, and that infrastructure is now becoming central to the responsibility debate as well. The IEA estimates that data-centre electricity demand will more than double by 2030, and AI workloads are a key driver. How should enterprises and governments think about responsible AI not just from a model-creation perspective, but at an infrastructure, environment and ecosystem level, in terms of cooling, energy transparency and resilience?

Amod Kabade

So, from our point of view, responsible AI starts from the design of the infrastructure. If my data-centre design is sustainable, that is how I am going to achieve my goals efficiently and sustainably. To do that, today we can leverage liquid-cooling technologies, which minimize the cooling overhead of this infrastructure and allow us to scale AI rapidly for the betterment of people and the planet. As governments, I would say, we need to get to a point where we can define KPIs such as energy consumption per token or water consumption per token for these massive infrastructures, and incentivize the players who actually meet or beat those KPIs.

Essentially it is all about making these data center designs sustainable from the power consumption standpoint and achieving a much better outcome that we want to achieve in terms of AI and its scale.
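The “energy per token” KPI Amod proposes can be made concrete with a back-of-the-envelope calculation. All figures below are made-up inputs for illustration, and folding overheads in via PUE (power usage effectiveness) is an assumption about how such a KPI might be defined, not an established standard.

```python
# Back-of-the-envelope sketch of an energy-per-token KPI for a data centre.
def energy_per_token(facility_kwh: float, pue: float, tokens_served: float) -> float:
    """Wh per token: IT energy scaled by PUE, divided by tokens served."""
    total_kwh = facility_kwh * pue        # PUE folds in cooling/overhead energy
    return total_kwh * 1000 / tokens_served  # convert kWh -> Wh

# Hypothetical month: 2 GWh of IT load, PUE of 1.2, 10 trillion tokens served.
kpi = energy_per_token(2_000_000, pue=1.2, tokens_served=10_000_000_000_000)
print(f"{kpi:.6f} Wh/token")
```

A regulator could then set a threshold in Wh/token (and similarly litres/token for water) and reward operators who come in below it, which is exactly the incentive structure described above.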

Sunita Mohanty

That’s a good point, because Tanvi and I, and Babak, you as well, have all just come from Davos. One of the main conversations there was around energy and how to make it efficient, and in one of the conversations at Bloomberg we did hear about ROI and how to measure the cost of a query on the infrastructure. So I hope that, with renewable energy and efficient cooling systems, we get better and more optimized query capabilities. With that, Tanvi, I wanted to move on to you, especially because you’re now creating sovereign LLMs with the Vatican as well as with New York City. Drawing from your experience of leading AI transformations, how can deep-tech startups in critical sectors like BFSI build advanced functionality across complex regulatory systems? And what’s your sense of the definition of sovereignty, given that it’s a very loosely used term we’re hearing a lot of this week?

Tanvi Singh

Thank you, Sunita, and thank you everyone for having me here. I’m very happy and very excited to come back to my homeland, to Delhi, from Zurich, and the conversation has been very enlightening. We call what India is hosting now, with the AI Impact Summit, Davos 2.0. So thank you for having me. The question is super loaded; I could go on for multiple hours, but let’s pivot into the conversation on ROI. I come from a banking background, having worked for more than a decade in Swiss banking, and the conversation is always: if you’re putting in an investment, what’s the return on it? Whether we are talking about the use of LLMs or the frontier models, we all know what the return on investment for the consumer is.

I think this is the first technology that touched consumers first and enterprises and governments later; we always had technology touch enterprise and government first and consumers later. But here, since the equation has turned lopsided, there are lots of factors that go into ROI. Going back to sovereignty, I think President Trump has really done the marketing and sales for sovereignty, and everybody fends for themselves, whether it comes to defense, or to owning your own infrastructure, your data, and your intelligence and cognition, which we call models. But one factor always resonates, coming from banking: if you cannot control, and are not accountable for, what you present, you will never pass the regulatory bars on using the technology.

So we had this very famous team called Model Risk Management, and we used it for AI/ML for the longest time; I think anybody in banking would resonate with that, and similarly for healthcare and other regulated industries. Now, with the use of LLMs, and I had the opportunity of working very closely with OpenAI during my time at UBS, being one of Microsoft’s primary partners, we have the entire world’s data in ChatGPT and all the other LLMs. There’s no way we can guarantee what output the system is going to throw at you. So control over cognition and intelligence is as important as control over infrastructure; that is paramount, and it gave birth to what we’re now building at Ecta, which is the domain-specific model.

It doesn’t get trained on open web data. It’s not American-first or Chinese-first or French-first, and we’re talking a lot about France and Mistral. It’s your own model. From a sovereignty perspective, it’s important that we can build our own models where data is not a constraint: you can use your own content, your own organization’s or government’s data, in your native language, with no translation required. And you can serve multiple use cases across that domain, which is extremely applicable and hopefully delivers the ROI on the sovereign stacks that different governments and organizations are building for themselves. Because if your model is in your control, you can put it into consumer-facing use cases, not just internal productivity use cases.

And the value of this whole technology for enterprise and government only materializes when the end consumer gets to use it the way retail consumers are using OpenAI’s and Anthropic’s models.

Sunita Mohanty

Thank you so much for that. One of the other things I’ve seen this week at the conference is that protection and guardrails are not only being applied at the model level; there was also a demo of a product where, at the hardware level, they are trying to put in some kind of controls so that there is a brake. So we’ll get to that topic. I wanted to move to you, Balaji, because Flipkart is right in the middle of consumer interactions, very much like our LLM models. How do you operate at population scale, in India and the Global South, in a high-velocity environment? Where does responsible AI collide with business realities?

For example, how do you manage personalization, data security, fairness, and marketplace trust?

Balaji Thiagarajan

Yeah, Sunita, thank you for the question. You’re right, Flipkart operates at pretty much internet scale; we have 500 million users. When you talk about fairness, it cuts across multiple areas, and we’ll talk about sovereignty separately. First, pricing: we have to be fair. Second, the quality of the things we sell in our marketplace has to be good, so that when buyers see something in our applications and marketplaces and buy it, they get exactly what they expect. Third, alongside fairness and pricing there is quality of service. It’s one thing to deliver milk or groceries, but if you’re going to deliver a big piece of equipment like an air conditioner, quality of service is not just about the delivery.

It’s also about helping customers understand how to use the product, how to install it, and how to get after-sales service, and we have companies in the Flipkart group, like Jeeves, that do that. So for us, fairness spans a broad spectrum, from the beginning of the customer journey all the way through servicing the customer over the life of the product. Now, if you think about how we achieve that, it’s not a formula we know exactly how to implement; there is a recipe, what we call a standard operating procedure. It starts with data. We need good-quality data, because if we don’t have that, everything from there on gets diluted further and further.

On top of that good-quality data, access controls, who can access what data, are also very important; that’s where we bring in the security aspects from an access-control perspective. Then, when you’re interchanging data between organizations, between services and so on, it’s not only data at rest, it’s also data in motion, so how do you secure that? That is all about encryption and everything else that goes with it. And then, when we get to the modeling layer at Flipkart, as Anupam mentioned, we use a mixture of experts.

The concept of one world model being able to serve our needs, or for that matter anybody’s needs, at the required specificity, fidelity and accuracy is something I have not seen work. At a broad information level, the LLMs of the world, the ChatGPTs, Claude Opus and so on, all work. But when you get into very specific tasks, for example, we work on image-generation models: a seller today can take whatever SKU or listing they want to sell, take a picture of it, and based on that picture we can actually create a listing in a catalog, so the seller can be in business in the marketplace in a matter of 20 minutes. To do that, we have to recognize what this picture is, extract everything we need from the picture to create a catalog listing, and, based on that listing, also tell the seller what price range they can sell this item for.

When you go through all these steps, an LLM will give you a range and take some international data into account, but you have to train what we call domain-specific models, which is what Tanvi was talking about. We call them SLMs, for the specific domain, the specific region, the specific demography, which is India in this case, and then price accordingly. Sellers are not selling to somebody sitting in the US or England; they are selling to somebody in India. And, by the way, we can also tell them that if you are selling in Mumbai versus Delhi versus Calcutta versus somewhere in Bihar, these are the kinds of price ranges you can have.

That’s

Sunita Mohanty

So that’s a good point. But again, Balaji, I want to ask a quick question because when you use agents in services like yours for customer service, which is a very important component of the job, are you transparent about this being a bot versus a human? And that conversation has also come up.

Balaji Thiagarajan

Yeah, so look, today in customer service, when we deploy our agents, they are primarily co-pilots. The reason is that we have not yet mastered the technology of voice bots that can directly talk to somebody and respond in a multilingual way. And when we say we have not mastered it: we know how to do it conceptually, but the models hallucinate. That’s number one. Number two, we have a very strong ethical and compliance system in-house, which says that fair disclosure and transparency are by far the most important things for winning customer trust. So if you are going to have a conversation with an agent, our UX experience teams look at it from the standpoint of how the customer will understand who they are talking to, and we show a disclaimer saying that you might be talking to a machine here, and if you do not want to have that conversation, you can always opt out.

So the default position is opt-out: you have to opt in to actually have the conversation, rather than opt out of one. If you look at a lot of companies, including the Apples and Googles of the world, the default is opt-in, and you have to think very carefully about it, because if you are not conscious of it, you have just opted in. That’s not how we do it: we default to opt-out and then have people opt in with us.

Sunita Mohanty

That’s very refreshing to hear. I’ll go back to Babak next, but before I do, I have a very unusual request: they want us to huddle together for another group photo in the middle of this. So please, can I request everyone... Okay, moving on after the picture. Back to you, Babak. One of the questions that a lot of government representatives have been asked over the last week is: what is the framework for building a viable government AI stack? If you have to do scaled AI deployment in the Global South, including monitoring, human oversight and vendor accountability, what framework would you recommend, and what is your advice to the government on how to look at it?

Babak Hodjat

I would start with processing capacity. That’s the underpinning for building these systems in-house and running inference on them, if you really want to build something internally. And I would actually create publicly available processing capacity. It’s something everybody is complaining about everywhere around the world: most of the processing capacity is concentrated in private or large companies and not available to, for example, students or the public to experiment and build things. Then rely on academia, students, researchers and government entities in the public domain to actually build on top of that. So that’s one thing I would suggest. It would attract talent, and it would reinvigorate innovation outside of the very exclusive few big companies that can innovate in AI.

And then I would also create a sandbox, a sort of sovereign sandbox, in which to invite entrepreneurs, startups, academia and the regulator to try out, in a safe and controlled environment, various applications and various kinds of interoperability between these agentic systems, and to come up with the regulatory framework that is well suited for India specifically. I don’t know that the role of the government is to actually build an AI stack; I would think the role of the government is to create the ecosystem within which this stack can emerge organically and safely. We talked briefly about regulation: you can’t be front-running regulation, but you also can’t be completely negligent of it.

It’s risky either way. And the best way to handle that, I think, is in some form of a safe sandbox environment where the regulator can try different things and observe. If something goes wrong within a sandbox, you have control over it and the implications are limited, and then you gradually move things out to more general usage. That would be my recommendation.

Sunita Mohanty

No, that’s music to our ears, Babak, because, to be honest, the Indian government, under the aegis of the AI Mission, has done exactly that. They’ve procured 60,000 GPUs and provided them to states and institutions, and we’re seeing a lot of innovation come out of this. We saw some of our sovereign LLM builders announce that they will open-source everything they have created, which is amazing; we were at some of the announcements happening last week, and Sarvam spoke about the models they are creating. So with that, I’m coming to you next, Amod. We spoke about the infrastructure, and you have worked on enterprise and data-centre operations.

You are now moving from small AI pilots to sustained, high-density production environments. Based on your experience across projects, what patterns have you seen in organizations that successfully scale their AI infrastructure? And what are one or two cases where early design choices, whether in cooling, density or deployment planning, actually made a decisive impact on reliability and trust?

Amod Kabade

The traditional approach to designing data centres is no longer working. Now one needs to look at the chips you are going to use today and the chips on your future roadmap; that needs to be a core part of your design and build, and even then the design still needs to be modular, flexible and, most importantly, sustainable. Why do we need that? Traditionally, designing and building a data centre takes anywhere between two and three years, sometimes more, but let’s say two to three years on average. In that period, as anyone tracking NVIDIA’s announcements knows, they will have launched three or four generations of new chips, and suddenly what you planned for has become redundant or obsolete.

So whatever you plan for today needs to be flexible enough to accommodate all those future roadmaps. And how do you do that? By designing your data centres in a modular fashion, leveraging technologies that allow you to accommodate future chips, which are going to be all the more resource-hungry and generate all the more heat. We need those technologies in place so that your designs can be sustained over a long period of time. That is one pattern clearly emerging: people moving from pilot to production, or from prototype to pilot, understand that aspect and make it a key design consideration.

Coming to the cases of benefits: by using these sustainability-focused cooling technologies, we have seen customers who have been live for more than three years with zero IT failures, which says a lot about the reliability of the setups designed this way. To summarize, it is all about making design decisions that keep the data centre flexible, modular and scalable. And I would like to leave one thought here. The way we see cars manufactured in factories today, where many components are sourced, certain components are manufactured by the manufacturers in their own factories, and everything gets assembled and rolled out as a product, we see data centres moving in that direction: the electrical, mechanical and IT components will be designed and manufactured as modules, and then rolled out to sites as modular, scalable, sustainable infrastructure for the AI factories of the future.

Thank you.

Sunita Mohanty

Great. And I really hope we get a great design playbook for building data centres with access to renewable power, better cooling systems and better ROI. With that, again, Anupam, from a research point of view, how should academia and industry jointly rethink model efficiency, reliability and assurance as a single design problem, rather than treating ethics, performance and infrastructure as separate layers?

Anupam Chattopadhyay

Okay. So I’ll take a small step back from this problem to highlight that every technology has a good side and a bad side, and before it rolls out to industry and the masses in general, there need to be enough safeguards in place. In academia we have the liberty to take pot shots and say, this is wrong. We do feel right now that there are a lot of gaps in the cybersecurity of AI, and we are trying to draw as much attention to that as possible as part of our research: the models are not properly trained, there are possible loopholes, hallucinations, and there could be alignment issues, and if these things are not properly regulated before rollout to industry, there will be repercussions and setbacks. So we draw caution to this.

To address that problem, particularly in the Global South, what is needed is a very strong industry-academia partnership; I spent a lot of time doing research in Europe, so I have seen that and can make the comparison. Industry brings up a problem and says, this is what needs to be solved, and we want your students to actually learn this before they come to industry, and we try to align with that philosophy. One thing I like very much, from my perspective in Singapore and at NTU, is that they started AI Singapore as a single-window consortium with multiple stages: research funding, then technology innovation, then technology transfer and commercialization, then dissemination, and then regulation. No matter whether you are a researcher, a university or a company, you can participate at any level. The problems can be very different, because a university is like a melting pot: we build a model with a little training and a small amount of data, but when it goes out, the problem becomes AI for automotive, AI for perception modules, AI for agents, which we cannot control, because every industry has its own regulations and its requirements move at a different pace. That is what we try to address by having the single-window approach and clearly defining the parameters and benchmarks.

For example, fairness and ethics, a recurring theme in this discussion, are often underrepresented: we highlight the performance but not the ethical lapses, the hallucinations and the alignment lapses. Jailbreaking, or getting data out of a model, is so easy that we are really scared when someone says, okay, start rolling this out. From an academic point of view we know it’s weak, but we cannot control this unless enterprises and policymakers step in and say: this must be regulated.

Sunita Mohanty

That’s a good point, and I think the examples you took from Europe and Singapore are critical. At least with artificial intelligence, I have seen a lot of collaboration happening between industry and academia throughout the world, and we hope that continues. So, to you, Tanvi: given your work with platforms like Palantir and OpenAI, how should AI applications balance broad interoperability with deep, scalable domain integration? And we’d also love to hear about your experience in New York City as well as the Vatican, and what learnings we can take from there.

Tanvi Singh

Thank you, Sunita. You ask about learnings from Palantir and OpenAI, and I was fortunate to be a design partner in both cases through my work at the bank. With Palantir, this was way back, when they were more a government technology provider for the U.S. defense services and wanted to make an enterprise play, so my bank was a design partner from financial services. Seeing the transition from a defense-services company to a platform company in the AI and ML space has been very interesting. Because, to Balaji’s point, there is no one world model that can fit everything, and Palantir is obviously one of the best pieces of software out there when it comes to AI/ML.

So they developed a stack on which you could do customized AI/ML at scale, and that was a huge learning. Being in a bank, one size doesn’t fit all, and you can’t think of a domain simply as “financial” or “healthcare”, because the way we do finance in Switzerland is very different from the way it is done in the UK, and the regulators are different. Our retail use cases are very different from the wealth use cases, and one size does not fit all, especially in regulated industries. So that was a very important learning. With OpenAI the situation was different: 80% of enterprise data still just remains somewhere, being stored and archived, and in Switzerland the retention requirements are strict.

You have to keep ten years’ worth of every single conversation that has happened with clients, and every single piece of information that has been manufactured as data while doing any regulatory work. So we have that data, and we never used it, not even with Palantir, which is very much classic AI/ML. With OpenAI, you suddenly have this whole unbounded capability you could use on that data for a lot of interesting things: managing your regulatory and compliance requirements, which is the biggest technology cost for a bank, but also engaging with your clients better. But then it’s an API. With Palantir I got to experience a platform; with OpenAI, it was API access, which we could get at in the early 2023-24 timeframe.

With those two learnings: what if you could create a scalable, customizable platform like Palantir, but for generative AI? That is what we started building at Ecta. The idea is very much that you build in the guardrails and the security as part of the four layers we have at Ecta, and use your domain knowledge, your domain corpus of information, to train models and serve your clients. So it’s very much yours: no translation required, deeply language-oriented and culturally oriented. That’s why the work with the Vatican was so significant. If the church is going to trust you with their literature and their information as a benchmark against some of the hardest questions that get asked of the church, then we have a fair chance of being introduced to enterprises and governments.

And from a New York perspective, there’s a lot of work we’re doing, starting with AI in education, which is what we’re also hoping to do more of in India. The challenge remains: at least 50 students to every teacher, lots of languages, lots of cultural aspects, and the infrastructure is not yet there to match what the students really need. But now, with AI, you can hyper-personalize the experience for every student, so you do not have to learn English to learn math. You can very much do math in your local language, in Marathi, Bihari, or any other state language, and that sort of barrier can go away. And the proof is always in the pudding.

So we get to see how these domain models work in enterprises as well as in governments.

Sunita Mohanty

Wonderful. And you must have a lot of insights into what gets asked of the church, so we’ll have to catch you on that someday. But thank you. Coming back to you, Balaji: from Flipkart’s perspective, how do you decide what to build internally using AI, what to adapt, and where to rely on an external model? And how do these choices affect your long-term decisions with the business and customers?

Balaji Thiagarajan

Yeah, you know, I think we talked about this. As far as I can tell, unless we decide to build our own foundational models from the ground up, we will always use a mixture of experts, where at different layers we use different kinds of parameterized models. Usually, if you look at a workflow being executed, say a shopping journey or a discovery funnel, the top of the funnel is usually a very generic statement. That’s where the trillion-parameter LLMs actually help, and for us it works, because at that point all you’re dealing with is an intent, trying to understand what the user is trying to say.

But as you start getting into the further details and the intent becomes clearer and clearer, where we want to provide the right recommendations or hyper-personalize information or adapt to what the customer is doing, that's where the smaller language models, what we call SLMs, come in. And the way we think about this is we have an agentic orchestration framework. Each agent actually decides what the task at hand is. And based on the task at hand, we have SLMs that have been trained for a specific task domain, or even a specific task. And the agent knows at that point in time: I've got to go to this particular infrastructure of an LLM or an SLM,

and then get the answers from there. So we have an agentic orchestration framework, a very dynamically learning framework, that understands what's going on, adapts to what is happening in the ecosystem, makes decisions online, and, depending on what is happening, redirects the traffic to the right SLM. For example, if the consumer is asking, show me the best price for these categories of products in a specific region, that's usually a pricing-and-promotion domain. And that domain might be a domain of data on which we have trained a specific SLM, on a specific catalog of items for that particular area, if you will. Now, if somebody comes and says, I'm just looking for running shoes, right?

That's a very, very different query. And for that query, what you do is you actually look at the whole catalog, and then you marry those catalog results with your interests as a person, and then we kind of filter that down and serve it. So that's the way it usually works. Today, as Nandan Nilekani was saying, everybody uses UPI in India, but nobody knows what the technology behind it is. So hopefully we'll get to a point where we don't know what the technology behind this is either, but it makes every user's life so easy and contextual that it has actually had an impact.
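The routing Balaji describes, broad top-of-funnel intents going to a large general model while narrow, domain-specific tasks go to SLMs trained for that task, can be sketched roughly as follows. The keyword rules and model names here are illustrative assumptions, not Flipkart's actual orchestration framework, which by his account learns and adapts dynamically rather than using fixed rules.

```python
# Hypothetical sketch of intent-based routing between a large general
# model and small, domain-trained language models (SLMs).
def classify_task(query: str) -> str:
    q = query.lower()
    if any(w in q for w in ("best price", "discount", "offer")):
        return "pricing_slm"   # pricing & promotion domain
    if any(w in q for w in ("shoes", "shirt", "phone")):
        return "catalog_slm"   # catalog search + personalization
    return "general_llm"       # broad intent: large general model


def route(query: str) -> str:
    model = classify_task(query)
    # A real orchestrator would invoke the chosen model here and merge
    # catalog results with the user's interest profile before answering.
    return model


print(route("show me the best price for TVs in this region"))  # pricing_slm
print(route("I'm just looking for running shoes"))             # catalog_slm
print(route("help me plan a gift for my mother"))              # general_llm
```

In a production agentic framework the classifier itself would be learned and updated online, but the shape of the decision, generic intent to the big model, specific task to a specialized SLM, is the same.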

Sunita Mohanty

So with that, one last question for all of you. Babak, I'll start with you and we'll go from left to right. What's your feeling about the last week? What is your key takeaway? What are you taking back, outside of the traffic and the crowd? And any piece of advice that you would give?

Babak Hodjat

You know, I was at the AI Everything Summit Africa last week in Egypt, and they said it's huge, one of the biggest summits: 23,000 people. And I came here and they told me it's 300,000 people. So just the scale, the scope. And, you know, India is in a unique position in that its starting point is a starting point of technology and IT. So I think it's much better prepared to understand AI, its implications, how it can be used. Very strong startup scene; I was very impressed by that. And, yeah, to me it's one of the largest and most interesting, and I go to a lot of these conferences, so: very, very impressive.

Sunita Mohanty

Yeah, that's good, because when we started, a lot of the planning started in October or even before that, I think we never thought about the size of the event. And when we saw the footfall, and there was government, there were researchers, there were students, there was business, it's just amazing that we could really run at that scale. So thank you so much. Amod?

Amod Kabade

I think it has been a fantastic week here in Delhi participating in the AI Impact Summit. And I'll just go back to the three sutras: people, planet and progress. I would only say that it is our responsibility to build AI infrastructure, the entire ecosystem around AI, in a way which is planet-friendly, and to focus on the real use cases which address the last mile, the last citizen of the country. Progress is something that is bound to follow.

Tanvi Singh

Okay, so I can articulate the journey: from Paris, where the AI Impact Summit started, which I was at last year, when it was just a dialogue between political leaders; to Davos earlier this year in January, where sovereignty, and building AI for everyone, not just the big frontier models that we see coming out of America or the competition from DeepSeek and other major players from China and the Global South, became the main theme of conversation; to actually seeing it implemented here, across the halls of Bharat Mandapam. It's fascinating. I feel very proud to be of Indian origin. And taking what India has done to Geneva, as part of the organizing committee in Switzerland, where I come from, I think those will be very hard shoes to fill.

And coming from an ETA perspective, my company, I think the sky's the limit on the opportunity. Hearing from Balaji and many, many other practitioners, including ServiceNow and others, it just seems like the opportunity is there: people are ready to experiment, people are looking not for pilots but for actual return on investment. We see that with infrastructure. We see what really works, with and without customization. This is the deepest and most important part of every organization and every government: what we do with our data and how we use the cognition, where we have control over that cognition. And I like what Mr. Farnovi said. He said, we don't want the American and the Chinese babies.

I like what ACTA is doing. It's bringing a lot of Indian babies into the world, which is what domain models do. So, very much looking forward to hosting many of you in Geneva next year. A very big learning and a very impactful week that India has organized for the world. Thank you.

Sunita Mohanty

Anupam? Okay.

Anupam Chattopadhyay

In one word, the summit is just fantastic. I have not seen scale like this, because in academia, you know, we go to technical conferences ranging from very small ones of around 100 people upward. The largest one I attended had 9,000 attendees; that's AAAI, also an AI conference. But here it's a complete order of magnitude more. And this is very much essential, that we have the dialogue between researchers, entrepreneurs, policy makers, ministers, all on a single stage. That's really, really wonderful. One thing I was curious about, and maybe as part of the organization team you can throw some light on it, is how much AI was actually used to arrange this, and to defend against cyber attacks, and in all the systems to detect people passing through.

So that's something I'm curious about. That would be AI in action in hosting an AI summit.

Sunita Mohanty

We did use a significant amount of AI, but obviously not for everything. One of the most amazing things, I don't know how many of you saw the Prime Minister's address, but one of the real highlights was an AI agent doing real-time translation, which was more for accessibility purposes. So those are examples of where we have really used it, and of course in the planning. Although this is not just the government; a lot of people from business, from academia have all come together. So it's primarily a win across India. I haven't seen that scale of partnership before. We have a team that sits in the ministry, and for the last 6-7 months the number of people that have been just coming in, volunteering, supporting, it's amazing to see how it's come together.

Balaji?

Balaji Thiagarajan

I've been to the first AI Impact Summit; I've not been to the other ones. But the way I look at it, the commitment to AI, the Government of India deciding to do this, it's a masterstroke for multiple reasons. One is that it brings the government, the industries, the academia, the students, and the imagination of the whole country together: this is doable. The art of the possible is absolutely there. And more importantly, when I think about this, India's technology underpinnings were in a service-based industry, right? And if you harken back to the world of telecommunications, where we leapfrogged landlines to mobile, I think this is the opportunity for India, for India-based companies, and for any company that wants to operate in India, to leapfrog this whole world of SaaS-based technologies, web-based technologies, what have you, and jump directly ahead.

And India can take that opportunity and become the number one software provider, not of services but of systems and products at world scale. We do not have a software brand in India that sells on a worldwide scale; services are not a software brand. This opportunity lets India leapfrog, because we have the scale, we have the people, we have the intelligence, we have the ability to think very, very differently, at a price point that nobody can imagine, to be honest. And now the government is behind this, and with the public infrastructure it is also reinforcing all the research that needs to happen. So this is an opportunity for India to take or to lose, as the case might be, but I think India is going to take it.

Sunita Mohanty

No, thank you so much. And on that optimistic note, thank you, all of you, for being here. We started the conference talking about the theme, which is Sarvajana Hitaya, Sarvajana Sukhaya, welfare for all and happiness for all, and I hope we carry this message across the Global South into Geneva and bring Europe and the US into this as well. Thank you so much.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (9)
Confirmed (high)

“Babak Hodjat emphasized that AI’s promise and its risks are both real, requiring balanced safeguards; neither blind trust nor total scepticism is acceptable.”

The knowledge base notes that both innovation advocates and public safety advocates have valid concerns that need to be balanced, confirming the need for a middle-ground approach [S97].

Additional Context (medium)

“He described techniques such as human‑in‑the‑loop or human‑on‑the‑loop oversight, agents monitoring each other, and explicit uncertainty assessment that triggers human intervention.”

The AUDA-NEPAD White Paper outlines three levels of human oversight for AI systems, providing additional detail on human-in-the-loop frameworks [S40].

Additional Context (medium)

“Continuous reasoning can accumulate trivial errors after many steps, requiring redundancy and error‑correction mechanisms.”

Research on internet resilience highlights cascading failure risks and the importance of redundancy and error-correction in complex systems [S103].

Additional Context (medium)

“There is a lack of robust standards for verifying third‑party agent identity, a gap that Google’s A2A work is beginning to address.”

Google’s recent AI agent toolkit release (Agent Development Kit) represents an effort to create standards and interoperability for AI agents, aligning with the described gap [S106].

Additional Context (high)

“Balanced regulation is needed in India as the country pursues sovereign large‑language models, avoiding both over‑ and under‑regulation.”

Discussions on AI regulation in India stress the delicate balance between business-friendliness and oversight, echoing the call for measured regulation [S26] and concerns about over-regulation outcomes [S95].

Confirmed (high)

“Many AI models trained on clean, well‑curated data perform poorly on noisy, multilingual, intermittently‑connected environments typical of the Global South.”

Evidence shows that language models exhibit significantly lower performance on non-English languages such as Telugu, confirming challenges with noisy and multilingual contexts [S110].

Additional Context (medium)

“Amod advocated liquid‑cooling technologies to reduce data‑centre cooling overheads and suggested KPIs like “energy‑per‑token” or “water‑per‑token”.”

The AI Impact Summit highlighted unprecedented power and cooling demands of AI workloads, underscoring the relevance of liquid-cooling and efficiency metrics for data centres [S8].

Additional Context (low)

“Data‑centre designs should be modular, flexible, and future‑proof to accommodate newer AI chips that generate more heat and demand higher density.”

Resilience guidelines for IoT and data-centre services emphasize modular, scalable designs to maintain operation under varying loads and power constraints [S104].

Additional Context (medium)

“Babak Hodjat described the evolution from single‑agent systems to complex multi‑agent ecosystems where agents must coordinate while protecting their interests.”

A dedicated session on multi-agent systems at the summit featured Babak Hodjat discussing this evolution, providing additional detail on his perspective [S4].

External Sources (111)
S1
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — -Prime Minister Modi: Role/Title: Honorable Prime Minister of India; Area of expertise: Government leadership, policy
S2
Announcement of New Delhi Frontier AI Commitments — -Brad: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified -S…
S3
IGF 2023 Global Youth Summit — Audience:Thank you, everyone. My name is Emad Karim. I’m from UN Women, working on online gender-based violence. And my …
S4
Challenging the status quo of AI security — ### Multi-Agent Systems at Enterprise Scale (Babak Hodjat) Sounil Yu: Thanks, Babak. And one of the standards that we a…
S5
Subrata K. Mitra Jivanta Schottli Markus Pauli — An analysis of India’s foreign policy over seven decades will inevitably reveal evidence of both change and continuity i…
S6
The reality of science fiction: Behind the scenes of race and technology — ‘Every desireis an endand every endis a desirethenthe end of the worldis a desire of the worldwhat type of end do you de…
S7
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — – Abhay Soi- Tanvi Lall – Jigar Halani- Tanvi Lall
S8
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — “Sir, my question is directly to you”[1]. “I wanted to know on that”[2]. “My name is Umesh Prasad Singh and I’m an assoc…
S9
ElevenLabs Voice AI Session & NCRB/NPM Fireside Chat — -Shailendra Pal Singh: Role/title not explicitly mentioned, but appears to be a co-presenter/expert on Bhashini translat…
S10
POST-QUANTUM CRYPTOGRAPHY — – [86] Submission requirements and evaluation criteria for the post-quantum cryptography standardization process, 2016. …
S11
https://dig.watch/event/india-ai-impact-summit-2026/mahaai-building-safe-secure-smart-governance — Thank you sir, that was quite reassuring as well And since you spoke about quantum I want to bring in Dr. Anupam Chattop…
S12
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai-starts-with-people-not-just-algorithms — Hi, my name is Anupama. I am one of AI Kiran members. Professionally, I’m a data scientist. Now moved to a technical lea…
S14
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S15
https://dig.watch/event/india-ai-impact-summit-2026/from-kw-to-gw-scaling-the-infrastructure-of-the-global-ai-economy — By all means. The second layer is one of the layer is the serving layer when you build these applications. How do you do…
S16
Safe and Responsible AI at Scale Practical Pathways — – Rohit Bardawaj- Audience LLMs comprise only 10-15% of a solution, with the remaining 85% being guardrails, human-in-l…
S17
Agentic AI in Focus Opportunities Risks and Governance — They’re not responsible. They can’t take accountability. It’s the humans. It’s the business owner who takes it. So havin…
S18
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S19
WS #283 AI Agents: Ensuring Responsible Deployment — Carter argues that developing standards for how agents authenticate themselves and identify themselves to third parties …
S20
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — An interesting observation from the discussion is that sandboxes can facilitate the growth of digital banks and electron…
S21
Dynamic Coalition Collaborative Session — Panelists discussed balancing ethical guidelines with practical implementation. Gupta warned against over-regulation tha…
S22
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — The Governor challenges the common perception that regulation stifles innovation, arguing instead that appropriate regul…
S23
E-commerce and Sustainability: an overlooked nexus (Brazilian Center for International Relation – CEBRI) — They caution against excessive regulation, as it may stifle innovation and economic progress, particularly in developing…
S24
WS #100 Integrating the Global South in Global AI Governance — – Regulatory uncertainty is a major challenge for companies 2. Regulatory Uncertainty Salma Alkhoudi: So this slide i…
S25
Regulating Open Data_ Principles Challenges and Opportunities — Global governance and increasingly the global south is not merely observing this evolution, it is participating in it. I…
S26
Building fair markets in the algorithmic age (The Dialogue) — In India, a delicate balance must be maintained between being business-friendly and regulating dominant platforms. Key p…
S27
Shaping the Future AI Strategies for Jobs and Economic Development — -Infrastructure and Energy Challenges: Significant discussion around the massive infrastructure requirements for AI depl…
S28
Robotics and the Medical Internet of Things /MIoT — Another crucial aspect discussed was the energy efficiency of data centres, which are vital in supporting human-computer…
S29
Ethical AI_ Keeping Humanity in the Loop While Innovating — Innovation is much more than that. innovation is really challenging ourselves to go further. And I want to go back to a …
S30
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S31
Driving Indias AI Future Growth Innovation and Impact — Rajgopal advocates for minimal regulation to avoid stifling innovation, arguing that benefits outweigh risks and issues …
S32
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Caio Machado:is yours. Thank you very much. It’s great seeing all of you. I’m going to quickly put a slide up with my co…
S33
The rise and risks of synthetic media — The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in he…
S34
Artificial intelligence (AI) – UN Security Council — Finally, synthetic data enhances representativeness by allowing for thecreation of diverse and comprehensive datasets th…
S35
Trusted Personal Data Management Service — He suggests the use of privacy-protecting technologies, such as federated learning, where the data remains with the orig…
S36
AI for Good Technology That Empowers People — “So to make it even faster and achieve the sub 10 milliseconds, you actually have to bring in inference and training to …
S37
Transforming Health Systems with AI From Lab to Last Mile — Implement federated learning approaches that allow local data privacy while contributing to model improvement
S38
Empowering Inclusive and Sustainable Trade in Asia-Pacific: Perspectives on the WTO E-commerce Moratorium — To ensure successful integration, bridging the gap between academia and industry is essential. Due to the rapid advancem…
S39
https://dig.watch/event/india-ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — At the same time, these people have to be trained to give back more. If we can get every person to be evaluated or value…
S40
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Academia-industry partnerships are crucial for fostering innovation, addressing industry challenges, and bridging the ga…
S41
Open Forum #17 AI Regulation Insights From Parliaments — Countries in the Global South face multiple challenges including lack of computational power, data access gaps, and insu…
S42
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Jovan Kurbalija: Thank you. She’s quiet. Okay, okay. Good. Great. We heard from our excellent speakers at the very begin…
S43
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S44
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S45
Building Indias Digital and Industrial Future with AI — This comment introduced nuance to the sovereignty debate and influenced the conversation toward finding balance between …
S46
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — This comment reframes AI sovereignty from a purely nationalistic concept to a practical business and security imperative…
S47
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S48
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Understanding algorithms used in consumer interactions is another key area of focus for the ACCC. Regulators must be abl…
S49
WS #31 Cybersecurity in AI: balancing innovation and risks — Melodena Stephens: Thank you First of all, I want to mention that digital literacy is not the same thing as AI litera…
S50
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — “But then there is also a policy and regulatory landscape for discovering price of power for data centers”[60]. “Data ce…
S51
Scaling Enterprise-Grade Responsible AI Across the Global South — Great. And I really hope we get a great design playbook for building data centers that are accessing renewable power, be…
S52
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Achieving a sustainable and resilient future in 2025 will requirecollaboration across sectors, robust governance, and st…
S53
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of reso…
S54
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion revealed strong alignment between industry needs, academic capabilities, and government policy. David Fre…
S55
Empowering Inclusive and Sustainable Trade in Asia-Pacific: Perspectives on the WTO E-commerce Moratorium — To ensure successful integration, bridging the gap between academia and industry is essential. Due to the rapid advancem…
S56
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Eltjo Poort, Vice President Consulting at CGI in the Netherlands, supported this view: “Regulation does not hamper innov…
S57
Agentic AI in Focus Opportunities Risks and Governance — -Enterprise Guardrails and Risk Management: Panelists emphasized the critical importance of implementing robust safety m…
S58
The fading of human agency in automated systems — This gap between language and reality matters, especially in governance contexts where assurances of human oversight are…
S59
Why science metters in global AI governance — helping member states move from philosophical debates to technical coordination, and anchor choices in evidence so polic…
S60
Projecting Digital economy rules on Global South’s AI regulations: what is needed to safeguard human rights? ( Data Privacy Brasil Research Association) — Lastly, the analysis underscores the importance of global solidarity and the pursuit of a fairer level playing field in …
S61
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Based on the analysis provided, AI is significantly transforming consumer protection. It is crucial to strike the right …
S62
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — Artificial intelligence | Data governance | Capacity development T. Srinivasan explained the development of a sovereign…
S63
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Armando Guio Espanol: Perfect, no, not 30 seconds. No, well, I was just going to say that definitely this is very contex…
S64
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Despite coming from very different industries (healthcare vs. payments), both speakers independently emphasized the crit…
S65
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S66
What policy levers can bridge the AI divide? — ## Infrastructure as Foundation A central theme throughout the discussion was that meaningful AI implementation cannot …
S67
Artificial intelligence (AI) – UN Security Council — Finally, synthetic data enhances representativeness by allowing for thecreation of diverse and comprehensive datasets th…
S68
World Economic Forum 2025 at Davos — Finally, the use of synthetic data can enhance therepresentativenessof datasets, particularly in scenarios where real-wo…
S69
EU Artificial Intelligence Act — (60n)    It is appropriate to establish a methodology for the classification of general purpose AI models as general pur…
S70
What is it about AI that we need to regulate? — Based on the available meeting transcripts from the Internet Governance Forum 2025, the question of leveraging synthetic…
S71
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Advocates for a harmonised approach to regulation and policy-making believe that this method can yield positive outcomes…
S72
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — Modern regulation requires innovative approaches including data-driven regulation and regulatory sandboxes for experimen…
S73
Secure Finance Risk-Based AI Policy for the Banking Sector — Implement a balanced regulatory approach that encourages experimentation through sandboxes while maintaining institution…
S74
WS #283 AI Agents: Ensuring Responsible Deployment — Lazanski warned that the attack surface for agentic AI will be enormous, requiring shared security practices among compa…
S75
Safe and Responsible AI at Scale Practical Pathways — “guardrails human in the loop risk assessment these are the tools which are available today …”[95]. “If we immediately…
S76
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S77
Towards a Safer South Launching the Global South AI Safety Research Network — All speakers identify capacity building as a fundamental challenge, noting gaps in technical capacity, institutional fra…
S78
WS #100 Integrating the Global South in Global AI Governance — Use of synthetic data to address data scarcity issues in the Global South
S79
WS #205 Contextualising Fairness: AI Governance in Asia — Milton Mueller: Can you hear me? Am I on? Okay, thank you very much. Yeah, I am going to, yeah, first issue you a f…
S80
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Sustainable development | Development | Infrastructure
S81
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Africa is one of the most energy -constrained regions. It’s also a continent where adoption is becoming very frequent. W…
S82
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Zhang and Professor Gong Ke agreed on the fundamental importance of infrastructure development for AI advancement. Their…
S83
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — He warns that water resources are already stressed by data‑center cooling, making the adoption of liquid and two‑phase s…
S84
Panel Discussion Data Sovereignty India AI Impact Summit — This comment reframes the entire sovereignty debate by distinguishing between isolation and strategic control. It moves …
S85
Building Indias Digital and Industrial Future with AI — This comment introduced nuance to the sovereignty debate and influenced the conversation toward finding balance between …
S86
Building Sovereign and Responsible AI Beyond Proof of Concepts — Okay, everyone is sovereignty. Sorry, did you say something else? A responsible AI? I think that could also be here, bec…
S87
WS #31 Cybersecurity in AI: balancing innovation and risks — Melodena Stephens: Thank you First of all, I want to mention that digital literacy is not the same thing as AI litera…
S88
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S89
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Regulators must be able to explain how these algorithms operate to ensure transparency and fairness in the marketplace. …
S90
Powering AI Global Leaders Session AI Impact Summit India — -Prime Minister: (mentioned as having spoken the day before, but did not speak in this transcript) -Sam Altman: CEO and…
S91
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Honourable Prime Minister Modi, Excellencies, dear colleagues, ladies and gentlemen. It is a great honour for me to be i…
S92
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — And I want to acknowledge the countries that came forward to really put this initiative together, starting first, of cou…
S93
The Power of Satellites in Emergency Alerting and Protecting Lives — Alexandre Vallet: Thank you very much Dr. Zavazava. Thank you very much both of you for this introductory remark. I will…
S94
Bridging the AI innovation gap — This was mentioned as part of their research sharing but indicates a need for further development of sector-specific fra…
S95
Conversational AI in low income &amp; resource settings | IGF 2023 — Rajendra Pratap Gupta:But Sameer, even after the Sarbanes-Oxley Act in the financial markets, we had the subprime crisis…
S96
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S97
Optimism for AI – Leading with empathy — Recognition that both innovation advocates and public safety advocates have valid concerns that need to be balanced
S98
Seeing, moving, living: AI’s promise for accessible technology — Privacy frameworks must evolve to account for technologies that are simultaneously personal and public. A blind person u…
S99
The Overlooked Peril: Cyber failures amidst AI hype — This is not to say that we should abandon discussions about the potential long-term risks of AI. Rather, we must strike …
S100
National Disaster Management Authority — The Minister stressed the critical importance of creating digital twins and thermal maps for emergency response, but str…
S101
Strategic prudence in AI: Experts advise incremental approach for meaningful advancements — At TechCrunch Disrupt 2024, data management leadersadvisedAI-driven businesses to focus on incremental, practical applic…
S102
Bottom-up AI and the right to be humanly imperfect (DiploFoundation) — From the analysis of these arguments, it can be inferred that while third-party tools offer convenience and efficiency i…
S103
WS #139 Internet Resilience Securing a Stronger Supply Chain — Complex interdependencies create cascading failure risks. Despite hiring top engineers and implementing redundancy measu…
S104
Introduction — Resilience should be built in to IoT devices and services where required by their usage or by other relying systems, tak…
S105
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — All right. Just speaking for myself, I can’t wait to use agents. I feel like it’s a lot of developer communities that ha…
S106
Google unveils new AI agent toolkit — This week at Google Cloud Next in Las Vegas, Google revealed its latest push into ‘agentic AI’, software designed to act…
S107
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In conclusion, generative AI technology has the potential for positive impacts in multiple industries. It enhances commu…
S108
Steering the future of AI — Limitations and Future of Large Language Models (LLMs)
S109
Multi-stakeholder Discussion on issues about Generative AI — Luciano Mazza de Andrade: Sorry I was off. Thank you very much, Yoshi. Well, I think our colleagues and previous speakers…
S110
How can AI improve multilingualism — ChatGPT-4 performances in languages other than English are of lower quality. In a recent test, ChatGPT-4 scored 85% on a …
S111
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast a…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Babak Hodjat
4 arguments · 127 words per minute · 954 words · 449 seconds
Argument 1
Balanced guardrails with human‑in‑the‑loop and uncertainty assessment (Babak Hodjat)
EXPLANATION
Babak stresses that AI systems need guardrails that avoid both blind trust and excessive mistrust. He proposes techniques such as keeping humans in the loop, assessing the uncertainty of an agent’s output, and deciding when to intervene based on confidence levels.
EVIDENCE
He explains that AI promises and risks are real and that guardrails are essential to prevent over-trust or mistrust, citing the need for human-in-the-loop or on-the-loop mechanisms and uncertainty assessment of agent outputs as safeguards [11-12][15-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The predominance of guardrails (85% of solution) and the need for human-in-the-loop and uncertainty assessment are emphasized in [S16]; further discussion of these safeguards appears in [S17] and [S18].
MAJOR DISCUSSION POINT
Guardrails, Trust Frameworks, and Regulation for AI Deployment in India & the Global South
AGREED WITH
Sunita Mohanty, Balaji Thiagarajan
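The confidence-gated guardrail Babak describes can be sketched in a few lines. This is a minimal illustration, not anything from the session: the threshold value, function name, and routing labels are all illustrative assumptions.

```python
def route_agent_output(answer: str, confidence: float,
                       threshold: float = 0.8) -> dict:
    """Auto-approve high-confidence agent outputs; escalate the rest
    to a human reviewer (human-in-the-loop guardrail)."""
    route = "auto" if confidence >= threshold else "human_review"
    return {"answer": answer, "route": route}

# A low-confidence answer is held for human review rather than trusted blindly.
decision = route_agent_output("Refund approved", confidence=0.55)
print(decision["route"])  # -> human_review
```

The point of the pattern is that neither extreme is hard-coded: raising the threshold shifts the system toward mistrust (more human review), lowering it toward over-trust.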
Argument 2
Need standards for agent identity and third‑party agents in multi‑agent ecosystems (Babak Hodjat)
EXPLANATION
He points out that as AI systems incorporate agents from multiple external parties, there is currently no clear way to verify their identities. Establishing standards for agentic identity is crucial to manage risks from third‑party interactions.
EVIDENCE
Babak describes the challenge of identifying agents when third-party or consumer agents interact with internal agents, noting the lack of well-established standards and mentioning Google’s work on A2A as an early effort [20-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Identity management challenges for agentic AI and the call for authentication standards are covered in [S4] and [S19]; the lack of established standards is noted in [S13].
MAJOR DISCUSSION POINT
Guardrails, Trust Frameworks, and Regulation for AI Deployment in India & the Global South
Argument 3
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
EXPLANATION
He recommends creating publicly accessible compute resources and a sovereign sandbox where innovators can test AI applications under regulatory oversight. This would democratise access to AI infrastructure and foster safe experimentation.
EVIDENCE
Babak proposes a publicly available processing capacity for students, academia, and startups, and a sovereign sandbox that brings together entrepreneurs, regulators, and academia to trial applications and shape regulation in a controlled environment [145-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sandbox approaches for regulated experimentation are described in [S20]; similar sandbox concepts for AI governance are referenced in [S18].
MAJOR DISCUSSION POINT
Guardrails, Trust Frameworks, and Regulation for AI Deployment in India & the Global South
AGREED WITH
Sunita Mohanty
DISAGREED WITH
Tanvi Singh
Argument 4
Caution against both over‑regulation and under‑regulation; advocate a balanced policy framework (Babak Hodjat)
EXPLANATION
He warns that excessive regulation can stifle innovation while insufficient regulation can expose societies to AI risks. A balanced approach, possibly via sandbox testing, is needed to navigate this tension.
EVIDENCE
Babak notes the risk of over-regulating versus under-regulating and stresses the importance of not falling off either ledge, suggesting a sandbox as a way to test and refine regulation safely [30-32][158-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing ethical guidelines with practical implementation and avoiding over-regulation is discussed in [S21]; the view that appropriate regulation enables innovation appears in [S22] and [S31]; concerns about excessive regulation stifling growth are raised in [S23] and [S24].
MAJOR DISCUSSION POINT
Guardrails, Trust Frameworks, and Regulation for AI Deployment in India & the Global South
AGREED WITH
Sunita Mohanty
DISAGREED WITH
Tanvi Singh
Sunita Mohanty
3 arguments · 117 words per minute · 2003 words · 1024 seconds
Argument 1
Framing the tension between regulation and innovation for India and the Global South (Sunita Mohanty)
EXPLANATION
Sunita highlights the global debate where the US pushes rapid AI innovation, Europe emphasizes regulation, and asks where India and the Global South should position themselves. She seeks guidance on balancing these forces.
EVIDENCE
She references the ongoing conversation about regulation versus innovation, noting the US is at the innovation stage and Europe at the regulation stage, and asks where India and the Global South should stand [33-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The debate on regulation versus innovation for emerging economies is highlighted in [S22], [S23], [S24] and [S31].
MAJOR DISCUSSION POINT
Guardrails, Trust Frameworks, and Regulation for AI Deployment in India & the Global South
AGREED WITH
Babak Hodjat
Argument 2
Emphasis on renewable energy, efficient cooling and ROI‑focused infrastructure planning (Sunita Mohanty)
EXPLANATION
Sunita stresses that AI infrastructure must be environmentally sustainable, using renewable power and efficient cooling, while also delivering clear return‑on‑investment metrics for queries and operations.
EVIDENCE
She mentions discussions at Davos and Bloomberg about energy efficiency, renewable power, efficient cooling, and measuring query cost to improve ROI for AI workloads [55-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Infrastructure energy challenges, renewable power and cooling efficiency for AI workloads are examined in [S27] and [S28]; a design playbook for renewable-powered data centres is mentioned in [S18].
MAJOR DISCUSSION POINT
Sustainable AI Infrastructure and Energy Efficiency
AGREED WITH
Amod Kabade
Argument 3
Human‑centered AI and the need for clear regulatory guidance to sustain innovation while protecting users (Sunita Mohanty)
EXPLANATION
Sunita calls for AI systems that keep humans at the centre of design and operation, coupled with transparent regulatory frameworks that enable innovation without compromising user safety.
EVIDENCE
She references earlier remarks about keeping humans at the centre of AI development and the broader debate on regulation versus innovation, underscoring the need for clear guidance [33-34][93-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-in-the-loop and ethical AI principles are discussed in [S29]; inclusive, human-centered governance frameworks are presented in [S30] and [S31].
MAJOR DISCUSSION POINT
Responsible AI at Scale in Consumer Platforms
AGREED WITH
Babak Hodjat, Balaji Thiagarajan
Anupam Chattopadhyay
3 arguments · 173 words per minute · 1226 words · 423 seconds
Argument 1
Synthetic data generation with tunable noise to improve deep‑fake detection on noisy, multilingual data (Anupam Chattopadhyay)
EXPLANATION
Anupam proposes creating synthetic datasets where controlled noise can be added, enabling deep‑fake detectors to perform reliably on real‑world noisy and multilingual inputs.
EVIDENCE
He describes building synthetic data with tunable noise to address poor detection performance on noisy, multilingual images and audio, noting the challenge of models trained on clean data failing in real conditions [41-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of synthetic data for robustness and representativeness is described in [S33] and [S34].
MAJOR DISCUSSION POINT
Technical Robustness & Research Directions for Heterogeneous Environments
AGREED WITH
Sunita Mohanty
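As a rough illustration of noise-tunable augmentation (a generic sketch under simple assumptions, not the panel's actual pipeline), clean samples can be corrupted with zero-mean Gaussian noise whose strength is a controllable parameter:

```python
import random

def add_tunable_noise(signal, noise_std):
    """Return a copy of a clean sample corrupted with zero-mean
    Gaussian noise; noise_std is the tunable noise level."""
    return [x + random.gauss(0.0, noise_std) for x in signal]

clean = [0.0, 1.0, -1.0]
# One synthetic variant per noise level, from near-clean to heavily degraded.
augmented = [add_tunable_noise(clean, std) for std in (0.01, 0.1, 0.5)]
```

A detector trained on such graded variants is less likely to fail when real-world inputs are noisier than the clean data it was originally trained on.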
Argument 2
Federated learning techniques to merge proprietary models while preserving data and model privacy (Anupam Chattopadhyay)
EXPLANATION
He suggests using federated learning to combine models from different vendors or organisations without exposing underlying data, thereby maintaining privacy and intellectual property.
EVIDENCE
Anupam explains that federated learning allows merging of proprietary models while guaranteeing that training data or model parameters are never leaked, supporting privacy-aware model building [42-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Federated learning as a privacy-preserving method for collaborative model building is covered in [S35], [S36] and [S37].
MAJOR DISCUSSION POINT
Technical Robustness & Research Directions for Heterogeneous Environments
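The idea can be illustrated with a minimal federated-averaging (FedAvg-style) sketch, assuming each party shares only its locally trained parameters and dataset size, never the raw data; the numbers below are made up for illustration:

```python
def federated_average(client_weights, client_sizes):
    """Merge locally trained weight vectors into one global model.
    Only parameters and dataset sizes leave each client; raw data never does."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two parties with private datasets (100 and 300 samples) contribute weights.
merged = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(merged)  # -> [2.5, 3.5]
```

Production federated learning adds secure aggregation and differential privacy on top of this weighting step, but the privacy property Anupam highlights, that training data stays local, is already visible here.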
Argument 3
Single‑window consortium linking research funding, technology transfer, commercialization and regulation to close the academia‑industry gap (Anupam Chattopadhyay)
EXPLANATION
He highlights the AI.sg model, a single‑window consortium that integrates research funding, innovation, technology transfer, commercialization, and regulation, enabling seamless collaboration across stakeholders.
EVIDENCE
Anupam details the AI.sg single-window consortium that coordinates research funding, technology innovation, transfer, commercialization, and regulation, allowing universities, companies, and policymakers to participate at any stage [188-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of academia-industry partnerships and coordinated consortia for AI innovation is highlighted in [S38] and [S40].
MAJOR DISCUSSION POINT
Technical Robustness & Research Directions for Heterogeneous Environments
AGREED WITH
Sunita Mohanty
Tanvi Singh
2 arguments · 179 words per minute · 1571 words · 526 seconds
Argument 1
Development of sovereign, domain‑specific LLMs to handle low‑resource languages and reduce translation overhead (Tanvi Singh)
EXPLANATION
Tanvi argues for building domain‑specific large language models that are trained on local data, avoiding reliance on open‑source or foreign models and eliminating the need for translation.
EVIDENCE
She explains that their domain-specific model is not trained on open-source data, is owned locally, uses native language content, and removes translation requirements, thereby supporting sovereignty [80-87].
MAJOR DISCUSSION POINT
Technical Robustness & Research Directions for Heterogeneous Environments
AGREED WITH
Balaji Thiagarajan
DISAGREED WITH
Babak Hodjat
Argument 2
Sovereign LLMs give organisations control over their data, eliminate translation bottlenecks and improve return on AI investment (Tanvi Singh)
EXPLANATION
She emphasizes that sovereign LLMs let organisations train on their own data, maintain data sovereignty, avoid translation delays, and deliver better ROI for AI deployments.
EVIDENCE
Tanvi reiterates that sovereign LLMs allow use of an organisation’s own content in native language without translation, improving control and ROI for sovereign AI stacks [85-87].
MAJOR DISCUSSION POINT
Sovereignty, Domain‑Specific Models, and ROI
Amod Kabade
2 arguments · 131 words per minute · 694 words · 316 seconds
Argument 1
Adoption of liquid‑cooling and definition of KPIs such as energy‑per‑token to make data centres climate‑friendly (Amod Kabade)
EXPLANATION
Amod recommends using liquid‑cooling technologies to reduce cooling overhead and establishing metrics like energy‑per‑token to monitor and improve the environmental performance of AI data centres.
EVIDENCE
He notes that liquid cooling can minimise cooling overhead and suggests defining KPIs such as energy consumption per token to drive sustainable AI infrastructure [52-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sustainable AI infrastructure, energy consumption metrics and cooling efficiency are discussed in [S27] and [S28].
MAJOR DISCUSSION POINT
Sustainable AI Infrastructure and Energy Efficiency
AGREED WITH
Sunita Mohanty
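An energy-per-token KPI of the kind Amod suggests is straightforward to compute once metered energy and token counts are available; the figures below are hypothetical illustrative numbers, not data from the session:

```python
def energy_per_token(total_energy_kwh: float, tokens_served: int) -> float:
    """Watt-hours of data-centre energy consumed per generated token."""
    return total_energy_kwh * 1000.0 / tokens_served

# Hypothetical day: 12 kWh of inference energy across 4 million tokens.
kpi = energy_per_token(12.0, 4_000_000)
print(f"{kpi:.4f} Wh/token")  # -> 0.0030 Wh/token
```

Tracked over time, such a metric lets operators see whether cooling upgrades or model optimisations actually move the environmental needle.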
Argument 2
Modular, future‑proof data‑centre design that can accommodate rapidly evolving AI chips and workloads (Amod Kabade)
EXPLANATION
He advocates designing data centres in a modular, flexible fashion so they can be upgraded to support new AI chip generations and higher heat loads, avoiding obsolescence.
EVIDENCE
Amod describes modular, future-proof designs that allow incorporation of newer, more resource-hungry chips, emphasizing flexibility and sustainability to keep infrastructure relevant over time [177-184].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future-proof, modular data-centre designs and the need to address growing AI compute and energy demands are examined in [S27] and [S28].
MAJOR DISCUSSION POINT
Sustainable AI Infrastructure and Energy Efficiency
Balaji Thiagarajan
3 arguments · 173 words per minute · 1838 words · 634 seconds
Argument 1
Use of smaller, domain‑specific models (SLMs) for regional pricing, catalog generation and hyper‑personalisation (Balaji Thiagarajan)
EXPLANATION
Balaji explains that beyond generic large models, Flipkart employs smaller, domain‑specific language models to deliver region‑specific pricing, generate product listings quickly, and personalize offers.
EVIDENCE
He provides examples of using SLMs to create catalog listings from seller images within 20 minutes and to give price ranges tailored to specific Indian cities, demonstrating regional hyper-personalisation [119-124].
MAJOR DISCUSSION POINT
Sovereignty, Domain‑Specific Models, and ROI
AGREED WITH
Tanvi Singh
Argument 2
Fairness across pricing, product quality and service delivery; requires high‑quality data, strict access controls and privacy‑preserving model orchestration (Balaji Thiagarajan)
EXPLANATION
Balaji outlines that fairness at Flipkart spans pricing, product quality, and service, and can be achieved through good data, robust access controls, encryption, and privacy‑aware orchestration of multiple expert models.
EVIDENCE
He discusses fairness in pricing, quality of goods, service delivery, the need for high-quality data, access controls, encryption for data in motion, and a mixture-of-experts approach to model orchestration [100-108][110-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fair market principles and algorithmic fairness considerations are outlined in [S26]; ethical AI and human-centered governance that support fairness are discussed in [S29] and [S30].
MAJOR DISCUSSION POINT
Responsible AI at Scale in Consumer Platforms
Argument 3
Transparency in AI‑driven customer service: explicit opt‑out disclosure so users know when they are interacting with a bot (Balaji Thiagarajan)
EXPLANATION
He states that Flipkart’s AI agents act as co‑pilots and that customers are shown a disclaimer with an opt‑out default, ensuring users are aware when they are speaking with a machine.
EVIDENCE
Balaji explains that agents are co-pilots, a disclaimer is shown indicating possible machine interaction, and the system defaults to opt-out, requiring users to actively opt-in for bot conversations [127-135].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Transparency and human-in-the-loop requirements for AI systems are emphasized in [S29]; inclusive governance frameworks that promote disclosure are presented in [S30].
MAJOR DISCUSSION POINT
Responsible AI at Scale in Consumer Platforms
AGREED WITH
Babak Hodjat, Sunita Mohanty
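The opt-out-by-default disclosure Balaji describes might look like the following sketch; the function and field names are hypothetical, not Flipkart's actual implementation:

```python
def start_support_session(opted_in_to_bot: bool = False) -> dict:
    """Show the AI disclaimer up front and route to a human by default;
    the bot handles the chat only after the user actively opts in."""
    return {
        "disclaimer": "You may be interacting with an AI assistant.",
        "handled_by": "bot" if opted_in_to_bot else "human",
    }

print(start_support_session()["handled_by"])      # -> human
print(start_support_session(True)["handled_by"])  # -> bot
```

Making opt-out the default means transparency does not depend on the user noticing a setting: the disclaimer is always shown, and automated handling requires an explicit choice.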
Agreements
Agreement Points
Balanced guardrails with human‑in‑the‑loop, uncertainty assessment and transparency to avoid over‑trust or mistrust of AI systems
Speakers: Babak Hodjat, Sunita Mohanty, Balaji Thiagarajan
Balanced guardrails with human‑in‑the‑loop and uncertainty assessment (Babak Hodjat)
Human‑centered AI and the need for clear regulatory guidance to sustain innovation while protecting users (Sunita Mohanty)
Transparency in AI‑driven customer service: explicit opt‑out disclosure so users know when they are interacting with a bot (Balaji Thiagarajan)
All three speakers stress that AI systems must incorporate guardrails such as human-in-the-loop mechanisms, uncertainty evaluation and clear disclosure to users, thereby preventing blind trust or excessive mistrust of AI outputs [11-18][33-34][127-135].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on human-in-the-loop and robust guardrails aligns with enterprise AI safety guidelines and the EU AI Act’s systemic-risk classification, and reflects panel calls for clear safety measures in high-stakes AI deployments [S57][S58][S69][S56].
Public processing capacity and sovereign sandbox to democratise AI experimentation and foster safe innovation
Speakers: Babak Hodjat, Sunita Mohanty
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
Government provision of 60,000 GPUs and creation of sandbox‑like ecosystem for AI innovation (Sunita Mohanty)
Both speakers advocate for publicly available compute resources and sandbox environments that enable students, startups and regulators to safely develop and test AI applications, reducing concentration of power in a few large firms [145-165][166-170].
POLICY CONTEXT (KNOWLEDGE BASE)
Regulatory sandboxes are promoted as a way to enable sovereign, safe AI experimentation while maintaining oversight, a view echoed in IGF workshops and recent policy roadmaps on sandbox-driven innovation [S71][S72][S73][S66].
Balanced regulatory approach – avoid both over‑regulation and under‑regulation
Speakers: Babak Hodjat, Sunita Mohanty
Caution against both over‑regulation and under‑regulation; advocate a balanced policy framework (Babak Hodjat)
Framing the tension between regulation and innovation for India and the Global South (Sunita Mohanty)
Both emphasize the need for a middle-ground regulatory stance that protects society without stifling AI innovation, warning against the risks of too much or too little regulation [30-32][33-34].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence-based AI policy frameworks argue that clear, proportionate regulation reduces uncertainty and accelerates innovation, mirroring recommendations from CGI’s policy roadmap and IGF discussions on balanced AI governance [S56][S59][S61][S73].
Renewable energy, efficient cooling and KPI‑based metrics for sustainable AI data‑centre operation
Speakers: Amod Kabade, Sunita Mohanty
Adoption of liquid‑cooling and definition of KPIs such as energy‑per‑token to make data centres climate‑friendly (Amod Kabade)
Emphasis on renewable energy, efficient cooling and ROI‑focused infrastructure planning (Sunita Mohanty)
Both call for environmentally sustainable AI infrastructure, highlighting liquid cooling, renewable power and quantitative KPIs (e.g., energy per token) to improve efficiency and demonstrate ROI [52-54][55-58].
POLICY CONTEXT (KNOWLEDGE BASE)
Sustainable data-centre operation is a policy priority, highlighted in the AI Impact Summit’s focus on power pricing and cooling, and reinforced by national efficiency targets such as Germany’s binding renewable-energy mandates for data centres [S50][S51][S52].
Development and use of sovereign, domain‑specific LLMs (SLMs) for regional languages, pricing and hyper‑personalisation
Speakers: Tanvi Singh, Balaji Thiagarajan
Development of sovereign, domain‑specific LLMs to handle low‑resource languages and reduce translation overhead (Tanvi Singh)
Use of smaller, domain‑specific models (SLMs) for regional pricing, catalog generation and hyper‑personalisation (Balaji Thiagarajan)
Both highlight the strategic importance of building locally owned, domain-specific language models that operate in native languages, enable region-specific pricing and rapid catalog creation, thereby supporting AI sovereignty and ROI [80-87][119-124].
POLICY CONTEXT (KNOWLEDGE BASE)
Sovereign, domain-specific LLMs are advocated to keep data local and lower training costs, exemplified by a tax-domain LLM using LoRA adaptation and regional language model initiatives [S62][S63].
Synthetic data generation and noise‑tuning to improve model robustness in heterogeneous, multilingual environments
Speakers: Anupam Chattopadhyay, Sunita Mohanty
Synthetic data generation with tunable noise to improve deep‑fake detection on noisy, multilingual data (Anupam Chattopadhyay)
Synthetic data importance for keeping data clean and enabling AI in the Global South (Sunita Mohanty)
Both agree that synthetic data, especially with controllable noise, is essential to train robust AI models that perform well on noisy, multilingual real-world data typical of the Global South [41-42][45-47].
POLICY CONTEXT (KNOWLEDGE BASE)
Synthetic data is recognised for enhancing dataset representativeness and model robustness across diverse linguistic contexts, as discussed in UN and World Economic Forum analyses [S67][S68].
Strong academia‑industry partnership through coordinated consortia to bridge research, commercialization and regulation
Speakers: Anupam Chattopadhyay, Sunita Mohanty
Single‑window consortium linking research funding, technology transfer, commercialization and regulation to close the academia‑industry gap (Anupam Chattopadhyay)
Calls for collaboration across academia, industry and government to sustain AI innovation (Sunita Mohanty)
Both stress the need for structured, multi-stakeholder platforms that align research, funding, technology transfer and policy, facilitating seamless collaboration and faster deployment of responsible AI solutions [188-196][33-34].
POLICY CONTEXT (KNOWLEDGE BASE)
Collaboration between academia and industry is deemed essential for responsible AI development, reflected in IGF and WTO reports emphasizing coordinated consortia and knowledge exchange [S53][S55][S56].
Fairness in AI‑driven commerce requires high‑quality data, access controls, encryption and modular model orchestration
Speakers: Balaji Thiagarajan
Fairness across pricing, product quality and service delivery; requires high‑quality data, strict access controls and privacy‑preserving model orchestration (Balaji Thiagarajan)
Balaji outlines that achieving fairness at scale hinges on reliable data, robust access-control mechanisms, encryption for data in motion and a modular mixture-of-experts architecture to ensure accurate, equitable outcomes [100-108][110-118].
POLICY CONTEXT (KNOWLEDGE BASE)
Fairness and consumer protection in AI-enabled commerce are highlighted in IGF consumer-protection forums, which call for strong data-governance, encryption and modular architectures to prevent unfair practices [S61].
Similar Viewpoints
Both see the creation of publicly accessible compute resources and sandbox environments as essential for democratizing AI development and ensuring safe experimentation [145-165][166-170].
Speakers: Babak Hodjat, Sunita Mohanty
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
Government provision of 60,000 GPUs and creation of sandbox‑like ecosystem for AI innovation (Sunita Mohanty)
Both advocate for locally owned, domain‑specific language models that address regional language needs and enable tailored commercial applications such as pricing and catalog creation [80-87][119-124].
Speakers: Tanvi Singh, Balaji Thiagarajan
Development of sovereign, domain‑specific LLMs to handle low‑resource languages and reduce translation overhead (Tanvi Singh)
Use of smaller, domain‑specific models (SLMs) for regional pricing, catalog generation and hyper‑personalisation (Balaji Thiagarajan)
Both consider synthetic data a crucial tool for improving model robustness and overcoming data scarcity in heterogeneous, multilingual contexts [41-42][45-47].
Speakers: Anupam Chattopadhyay, Sunita Mohanty
Synthetic data generation with tunable noise to improve deep‑fake detection on noisy, multilingual data (Anupam Chattopadhyay)
Synthetic data importance for keeping data clean and enabling AI in the Global South (Sunita Mohanty)
Unexpected Consensus
Both technology‑focused speakers (Babak Hodjat and Amod Kabade) highlighted modular, future‑proof infrastructure design as a key enabler for AI scalability, despite coming from different domains (AI governance vs data‑centre engineering)
Speakers: Babak Hodjat, Amod Kabade
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
Modular, future‑proof data‑centre design that can accommodate rapidly evolving AI chips and workloads (Amod Kabade)
While Babak focused on compute access and sandboxing, and Amod on physical data-centre modularity, both converged on the necessity of flexible, upgradable infrastructure to support AI growth, an alignment not explicitly anticipated in the agenda [145-165][177-184].
Overall Assessment

The panel displayed strong consensus on the need for balanced guardrails, transparent human‑in‑the‑loop mechanisms, sustainable and publicly accessible AI infrastructure, and the strategic development of sovereign, domain‑specific models. There was also broad agreement on the importance of renewable‑energy‑driven data‑centres, synthetic data for robustness, and structured academia‑industry collaborations.

High consensus across technical, regulatory and sustainability dimensions, indicating a shared vision that responsible AI deployment in India and the Global South requires coordinated policy, infrastructure investment and localized model development.

Differences
Different Viewpoints
How AI model development should be sourced and supported – public shared compute resources vs building sovereign, in‑house models
Speakers: Babak Hodjat, Tanvi Singh
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
Development of sovereign, domain‑specific LLMs to handle low‑resource languages and reduce translation overhead (Tanvi Singh)
Babak argues that democratising AI requires publicly available processing capacity and a sandbox where innovators can experiment on shared infrastructure [145-152][155-165]. Tanvi, by contrast, stresses the need for organisations to build their own sovereign, domain-specific large language models that run on locally owned data and avoid dependence on external compute or foreign models [80-87]. The two positions differ on whether the primary solution is shared public resources or self-contained sovereign stacks.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between public compute provision and sovereign model development mirrors case studies of tax-domain LLMs and regional AI infrastructure strategies that stress data locality and cost-effective training [S62][S63][S66].
Preferred regulatory approach – balanced sandbox‑driven experimentation versus strong sovereignty‑driven control
Speakers: Babak Hodjat, Tanvi Singh
Caution against both over‑regulation and under‑regulation; advocate a balanced policy framework (Babak Hodjat)
Sovereign LLMs give organisations control over their data, eliminate translation bottlenecks and improve ROI (Tanvi Singh)
Babak warns that excessive regulation can choke innovation while too little leaves societies exposed, proposing a sandbox to test rules in a controlled way [30-32][158-162]. Tanvi’s focus on sovereign models implies a tighter, nation-centric control over AI assets and data, which can be interpreted as favouring a more protective, possibly stricter regulatory stance to safeguard sovereignty [73-87]. The speakers therefore diverge on how much regulatory oversight is appropriate versus how much self-reliance should be pursued.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on sandbox-driven experimentation versus sovereign control are reflected in IGF workshops advocating harmonised sandbox frameworks while respecting national policy objectives [S71][S72][S73].
Unexpected Differences
Assumptions about the availability of AI infrastructure versus the need for new public resources
Speakers: Babak Hodjat, Sunita Mohanty
Public processing capacity and sovereign sandbox to let academia, startups and regulators experiment safely (Babak Hodjat)
India has already provisioned 60,000 GPUs to states and institutions, enabling sovereign LLM development (Sunita Mohanty)
Babak proposes creating new publicly accessible compute capacity because most processing power is concentrated in private firms [145-152]. Sunita, however, points out that the Indian government has already distributed a large GPU fleet to foster innovation, suggesting that the immediate need for additional public compute may be less urgent than Babak assumes [166-168]. This contrast between perceived scarcity and reported abundance was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs stress that reliable AI deployment depends on foundational connectivity and public infrastructure, prompting calls for new public resources to meet growing AI compute demands [S66][S63][S50].
Overall Assessment

The panel largely converged on the importance of guardrails, human‑centric design, and sustainable infrastructure. Divergences emerged around the preferred route to AI capability – shared public compute and sandbox experimentation versus building sovereign, in‑house models – and around the regulatory posture needed to protect sovereignty while fostering innovation. An unexpected tension appeared between the claim of limited public compute resources and the reported government‑provided GPU fleet.

Moderate. While there is broad consensus on the goals of responsible, sustainable AI, the differing strategies for infrastructure provision and regulatory balance could shape policy and investment decisions in India and the Global South. These disagreements suggest that future work will need to reconcile public‑resource democratisation with sovereign model development, and to clarify the appropriate level of regulatory intervention.

Partial Agreements
All three agree that AI systems need safeguards that keep humans informed and in control. Babak stresses technical guardrails such as human‑in‑the‑loop and uncertainty checks [15-18]. Sunita calls for keeping humans at the centre of design and clear regulation [33-34]. Balaji implements this through transparent opt‑out disclosures for bot interactions [127-135]. The disagreement lies in the concrete mechanism (technical uncertainty metrics vs policy guidance vs UI disclosure) rather than the shared goal of responsible, human‑centric AI.
Speakers: Babak Hodjat, Sunita Mohanty, Balaji Thiagarajan
Balanced guardrails with human‑in‑the‑loop and uncertainty assessment (Babak Hodjat)
Human‑centered AI and the need for clear regulatory guidance to sustain innovation while protecting users (Sunita Mohanty)
Transparency in AI‑driven customer service: explicit opt‑out disclosure so users know when they are interacting with a bot (Balaji Thiagarajan)
Takeaways
Key takeaways
Effective AI guardrails require a balance between human‑in‑the‑loop oversight and automated uncertainty assessment, avoiding both blind trust and excessive rubber‑stamping.
Standards for agent identity and interoperability in multi‑agent ecosystems are still immature and need development.
Public processing capacity and sovereign sandbox environments are essential to enable academia, startups, and regulators to experiment safely.
Technical robustness for the Global South must address heterogeneous, noisy, multilingual data through synthetic data generation, tunable noise, and federated learning to preserve privacy.
Domain‑specific, sovereign LLMs are critical for low‑resource languages, reducing translation overhead and improving ROI for enterprises and governments.
Sustainable AI infrastructure should prioritize liquid cooling, renewable energy, and clear KPIs such as energy‑per‑token, while employing modular, future‑proof data‑center designs.
Responsible AI at consumer scale demands high‑quality data, strict access controls, fairness across pricing and service quality, and transparent disclosure (opt‑out) for AI‑driven customer interactions.
A coordinated academia‑industry‑government pipeline (single‑window consortium) can align research, technology transfer, commercialization, and regulation.
Resolutions and action items
Create a publicly accessible processing‑capacity platform to democratize AI experimentation (suggested by Babak).
Establish a sovereign sandbox where startups, academia, regulators, and enterprises can test agentic systems and co‑develop regulatory frameworks (Babak).
Define and adopt energy‑per‑token and water‑per‑token KPIs for data‑center operations, incentivising compliance (Amod).
Adopt modular, scalable data‑center designs that can accommodate future AI chip generations (Amod).
Develop synthetic data pipelines with tunable noise for training robust models on noisy, multilingual data (Anupam).
Implement federated‑learning workflows to merge proprietary models while preserving data/model privacy (Anupam).
Accelerate development of sovereign, domain‑specific LLMs for Indic and other low‑resource languages (Tanvi).
Integrate explicit opt‑out disclosures for AI‑driven customer service bots and enforce transparency policies (Balaji).
Promote a balanced regulatory approach that avoids both over‑regulation and under‑regulation, using sandbox feedback to inform policy (Babak, Sunita).
Unresolved issues
No established industry standards exist for verifying the identity and trustworthiness of third‑party AI agents in multi‑agent ecosystems.
Specific mechanisms for measuring ROI of AI deployments across diverse sectors remain vague.
How to uniformly enforce fairness and quality across millions of sellers and products on large marketplaces like Flipkart is still an open challenge.
The exact process for scaling sovereign LLMs to cover all regional languages and dialects has not been finalized.
Details on how government will sustain and fund the public processing‑capacity platform and sandbox over the long term were not addressed.
Methods for continuous monitoring and human oversight of AI systems at national‑scale deployments need further definition.
Suggested compromises
Adopt a balanced regulatory stance, neither overly restrictive nor completely laissez‑faire, using sandbox experiments to calibrate rules (Babak).
Implement default opt‑out for AI‑driven interactions, allowing users to opt in if they wish, balancing transparency with user convenience (Balaji).
Combine generic large‑scale LLMs for high‑level intent detection with smaller, domain‑specific models for detailed, localized tasks, achieving both breadth and precision (Balaji).
Design data‑center infrastructure that is modular and future‑proof, allowing incremental upgrades without full rebuilds, thereby reconciling sustainability goals with rapid AI workload growth (Amod).
Thought Provoking Comments
One of the biggest risks is this notion that because the AI systems respond and reason very well, after one or two reasoning steps we can let them continuously reason – they make trivial mistakes after several hundred reasoning steps.
Highlights a subtle but critical failure mode of AI systems: error accumulation over long inference chains, which is often overlooked in hype‑driven discussions.
Shifted the conversation from generic guardrails to concrete technical challenges, prompting later speakers (e.g., Anupam) to discuss robustness and error‑mitigation techniques such as synthetic data and uncertainty estimation.
Speaker: Babak Hodjat
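Babak's warning follows directly from compounding: if each reasoning step is independently correct with probability p, an n-step chain succeeds with probability p^n. A minimal illustration of this point (the independence assumption and the 99% per-step figure are ours, not his):

```python
def chain_success_probability(per_step_accuracy: float, steps: int) -> float:
    """Probability that an n-step reasoning chain contains no errors,
    assuming independent, identically reliable steps."""
    return per_step_accuracy ** steps

# A model that is right 99% of the time per step is almost always right
# after two steps, but almost always wrong after five hundred.
p_short = chain_success_probability(0.99, 2)     # ~0.98
p_long = chain_success_probability(0.99, 500)    # under 1%
```

Even very high per-step reliability collapses over long chains, which is why guardrails such as uncertainty checks and human review between reasoning segments matter.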
When you’re building a system fully in‑house you control the agents, but increasingly we have third‑party agents talking to ours – we don’t have well‑established standards to determine the identity of these agents.
Introduces the emerging problem of agentic identity and interoperability, a gap in current AI governance frameworks.
Led Sunita to ask about sovereign LLMs and regulation, and set up Babak’s later suggestion for a public sandbox and processing capacity to address ecosystem‑wide standards.
Speaker: Babak Hodjat
Our deep‑fake detection model performed well on clean data but failed dramatically on noisy, multilingual inputs; we tackled this by creating synthetic noisy datasets, automatic fact‑checking pipelines, and federated learning to merge proprietary models without leaking data.
Provides a concrete, real‑world example of how data quality, heterogeneity, and privacy constraints affect AI reliability in the Global South.
Expanded the discussion from abstract guardrails to practical research solutions, influencing subsequent dialogue on synthetic data, hardware‑aware AI, and the need for domain‑specific models (referenced later by Tanvi and Balaji).
Speaker: Anupam Chattopadhyay
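The "synthetic noisy datasets" Anupam describes can be approximated with a tunable corruption function applied to clean text. This is a sketch of the general technique, not their actual pipeline; the corruption scheme and the noise_level knob are assumptions:

```python
import random

def add_noise(text: str, noise_level: float, seed=None) -> str:
    """Corrupt text with character drops, duplications, and perturbations.

    noise_level in [0, 1] is the per-character corruption probability,
    split evenly across the three corruption types.
    """
    rng = random.Random(seed)
    out = []
    for ch in text:
        r = rng.random()
        if r < noise_level / 3:
            continue                                  # drop the character
        elif r < 2 * noise_level / 3:
            out.append(ch * 2)                        # duplicate it
        elif r < noise_level:
            out.append(chr((ord(ch) + 1) % 128))      # perturb it
        else:
            out.append(ch)                            # keep it unchanged
    return "".join(out)

clean = "transfer 500 rupees to account number"
noisy = add_noise(clean, noise_level=0.15, seed=42)
```

Training on mixtures of clean and corrupted samples at varying noise levels is one way to harden detection models against the heterogeneous, noisy inputs the panel discussed.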
Sovereignty means building domain‑specific models that are trained on our own data, in our own language, so we control the cognition and can meet regulatory accountability – this is why we are creating ‘Domain Specific Models’ rather than relying on open‑source or foreign LLMs.
Frames AI sovereignty as a technical and regulatory imperative, linking model ownership to compliance, ROI, and national security.
Prompted Sunita to explore ROI and sovereign LLMs, and inspired Balaji’s explanation of internal vs external model usage and the agentic orchestration framework.
Speaker: Tanvi Singh
Fairness at Flipkart spans the entire customer journey – from pricing, product quality, to after‑sales service – and is achieved through high‑quality data, strict access controls, encryption, and a mixture‑of‑experts architecture that selects domain‑specific models for each task.
Connects abstract fairness concepts to operational practices at massive scale, illustrating how data governance, model selection, and architecture intertwine.
Deepened the conversation on practical implementation of responsible AI, leading to follow‑up questions about transparency (bot vs human) and influencing Babak’s later policy‑sandbox proposal.
Speaker: Balaji Thiagarajan
We need KPIs such as energy‑per‑token or water‑per‑token for data centers and incentives for those who meet them; sustainable design (e.g., liquid cooling) is the foundation of responsible AI.
Introduces measurable, infrastructure‑level guardrails that link AI usage directly to environmental impact, a perspective often missing in model‑centric debates.
Steered the discussion toward the physical layer of AI responsibility, prompting Sunita to tie infrastructure considerations to ROI and later to Amod’s modular, future‑proof data‑center design recommendations.
Speaker: Amod Kabade
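The energy-per-token KPI Amod proposes is straightforward to compute from facility telemetry. A minimal sketch, with purely illustrative numbers:

```python
def energy_per_token(facility_power_kw: float, window_hours: float,
                     tokens_served: int) -> float:
    """Energy consumed per generated token, in watt-hours.

    facility_power_kw: average IT-plus-cooling draw over the window (kW)
    window_hours: length of the measurement window
    tokens_served: total tokens generated during the window
    """
    energy_wh = facility_power_kw * 1000 * window_hours
    return energy_wh / tokens_served

# Illustrative: a 2 MW hall serving 5 billion tokens over 24 hours
kpi = energy_per_token(facility_power_kw=2000, window_hours=24,
                       tokens_served=5_000_000_000)
```

A water-per-token metric would follow the same shape, substituting metered water consumption for energy; publishing both per site would give regulators the incentive targets discussed in the panel.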
Create publicly available processing capacity and a sovereign sandbox where startups, academia, regulators, and entrepreneurs can safely experiment; the government’s role is to nurture the ecosystem, not to build every stack itself.
Proposes a concrete policy mechanism that balances innovation with oversight, addressing earlier concerns about over‑ and under‑regulation.
Served as a turning point toward actionable governance recommendations, influencing Amod’s emphasis on modular design and Anupam’s call for a single‑window consortium linking research to regulation.
Speaker: Babak Hodjat
The AI.sg model in Singapore provides a single‑window consortium that links research funding, technology innovation, transfer, commercialization, dissemination, and regulation – a template for strong industry‑academia partnership.
Offers a proven governance framework that integrates multiple stages of the AI lifecycle, addressing the fragmented approach observed elsewhere.
Inspired Sunita and other panelists to consider similar structures for India and the Global South, reinforcing the theme of ecosystem‑wide collaboration.
Speaker: Anupam Chattopadhyay
Overall Assessment

The discussion was driven forward by a handful of pivotal insights that moved the conversation from high‑level optimism to concrete, actionable challenges. Babak’s early warnings about cumulative reasoning errors and agentic identity opened a technical‑policy gap that was later filled by Tanvi’s sovereignty argument and Babak’s sandbox proposal. Anupam’s deep‑fake case study grounded the debate in data quality and privacy realities of the Global South, prompting Balaji to showcase how large‑scale commerce can embed fairness through architecture and governance. Amod’s focus on measurable infrastructure KPIs added an environmental dimension, while his modular data‑center vision linked back to Babak’s ecosystem‑building recommendation. Collectively, these comments reframed the dialogue around practical guardrails—spanning model reliability, data stewardship, regulatory sandboxes, and sustainable infrastructure—shaping a nuanced, multi‑layered roadmap for responsible AI in India and the broader Global South.

Follow-up Questions
What standards and protocols are needed to reliably identify and verify third‑party AI agents (agentic identity) in multi‑agent systems?
Without well‑established standards, integrating external agents poses security, trust, and interoperability risks for enterprises.
Speaker: Babak Hodjat
How can synthetic data creation be enabled in India and the Global South through modular “AI‑in‑a‑box” platforms for students and researchers?
Synthetic data can address data scarcity and privacy constraints, fostering trustworthy AI development in resource‑limited settings.
Speaker: Sunita Mohanty
What metrics (e.g., energy consumption per token, water consumption per token, query cost) should be used to measure AI infrastructure ROI and sustainability?
Transparent, quantifiable metrics are essential for responsible, cost‑effective, and environmentally friendly AI deployment.
Speaker: Sunita Mohanty
How should ‘sovereignty’ be defined and operationalised for AI models in critical sectors like BFSI, especially regarding data control and regulatory compliance?
Clear definition of AI sovereignty impacts model risk management, regulatory approval, and trust in regulated industries.
Speaker: Tanvi Singh
What framework should governments in the Global South adopt for building scalable AI stacks that include monitoring, human oversight, and vendor accountability?
A structured framework guides safe, transparent, and inclusive public AI deployments while balancing regulation and innovation.
Speaker: Babak Hodjat
What early design patterns (e.g., modular data‑center architecture, chip roadmap alignment, liquid cooling) enable reliable and trustworthy scaling of AI infrastructure?
Identifying proven design choices helps organisations avoid costly retrofits and ensures sustainable high‑density AI operations.
Speaker: Amod Kabade
How can academia and industry jointly treat model efficiency, reliability, and assurance as a single design problem rather than separate ethical, performance, and infrastructure layers?
A unified approach aligns research outcomes with real‑world constraints, reducing gaps between theory and deployment.
Speaker: Anupam Chattopadhyay
How can AI applications balance broad interoperability with deep, scalable domain‑specific integration, as learned from work with Palantir, OpenAI, the Vatican, and New York City?
Finding this balance ensures solutions are reusable across contexts while meeting specialized regulatory and cultural requirements.
Speaker: Tanvi Singh
What criteria should organisations use to decide when to build internal AI models versus adopting external models, and how do these choices affect long‑term business strategy?
Strategic model‑selection impacts cost, control, compliance, and competitive advantage for large‑scale consumer platforms.
Speaker: Balaji Thiagarajan
To what extent was AI actually used in organizing the AI Impact Summit (e.g., for cyber‑security, logistics, real‑time translation), and what lessons can be drawn?
Understanding real‑world AI deployment at scale showcases capabilities and reveals gaps for future event‑level AI applications.
Speaker: Anupam Chattopadhyay
How can a publicly available processing capacity (e.g., shared GPU cloud) be created to democratise AI experimentation for students, startups, and researchers in the Global South?
Open compute resources lower entry barriers, stimulate innovation, and reduce concentration of AI capabilities in a few large firms.
Speaker: Babak Hodjat
What would a ‘single‑window’ consortium model (like AI.sg) look like for end‑to‑end AI development—from research funding to regulation—and how can it be replicated in other regions?
A unified platform streamlines collaboration, accelerates technology transfer, and ensures coordinated governance across stakeholders.
Speaker: Anupam Chattopadhyay
What standardized benchmarks should be established for fairness, ethical lapses, hallucinations, alignment issues, and jailbreak resistance in AI models?
Measurable standards are needed to assess and enforce trustworthy AI behavior across diverse applications.
Speaker: Anupam Chattopadhyay
What key performance indicators (KPIs) such as ‘energy per token’ or ‘water per token’ should be defined for AI data‑centers, and how can incentives be structured to promote compliance?
KPIs linked to sustainability drive greener AI infrastructure and provide clear targets for industry and regulators.
Speaker: Amod Kabade
What best practices ensure transparent disclosure in AI‑driven customer service (e.g., opting out vs. opting in by default) to maintain user trust and meet compliance?
Clear disclosure policies affect consumer confidence, regulatory adherence, and ethical deployment of conversational agents.
Speaker: Balaji Thiagarajan
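The opt-out disclosure pattern Balaji describes might be implemented as a thin wrapper around the bot: always disclose on first contact, and escalate to a human on request. A sketch; the message strings and trigger phrases are assumptions for illustration, not Flipkart's actual policy:

```python
DISCLOSURE = "You are chatting with an AI assistant. Reply 'agent' to reach a human."
OPT_OUT_PHRASES = {"agent", "human", "opt out"}

def handle_turn(session: dict, user_message: str) -> str:
    """Route one chat turn: disclose on first contact, escalate on opt-out."""
    if not session.get("disclosed"):
        session["disclosed"] = True       # disclosure always precedes service
        return DISCLOSURE
    if user_message.strip().lower() in OPT_OUT_PHRASES:
        session["escalated"] = True       # hand off to a human agent
        return "Connecting you to a human agent."
    return f"(bot reply to: {user_message})"  # placeholder for the model call

session = {}
first = handle_turn(session, "hi")        # the disclosure, regardless of input
second = handle_turn(session, "agent")    # triggers the human handoff
```

Keeping the disclosure and escalation logic outside the model makes the policy auditable independently of whichever model answers the query.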

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Waves of infrastructure Open Systems Open Source Open Cloud


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session, led by Renu Raman, introduced Proximal Cloud’s vision to build enterprise-private cloud infrastructure that brings compute close to data for India’s large-scale AI needs [1-5]. Raman outlined historical technology cycles (semiconductor advances in the 80s-90s, the cloud era of the last two decades, and the current AI wave driven by large language models) as the backdrop for a new “infant-scale” compute layer aimed at population-scale workloads [28-34][38-41]. She emphasized a growing demand-supply gap in AI-ready infrastructure, noting that global compute spending has grown from $50 billion to $300 billion and is projected to approach $2 trillion, creating pressure for more affordable, distributed systems [47-49][57-60][46-47].


To address this, Proximal is partnering with UC San Diego’s new data-science institute to develop hardware kernels and inference engines for health-science and agriculture use cases [9-14][108-110]. A hardware-software co-design strategy with AMD is being pursued to combine x86 CPUs and high-memory GPUs capable of running 128-billion-parameter models, enabling single-node solutions for many customers [105-106]. The company defines “Proximal” as bringing compute nearer to data, memory, and the business domain, thereby supporting sovereign, low-cost AI deployments especially in India’s health and agriculture sectors [130-133].


Jensen Huang reminded the audience that most data-processing workloads still run on CPUs, reinforcing the need for a balanced CPU-GPU stack in private-cloud offerings [97-104]. Divium, presented by Lalit Bhatt, offers an inference layer that automatically evaluates model quality, selects the best-performing model per dollar, and reduces cost and latency for enterprise pilots, as demonstrated by a 60% cost cut for an Indian travel aggregator [156-182]. Instant System’s venture-builder Sandeep Kumar highlighted common AI challenges such as hallucinations, disambiguation, and data privacy, and claimed its platform achieves 99% reliability while keeping costs low [196-205][207-226].


Infosys representative Arya Bhattacharjee argued that India’s AI future depends on software and AI capabilities rather than waiting for domestic chip fabrication, and cited on-premise data (over 90%) as a driver for agentic AI solutions [244-252]. Raman quantified the infrastructure needed for a 10-gigawatt AI-ready capacity in India as roughly $250 billion in hardware, suggesting this scale could spawn a domestic ecosystem of semiconductor, OEM, and application companies akin to global SAP or Palantir players [315-322]. She also noted that emerging Indian manufacturers such as VVDN and public-market financing could supply chassis and board design, but substantial early-stage investment is required to bridge the demand-supply and skill gaps [378-395].


The discussion concluded that coordinated public, private, and venture funding, together with open-source model innovation and localized hardware, is essential for India to achieve sovereign, low-cost AI compute at population scale [274-288][298-303].


Keypoints


Major discussion points


Proximal Cloud’s vision and market focus – The presenters framed the company as a provider of “infant-scale” sovereign compute that brings processing close to data, especially for India’s massive population and for verticals such as health, education and agriculture [5-7][13-15][94-95][112-115][130-133].


Technology trends and infrastructure gaps – A recurring theme was the shift from CPU-only workloads to heterogeneous systems (CPU + GPU, new memory hierarchies, terabit-scale Ethernet) and the resulting demand-supply gap in compute, power and capital expenditure (~$300 B now, projected $2 T in the next decade) [28-34][57-61][78-80][84-88][115-118][41-47].


Strategic partnerships and concrete use-cases – The talk highlighted collaborations with UC San Diego, AMD (CPU/GPU blend), PharmEx (precision agriculture), Divium (LLM inference routing), ZetaVault and other ecosystem players to demonstrate real-world applications in education, health-sciences and industry [9-14][105-108][135-152][155-162][170-179][190-194].


Barriers to scaling Gen-AI pilots – The panel stressed three core obstacles: undefined quality metrics, unpredictable inference costs, and rapid model churn, which Divium aims to solve through automated model evaluation and routing [156-164][170-179].


India’s strategic opportunity and investment needs – Speakers argued that India’s push for 10 GW of AI-ready power translates into $250 B of hardware spend, creating space for new semiconductor, hardware, and software firms (potential “SAP-like” or “Palantir-like” companies). They called for coordinated public-private funding and a domestic manufacturing ecosystem to capture this value [113-115][250-258][315-322][378-386][390-398].


Overall purpose / goal


The session was designed to introduce Proximal Cloud, outline its technical roadmap and business model, showcase partner ecosystems and early use-cases, and position India as a fertile ground for a sovereign, low-cost AI compute stack. By mapping technology trends, market gaps, and investment requirements, the presenters aimed to attract collaborators, customers, and capital to accelerate the rollout of infant-scale AI infrastructure in India and beyond.


Overall tone and its evolution


Opening (0-15 min): Highly enthusiastic and visionary, emphasizing excitement about new offerings and the long-term “technology shifts” that will reshape computing [1-6][28-34].


Technical deep-dive (15-40 min): Shifts to a detailed, data-driven tone, citing historical cycles, infrastructure statistics, and hardware specifications [41-47][57-61][78-80].


Partner & use-case showcase (40-55 min): Becomes collaborative and demonstrative, highlighting concrete projects (UC SD, PharmEx, Divium) and their impact [135-152][170-179].


Strategic & policy discussion (55-70 min): Moves to a broader, forward-looking and somewhat persuasive tone, arguing for India’s sovereign AI agenda, the need for massive investment, and the potential emergence of new “national champions” [113-115][315-322][378-386].


Closing (70-71 min): Returns to an inclusive, call-to-action tone, inviting questions, emphasizing ecosystem building, and thanking participants [374-376][380-386].


Overall, the conversation progressed from excitement to technical depth, then to partnership validation, and finally to a strategic, policy-oriented appeal, maintaining an optimistic and collaborative spirit throughout.


Speakers

Full session report: Comprehensive analysis and detailed insights

The session opened with Renu Raman welcoming the audience, announcing a flurry of activities and the recent launch of Proximal Cloud’s offering, and positioning the company as a provider of enterprise-private-cloud infrastructure that brings compute close to data for the Indian market [1-5]. She outlined the agenda – setting the industry context, presenting partner ecosystems, and a planned Q&A that would have featured presentations from Bharat Jain and Zeta Bolt [6-8]. The recorded session, however, moved directly to partner talks without those presentations.


Raman highlighted the sponsorship of UC San Diego’s public-private AI initiative, noting the university’s new School of Computing, Information Sciences and Digital Sciences as a collaborative hub for health-science use cases [9-14]. She also referenced an AI-for-Education component that will provide Jupyter-style notebooks, a research-paper archive, and a commercial AI chat service [200-202]. An intended MRI-image demo, meant to illustrate the health-science application she hopes to enable, could not be shown [203-204].


Shifting to a historical perspective, Raman reminded listeners that humanity tends to under-estimate ten-year horizons while over-estimating two-year gains [18-20]. She traced three major technology waves: the semiconductor boom of the 1980s-90s driven by Moore’s Law [29-30], the cloud era of the past two decades [31-33], and the current AI surge powered by large language models [34-38]. Emphasising a hardware-software co-design philosophy, she stated that “serious software teams should eventually design their own hardware, and serious hardware teams should design their own software” [34-36].


Raman then quantified the growing demand-supply gap in AI-ready infrastructure. Global compute spending has risen from roughly $50 B in 2000 to $300 B today, and is projected to approach $2 T within the next 5-10 years [41-47]. This surge translates into massive capital outlays for power, memory, networking and storage – roughly $3-5 of compute-related capital for every $1 of power [44-46]. She noted that AI will affect 95% of work, far exceeding the productivity gains of the SaaS era, and therefore demands far greater compute capacity [38-40][57-61].


To address the gap, Proximal is forging strategic partnerships. The collaboration with UC San Diego’s data-science institute enables joint work on hardware kernels, inference engines and health-science applications [108-110]. A hardware-software co-design deal with AMD supplies a balanced CPU + GPU stack with high-capacity memory (256 GB of HBM, scaling to 512 GB) capable of hosting 128-billion-parameter models on a single node [105-108].


Raman also described the network and memory hierarchy evolution – moving from 10 GbE to 800 GbE and toward terabit links, and the debate between single-type versus multi-type memory architectures [205-207].


Jensen Huang reinforced the CPU-centric nature of current data-processing workloads, noting that platforms such as Databricks, Snowflake and Oracle’s SQL engines still run almost entirely on CPUs [97-104]. He announced an upcoming initiative to accelerate data processing, echoing Raman’s call for a heterogeneous “happy blend” of CPUs and GPUs [105-107].


Partner use-cases followed:


* PharmEx (Lalit Bhatt) showcased a precision-farming platform that integrates soil sensors, drone imaging and autonomous tractors. By placing inference locally, the solution reduces cost for cost-sensitive farmers (≈ ₹45,000 per unit) and supports applications such as irrigation scheduling, anomaly detection and yield prediction [135-152].


* Divium (presented by Bharat, Director at Divium) tackled the three killers of Gen-AI pilots: undefined quality, unpredictable costs and rapid model churn. The platform automatically evaluates model quality, routes queries to the most cost-effective model, and continuously upgrades without breaking production. Pilot results showed a ~60% cost reduction for a travel aggregator and ~30% lower latency with 95% case resolution for an e-pharmacy [156-186].
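The routing behaviour described for Divium (evaluate quality, then pick the best-performing model per dollar) can be sketched as a threshold-then-cheapest rule. The model names, scores and prices below are invented for illustration and are not Divium's actual API:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    quality: float            # evaluation score in [0, 1] on the task's test set
    usd_per_1k_tokens: float  # serving cost

def route(candidates: list, min_quality: float) -> Candidate:
    """Pick the cheapest model that clears the quality bar;
    fall back to the highest-quality model if none does."""
    eligible = [c for c in candidates if c.quality >= min_quality]
    if eligible:
        return min(eligible, key=lambda c: c.usd_per_1k_tokens)
    return max(candidates, key=lambda c: c.quality)

models = [
    Candidate("large-general", quality=0.92, usd_per_1k_tokens=0.030),
    Candidate("mid-tier", quality=0.88, usd_per_1k_tokens=0.008),
    Candidate("small-domain", quality=0.81, usd_per_1k_tokens=0.002),
]
choice = route(models, min_quality=0.85)  # the mid-tier model wins on cost
```

Re-running the evaluation whenever a new model version appears lets the router absorb model churn without breaking production, which is the failure mode the panel highlighted.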


* Instant System (Sandeep Kumar) described a venture-builder framework that mitigates hallucinations, disambiguation errors, data-privacy breaches and reliability issues, achieving 99% reliability while keeping costs low for enterprise AI agents [196-226].


In the India-focused segment, an Infosys representative (Arya Bhattacharjee) argued that India’s advantage lies in software and on-premise AI rather than immediate chip fabrication. She cited that over 90% of enterprise data remains on-prem and that on-prem AI factories can cut fab-level costs by ≈25% (≈$10 M per day) [244-252]. Raman quantified the infrastructure required for a 10 GW AI-ready power capacity in India as roughly $250 B in hardware, a scale that could nurture a domestic ecosystem of semiconductor, OEM and application companies comparable to global SAP or Palantir players [113-115][315-322][378-398]. She pointed to emerging Indian manufacturers such as VVDN and Sanmina, and to public-market financing avenues, as the nascent supply chain needed to materialise this vision [378-386][390-398].
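Raman's $250 B estimate for 10 GW is consistent with a simple capital-intensity rule of thumb. The ~$25-per-watt figure below is back-derived from her stated totals, not an independently sourced number:

```python
def hardware_capex_usd(power_gw: float, usd_per_watt: float) -> float:
    """Hardware spend implied by an AI-ready power target.

    power_gw: planned AI-ready power capacity in gigawatts
    usd_per_watt: assumed hardware capital per watt of IT load
    """
    return power_gw * 1e9 * usd_per_watt

# 10 GW at an assumed ~$25 of hardware per watt gives $250 B
capex = hardware_capex_usd(power_gw=10, usd_per_watt=25)
```

The same one-liner makes it easy to test sensitivity: halving the $/W intensity, for instance, halves the implied national hardware bill.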


Latency emerged as a critical engineering target. Citing Google’s historic 20 ms query-response benchmark, Raman proposed a more realistic ≈ 120 ms target for population-scale services in India, arguing that additional compute resources and algorithmic improvements can achieve this goal [300-304]. Audience member Abhishek Singh asked whether sub-second or sub-millisecond responses could be delivered to 1.5 billion users at a cost of ~200 rupees per month; the panel discussed the feasibility of such ultra-low latency at massive scale [292-297].
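The feasibility question raised here (fast responses for 1.5 billion users at low per-user cost) can be framed with back-of-envelope capacity math. Every input below (queries per user, peak factor, per-node throughput) is an assumption for illustration, not a figure from the session:

```python
def fleet_estimate(users: int, queries_per_user_per_day: float,
                   peak_to_mean: float, qps_per_node: float) -> dict:
    """Back-of-envelope sizing for a population-scale inference service."""
    mean_qps = users * queries_per_user_per_day / 86_400  # seconds per day
    peak_qps = mean_qps * peak_to_mean                    # provision for peaks
    nodes = peak_qps / qps_per_node                       # inference nodes needed
    return {"mean_qps": mean_qps, "peak_qps": peak_qps, "nodes": nodes}

# Assumed: 1.5 B users, 10 queries/day each, 3x peak factor, 50 QPS per node
est = fleet_estimate(1_500_000_000, 10, 3.0, 50)
```

With these assumptions the fleet comes out in the low tens of thousands of nodes; dividing the resulting hardware and power cost by the user base gives a direct check against the ~200-rupees-per-month affordability target.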


Raman framed open-source models as the next abstraction layer in distributed computing, likening them to the role hypervisors and Linux played in earlier eras. She explained that while closed models will continue to evolve within organisations like OpenAI and Google, the broader ecosystem will innovate around open models, potentially leading to a Distributed Computing 3.0 paradigm [274-288]. In response to a question from Abhishek, she emphasized that models now serve as a new “graph layer” underlying email, documents, Teams, etc., which she described as the most important enterprise database [308-310].


Funding mismatches were highlighted: government startup grants of 20 crore contrast sharply with the $100 M required for deep-tech talent, raising doubts about the availability of capital to build Nvidia-scale ventures in India [343-352]. Raman noted a “demand-supply-skill” gap that must be bridged through coordinated public-private investment and early-stage venture support [361-368][378-395].


In closing, Raman reiterated the need for a sovereign, low-cost, infant-scale compute layer that serves health, education and agriculture, invited participants to engage further with Proximal Cloud and its partners, and emphasized that realising India’s AI ambition will require long-term investment, ecosystem collaboration and a balanced hardware-software strategy [374-386].


Overall, the discussion displayed strong consensus on the necessity of heterogeneous compute, on-premise AI for data sovereignty, and the importance of latency and cost optimisation. Divergence remained on the relative emphasis of hardware versus software development and on the adequacy of current funding mechanisms. The participants collectively outlined a roadmap that blends visionary goals with concrete partnerships, use-case pilots and policy-level considerations to drive India’s transition to population-scale AI infrastructure.


Session transcript: Complete transcript of the session
Renu Raman

Announcements and a lot of activities going on here this week. Excited about it. We are excited about introducing what we do and what we do more in the context of India. We just launched our offering and we’ll be talking more of what we do with our partners in the coming weeks and months. But today, I’d like to introduce ourselves. But before we introduce, we want to set the context of where we fit in, both in the industry trends and the ecosystem and what category we go after from an enterprise private cloud infrastructure. And then we’ll get into sharing some of our partners that we work with and then a Q&A at the end of it with a presentation from Bharat Jain and from Zeta Bolt.

We’ll have an interactive Q&A on some key top three questions or end questions that we think need to be answered. With that, let me start with the first. I want to… thank our sponsors and our collaborators and partnerships at UC San Diego, where they have an initiative for public-private partnership at UC San Diego for AI for education, AI for research, and AI for industry. And we are one of the early industry partners. There’s a newly constituted data science and data center institute called School of Computing, Information Sciences, and Digital Sciences. And we’ll talk a little bit more about it downstream. But this collaboration enables us to not only work on technologies, but also look at key use cases, particularly on health sciences, because San Diego has got one of the largest health science, both hospital system as well as clinical research and variety of health and biotech research.

With the thesis that fundamentally computing is going to be driven by biology and health, it’s a very key partnership that we hope to work with going forward. With that, let me step back. This is my standard slide I use in any presentations in terms of long-term reminders, what happens in technology. So where we fit in, we’ll just walk through for the next 20, 30 minutes about what we are doing from a systems innovation, but the systems innovation is going to be punctuated or represented in the context of where the technology shifts that have occurred and will occur as we go forward. So simple reminders are we, as humanity, underestimate. We overestimate what can be done in two years, but we underestimate what can be done in 10 years.

You can go back in history, look at self-driving cars, look at Neuralink. I remember a slide I had put at UC Berkeley, a conference about programming languages and productivity languages, and kind of a very tongue-in-cheek thought: you just have to think and write and get confused. And I thought, well, I’m going to record out. That was in 2014. I’ll put the slides out later. I thought it would be science fiction, never happen for hundreds of years. But guess what? You can think, you can put a Neuralink, and probably have cursors generate code for you today. That I never thought about in 2014. So never underestimate what will happen. The big technology shifts that occur every 30 years, 15 years, 7 years.

But the key thing is that semiconductors drove technology innovation in the 80s and 90s, thanks to Moore’s Law, and the cloud phenomenon happened over the last two decades. I see the pattern now where NVIDIA is innovating tremendously from the silicon side up, while innovations are also coming top-down, from use cases, from language models, and from higher-order functions in AI, and both are arriving at the same time. A third bullet I would add: people who are serious about software should make their own hardware. The corollary is that people who are serious about hardware should also make their own software.

So I’m a hardware guy who’s done software, and in this venture I’m doing software first and going to the hardware later, a kind of reverse model. This is the last thing I’ll say about myself: my professional life was shaped, luckily, by the period between the 1980s and 2000, the exponential peak of Moore’s Law. I happened to be part of that semiconductor innovation cycle, having developed and delivered a number of world-class microprocessors. Today we talk about models. Back then there were only four or five teams that could do microprocessors; these are difficult problems tackled by very small teams of about 150 people. If you look at foundation models today, it’s the same characteristic. They are hard problems, and of course it’s a lot more money, a billion dollars and lots of GPUs, but you still need roughly the same 150 people to build the models. It’s not like everybody can do them. So there is a similarity between what happened with microprocessors in the 90s and what I see today in model building: the same level of complexity, where you need the best and brightest. That’s not my number; Sam Altman has quoted roughly 120 people. And you need to give them the right computing resources.

We also need a lot of computing resources to build the models. So with that, let me start on the next wave, where we hope to drive innovation and disruption in systems building. The context for why this is economically interesting and valuable: if you look at GDP, we’ve gone in the last 20 years from 33 trillion dollars to almost 100 trillion dollars, and by all accounts GDP could improve by 2x to 4x in the next 20 years. The SaaS era was really a productivity improvement; it only scratched the surface of productivity, whereas AI is going to impact 95% of work. So the TAM is much bigger, the impact is much bigger, the blast radius is much bigger than in the last 20 years.

That’s why much more computing is also needed. We’ve gone from about 50 billion dollars of infrastructure spend, I believe, in 2000 to about 250 to 300 billion dollars of capital expenditure on infrastructure now. Power goes in, capital gets spent: every dollar of power you spend ends up being 3 to 5 dollars of capital for compute, memory, network, and storage, and from there you build the upper layers of software and then the applications. So that 50 billion became 300 billion, but if you look at all the spending we’re already at 400 to 500 billion dollars, and by all accounts in the next 5 to 10 years it will be almost 2 trillion dollars of spend.
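The power-to-capital multiplier just mentioned can be turned into back-of-envelope arithmetic. The 3x to 5x figure is from the talk; the function name and the example input are illustrative:

```python
# Back-of-envelope: infrastructure capital spend implied by a power budget.
# The 3x-5x capex-per-power-dollar multiplier is the figure from the talk;
# the $50B example input is illustrative.

def implied_capex(power_spend_usd: float, multiplier_low: float = 3.0,
                  multiplier_high: float = 5.0) -> tuple[float, float]:
    """Return the (low, high) range of compute/memory/network/storage
    capital implied by a given power spend."""
    return power_spend_usd * multiplier_low, power_spend_usd * multiplier_high

low, high = implied_capex(50e9)  # e.g. $50B of power spend
print(f"${low/1e9:.0f}B to ${high/1e9:.0f}B of infrastructure capital")
```

Running it for a $50B power budget yields a $150B to $250B capital range, the same order of magnitude as the numbers quoted above.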

That obviously creates a big demand-supply gap. The great thing about programming is that every time there is a new layer of abstraction, programming gets simpler, which brings more people to the party. What LLMs and transformer models have done is enable everybody to program. We are all logical, we can all think algorithmically, but not everybody could program. Finally we have a tool for programming in natural language: your mother, your grandmother can talk to the computer, tell it what steps to take, and it will do the steps for you, or tell you what steps to do. That’s the fundamental shift, which means at population scale you’re going to have computing for everybody. That creates a huge gap. It’s not even 1000x; as Jensen would say, it’s a billion x. Absolutely true, but it creates a big technology gap, a supply gap, and increasingly, because of models, languages, and data, a sovereignty gap as well, which is the theme of this conference. That continues to drive a tremendous amount of demand. Now, we have seen a little of this before. I have been through the first two cycles of innovation, in semiconductors in my first job at Sun Microsystems, then the dot-com era, and now, and there was always a demand-supply gap in each of these transitions. But we solved it each time.

It doesn’t mean you can solve it the same way, but we are at the crux of solving it in a similar way, with a different set of boundary conditions, if you will. What we solved between 1990 and 2000 was the shift from clock rate and single CPUs to multicore, threaded, and distributed systems, and that became the cloud phenomenon. I have a slide later to show what that transition was. I’ll probably skip this slide; I think everybody knows we need lots of power. One interesting point is that India is going from almost nothing, less than a gigawatt, to about a 10-gigawatt buildout, while the U.S. is going from 25 to 125 gigawatts, along with other regions and China.

EMEA, on a relative basis, is going to be a lot less. But we already see the need for AI-ready, geo-local data centers; everybody is building out. So what is the infrastructure, and what is the architecture to support that? There are certainly reference architectures inside the hyperscalers: Google has a TPU-based infrastructure, AWS has Trainium plus general-purpose computing, Microsoft has Maia, and of course there are NVIDIA and AMD. But increasingly, over time, you want an open, multi-vendor strategy, and that’s probably what we’ll talk more about.

So why do I believe these transitions in distributed systems drive new innovation up and down the stack? This is not new; it has happened throughout history. The VAX-11/780 was disrupted, at that time by the PC, but on the enterprise side more so by Sun and the workstation. If you think of the first distributed system in the modern era, it was at Sun Microsystems, where Ethernet was used to build a distributed system and the Network File System, and that was version one. Over time, like evolution, you gain more mass, more momentum, more weight in your capabilities, and you end up building big, monstrous machines like the E10K, which drove the internet and the dot-com era. But that was also an Achilles heel, because it was not going to enable the much bigger scale people had to build at, and Google was probably the epitome of the next big shift. I’ll talk about that. We see a similar thing today: we’ve gone from CPU-only, dual-socket x86 memory clusters to heterogeneous compute, but also to fairly large scale-up systems. The interesting transition today, as you’ve seen in the news lately with the Grok acquisition by NVIDIA and others, is that there is clearly a separation between training workloads and inference workloads and the kinds of systems you want to support each, because inference is going to drive a lot more of the compute. One way I think about it is that inference, along with workloads related to biology and healthcare, is going to be the driver of computing the way graphics was in the 1990s.

So this reinforces the point that between 1994 and 2005 we saw the shift from version 1.0 of distributed systems to version 2.0, which was open source. The first 20 years were about open systems; then open source came and enabled a new way to build distributed systems, because economically it removed the cost of the middle layer of software. Everybody got access, in this case to Linux rather than Solaris. That also made it possible to build clusters of truly hundreds of thousands of machines, and out of that came Borg, Kubernetes, distributed file systems, and all kinds of other innovations. The proposition here is that I think we are at the cusp of similar things for inference-scale computers.

Just a reminder: the punctuation each time, if you look back in history, turns out to be Ethernet. Yes, the network is the computer, but more importantly, 10-megabit Ethernet was the onset of replacing big mainframes and minicomputers like the VAX with workstations and networks of workstations. Then, right as 10-gigabit Ethernet arrived around the 2000 to 2002 timeframe, multicore came along with it, and together they enabled the new distributed building block.

We are at the same point. We have 400-gigabit Ethernet going to 800-gigabit and probably terabit Ethernet networks, and that, hopefully, will be the enabler; that’s the bet we are making. The other elements of the system are the network and then the memory. Do you build a full scale-up system at data-center scale? You certainly need that for training, for backpropagation and the forward pass, but inference can be much more distributed and shardable, and it’s time to rethink what kind of systems you want for an inference-dominated infrastructure. The other dimension to think about is that we’ve gone from a single memory type to multiple memory types, so do we need four different types of memory to deal with the various layers, or just two, or one?

That’s the subject of a lot of debate in the technical community, but it’s a critical decision. A way to think about this: we think of the entire system not in terms of flops and GPU compute. GPU compute and CPU compute are needed, but what really matters is what the memory hierarchy, the memory system, looks like. There is a physical view, because that dominates the cost function and the power function, but equally, from a performance standpoint, you are caching lots of different data for computing: think of the KV caches on the LLM side and the in-memory representations of much of the data. That’s a layer that is rich in continuous technical innovation, where we hope to have influence and make a mark.
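The KV-cache point can be made concrete with the standard transformer sizing formula: two tensors (keys and values) per layer, scaled by heads, head dimension, sequence length, batch, and datatype width. All model dimensions below are hypothetical, not from the talk:

```python
# Rough KV-cache sizing for a transformer serving stack. The formula is
# the standard one; the model dimensions in the example are illustrative.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, batch: int, dtype_bytes: int = 2) -> int:
    """Bytes held in the key/value cache: 2 tensors (K and V) per layer."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * dtype_bytes

# e.g. a hypothetical 80-layer model, 8 KV heads of dim 128,
# 32k context, batch of 8, fp16:
size = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                      seq_len=32_768, batch=8, dtype_bytes=2)
print(f"{size / 2**30:.1f} GiB of KV cache")
```

Even this modest configuration needs 80 GiB of cache, which is why the memory system, not flops, tends to dominate inference cost, exactly the point being made above.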

Then the large part is the logical view of memory, especially deep context. You want to go from session to session, location to location, and keep your memory state; you want to be able to switch models and retain some of that state. All of this consumes various logical and physical layers of memory. So that’s how we think about it. Putting it all together, we are taking a bet on Ethernet, taking a bet on memory, and building inference-scale compute for population scale, in this case India, and for key verticals like health sciences and others. There’s another important element we want to highlight.

Let me take a quote from Jensen.

Jensen Huang

One of my favorite applications is just good old-fashioned data processing, structured data and unstructured data. And very soon we’re going to announce a very big initiative in accelerated data processing. Data processing occupies the vast majority of the world’s CPUs today; it still completely runs on CPUs. If you go to Databricks, it’s mostly CPUs. If you go to Snowflake, mostly CPUs. SQL processing at Oracle, mostly CPUs. Everybody’s using CPUs to do SQL, structured data.

Renu Raman

Taking a cue from what he’s saying: historically, databases and SQL have all run on CPUs, and that will remain the case for a variety of reasons. That’s an important input to why we believe the new systems we compose going forward need a happy blend. There are ways to design systems for the hyperscalers, but there is also a whole category of use cases and customers on the private side who don’t need 100,000 machines but smaller-scale systems, and those need a happy blend of CPUs and GPUs. That’s the main point. In that context, we have taken a position to work in partnership with AMD, because they have the x86 CPU assets, a compelling GPU roadmap, and an architecture that supports both, from the network side as well as the memory side. They have higher memory capacity for LLMs: it started with 256 GB of HBM, which supports at least 128-billion-parameter models, and it’s going to 288 and 512 GB in no time, which means we can fit fairly sizable models. That lets us apply classical distributed-systems principles, a single node that captures most of the workload for most customers, and optimize for that.
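The pairing of 256 GB of HBM with roughly 128-billion-parameter models follows from simple arithmetic at 2 bytes per weight (fp16/bf16). This sketch ignores KV-cache and activation overheads, so real headroom is smaller:

```python
# Why 256 GB of HBM roughly fits a 128B-parameter model: at 2 bytes per
# weight (fp16/bf16), the weights alone take params * 2 bytes.
# KV cache and activations are deliberately ignored in this sketch.

def weights_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Gigabytes needed for model weights alone."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for hbm_gb in (256, 288, 512):
    max_params_b = hbm_gb / 2  # billions of fp16 parameters that fit
    print(f"{hbm_gb} GB HBM -> ~{max_params_b:.0f}B-parameter model (weights only)")
```

By the same arithmetic, 512 GB of HBM would hold roughly a 256-billion-parameter model before any serving overhead, which is why growing HBM capacity matters for single-node inference.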

Coming back, before we get into what we do at Proximal, I want to emphasize the partnership with UC San Diego. They have a supercomputing data center for NSF and DARPA research, where we are doing work at the hardware level, at the middle layer in compute kernels and inference engines, and at the use-case level. As I said, there’s a data science institute with AI-for-education programs to transform the undergraduate and graduate curricula using the same tools, advanced research capability, and work in health sciences. With that, I think that sets the motivation.

Thank you. Now, on to what we’re doing in Proximal Cloud. In the next phase I want to go specifically into what we are launching across the four layers, the key components of what we are building and delivering to many cloud partners, starting in India. There’s also a “why India” question. One aspect is that India demands extremely low-cost, inference-scale compute at population scale, and that’s a challenge we’re really excited to work on. So the first thesis: why do we need compute other than the cloud? I think the best way to answer is to let Michael Dell tell you what he sees. Let’s go to the beginning here.

Michael Dell

Yeah, so in the last year we delivered a little over 3,000 of these Dell AI factories, and those are increasingly going to enterprise and commercial customers who want to bring the AI to their data, not the data to the AI. There’s just a ton of data that is still on-prem and being generated on-prem.

Renu Raman

If you have a particular question in a domain you understand, we can try it out after this. So with this we enable interactive learning for the students, contextualized intelligence, and, of course, instructor empowerment. The way it will look and feel is like a Jupyter notebook, with the research content and arXiv papers on the extension side for them to use. It’s an add-on; it doesn’t have to be integrated. It’s a commercial AI chat, if you will. The next example would be MRI images, but unfortunately I’m not able to log in remotely to that right now, and the local copy I had I’m not able to show either.

So at this point, let me summarize what Proximal is. The word Proximal means bringing compute closer to your data. It means being sovereign to the nation, the region, or the business that cares about it. It also means bringing compute closer to memory, and bringing compute closer to where the business is. That is the thesis. We are not doing this alone; we are doing it with technology partners and some key customers and partners. With that, for an example use case, let me bring up Lalit Bhatt, director heading the India office for PharmEx, a key partner.

Lalit Bhatt

Thanks, Renu, and thanks to Proximal Cloud for giving us the stage. What I’ll do first is talk a little about what PharmEx stands for, and then about why, in this space, local compute and all these things are becoming important. PharmEx is basically a comprehensive AI stack. On the left-hand side we have a lot of in-field sensors: a complete, comprehensive platform with not only soil-moisture sensors but dendrometers and multiple other sensors. We also have imaging capabilities, where we can take images using satellites and drones, and we have an autonomy stack; we have just acquired an autonomous electric tractor.

These are pretty big machines; they might look like Transformers, but they are almost 70 to 80 horsepower machines, and we are putting our autonomy stack on them so that they will run completely autonomously. Let me just run a small clip. [clip plays] Again, I think this is pretty standard; everyone understands that to do AI you need data. What then becomes important is how efficient you are at running inference over that data. We are dealing with a huge amount of data, and that’s where we are looking at technologies that can reduce our cost. Everyone understands that in agriculture it is very difficult to ask the farmer for a lot of money, so we win where we can make our operations more efficient and effective at dealing with large amounts of data and running inference on top of it. Essentially, we get a lot of data from both the imaging side and the sensor side, and then all our engines run, which leads to diagnostics and recommendations. This is just an example of the kind of thing we do with our customers.

Here you would see things like fully autonomous irrigation scheduling. A lot of data points go into those models to create the schedules, along with anomaly identification, crop stress, yield prediction, frost prediction, and we have even worked on soil percolation models. It depends on what sensors you deploy. In India, for example, we sell a two-foot probe with four sensing points along its depth; with the whole controller unit it sells at about 45,000 rupees per unit. We usually recommend one unit per hectare in India, though that will change based on soil variation, but it has been a good ballpark.
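A toy version of the irrigation-scheduling idea above can be sketched as a threshold rule over the probe’s multi-depth moisture readings. The thresholds, field names, and rule are hypothetical illustrations, not PharmEx’s actual models:

```python
# A toy irrigation-scheduling rule driven by multi-depth soil-moisture
# readings, in the spirit of the four-point probe described above.
# Thresholds and the triggering rule are hypothetical, not PharmEx's models.

def irrigation_needed(readings_pct: list[float],
                      wilting_point: float = 18.0,
                      field_capacity: float = 32.0) -> bool:
    """Irrigate when average root-zone moisture drops below the midpoint
    between wilting point and field capacity (a common rule of thumb)."""
    trigger = (wilting_point + field_capacity) / 2
    avg = sum(readings_pct) / len(readings_pct)
    return avg < trigger

# Four sensing depths on one probe, volumetric moisture in percent:
print(irrigation_needed([21.0, 23.5, 26.0, 28.0]))  # average 24.6 < 25.0 -> True
```

Real schedulers fold in weather forecasts, crop stage, and soil type, which is where the model inference (and its cost) comes from.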

So yeah, I guess that’s it. The whole theme is that we are also looking to really reduce our inference cost, and that’s where Proximal Cloud comes into the picture. Thank you.

Renu Raman

Thank you, Lalit. Okay, next we’ll have Bharat, Director at Divium, a key partner who, as I mentioned earlier, works on model selection and runtime optimization that is, or will be, integrated into our stack. So, Bharat.

Bharat

Hey, good afternoon, everyone. Let’s address the hard truth out there: 90% of gen-AI pilots never make it to production. Not because the demo was bad or the models were weak. It’s primarily for three reasons. Number one, quality is undefined: what’s good for one use case is not necessarily good for another, and there’s no standardized way to do evaluation or regressions. Number two, costs are unpredictable: between the cheapest model and the best model, prices range by 10 to 50x, and the moment your application goes into production and hits real traffic, costs spike. And number three, there are AI engineers running experiments and trying to tune all this.

But model selection is always a moving target: there are always new models coming that fix one thing and break something else. Without addressing all three, it’s very difficult for an enterprise to take a pilot to production, and that’s why we built Divium. Divium is the only inference layer built on quality. Divium defines measurable evals aligned to each use case and optimizes every incoming query to select the model giving you the best quality per dollar. Divium also automates the entire model-selection process by continuously evaluating new models, deprecating old ones, and migrating you to new ones; if we find something better, we auto-upgrade without breaking production. Evals first, routing second: that’s what makes Divium different from every other routing platform out there. Divium is the only inference layer with customer-specific intelligence. Your apps can be AI agents, RAG pipelines, or multi-agent workflows, and the LLMs can be the standard OpenAI and Anthropic models, your own fine-tuned models, or deployed open-source models. We sit right in between, and we provide you a single API.

We continuously evaluate each and every incoming request, route it to the model giving you the most optimal performance, and give you detailed visibility into which models are working, how your agent is performing, and what the overall quality is. Remember, Divium is trained on your data, your agents, and your quality; there’s nothing generic out there. And this isn’t just theory; we’ve already proven it across multiple deployments. For India’s largest travel aggregator, which runs a conversational shopping assistant on its application homepage, we cut costs by more than 60%. For one of India’s leading e-pharmacies, the customer-support chatbot needed lower latency, so we ended up reducing cost by 30% and latency by 30%,

leading to a case-resolution improvement of 95%. As you can see: different use cases, different industries, but the same result, lower cost and better outcomes. And we understand enterprise realities. You can keep your data secure; we have flexible deployment options, be it SaaS, privately hosted, or on-prem clusters; you stay in control. If you’re trying to take your AI pilots to production, feel free to reach out to us. Thank you.
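The “evals first, routing second” idea can be sketched as a two-step filter: drop candidates below a use-case quality bar, then maximize quality per dollar among the rest. The model names, scores, prices, and threshold below are invented for illustration; this is not Divium’s actual algorithm:

```python
# Minimal "evals first, routing second" sketch: score candidates on a
# use-case eval, then route to the best quality-per-dollar. All names,
# scores, and prices are invented; this is not Divium's actual system.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    eval_score: float    # 0..1 on a use-case-specific eval set
    usd_per_mtok: float  # blended price per million tokens

def route(candidates: list[Candidate], min_quality: float = 0.8) -> Candidate:
    """Drop models below the quality bar, then maximize quality per dollar."""
    eligible = [c for c in candidates if c.eval_score >= min_quality]
    if not eligible:
        raise ValueError("no model meets the quality bar; relax min_quality")
    return max(eligible, key=lambda c: c.eval_score / c.usd_per_mtok)

choice = route([
    Candidate("big-frontier-model", 0.95, 15.0),
    Candidate("mid-tier-model", 0.88, 3.0),
    Candidate("small-open-model", 0.72, 0.5),
])
print(choice.name)  # mid-tier-model: best score-per-dollar above the bar
```

Note the order of operations: the cheap model is excluded by the eval gate before price is ever considered, which is what distinguishes eval-driven routing from pure cost-based routing.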

Renu Raman

Thank you, Bharat. So we’ve talked first about application use cases, one in education and one in agriculture, and second, about how we are bringing optimization to the system stack, some of it ourselves and some with our partners. Third, we want to show how we get customers, many of them mid-market and small as well as large, enabled on our platform. I’m happy to introduce Sandeep Kumar from Instant Systems, a venture builder we have partnered with here in Delhi to take this to a variety of customers, small, medium, and large, with higher velocity. Let him describe what they can do and how we partner.

Sandeep Kumar

Hello, everyone. I’m Sandeep Kumar from Instant Systems. We are a Silicon Valley-based venture builder. We do not just build startups; we grow them. We are partners in every domain of a startup, be it engineering, product, or marketing; we give them a full blueprint for being successful, and we co-invest so that we are there at every step of their journey. We are sometimes confused with an incubator, but we are a venture builder that actually helps at every step of building a successful startup. Within Instant Systems I am mostly responsible for a company called VanEye. We usually do not disclose the names of the companies we partner with, to protect IP and confidentiality, but I’m sharing this use case to show what our capabilities are and what we have been able to build so far. This company has received nearly 200 million dollars in funding from top investors, including SoftBank. We are building AI conversation software here, dealing with real use cases and real challenges, mostly for financial or finance-based industries, though the solutions are also generic to analytics-based industries. So I’m going to talk about some challenges that are common to every AI-based solution.

We’ve been able to identify these challenges and solve them for this particular use case. One of the biggest challenges every AI-based system faces is hallucination: LLMs always try to answer your question irrespective of how much context they have. We’ve been able to solve this problem to a very good extent, and our system is almost 99% reliable; it does not hallucinate. That was the biggest problem we solved. The next challenge is disambiguation: in spite of providing context, the system sometimes cannot disambiguate between specific terms that exist in different domains.
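One common guardrail for the “LLMs always answer” problem just described is to gate generation on retrieval confidence and refuse when the context is too thin. The threshold, scoring, and function shape here are illustrative, not the speaker’s actual system:

```python
# Grounding gate: only let the LLM answer when enough sufficiently
# relevant context was retrieved. Threshold values are illustrative.

def answer_or_refuse(question: str, retrieved: list[tuple[str, float]],
                     min_score: float = 0.6, min_passages: int = 2) -> str:
    """retrieved = (passage, relevance_score) pairs from the RAG store.
    Refuse rather than hallucinate when the evidence is too weak."""
    grounded = [p for p, score in retrieved if score >= min_score]
    if len(grounded) < min_passages:
        return "I don't have enough context to answer that reliably."
    # placeholder for the actual LLM call, constrained to `grounded`:
    return f"[answer grounded in {len(grounded)} passages]"

print(answer_or_refuse("What was Q3 revenue?", [("...", 0.3)]))
```

The design choice is that refusal is an explicit, checkable output rather than something hoped for from the model itself, which is what makes the behavior testable.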

That’s also a problem we’ve been able to solve. And, very closely related to the theme of this session, data security and data privacy are major industry concerns. We’ve been able to address them so that data privacy and data access control are managed at the row level, or in more technical terms, at the object level. We’ve tackled that problem and solved it efficiently, and it’s already running and working fine. Evaluation and quality management is also one of the key areas.

That’s something we need to solve as part of the venture we are building, and we’ve been able to solve it very efficiently. Another thing is reliability: since we are talking about financial systems, the system has to be reliable every single time. You cannot send a million dollars to someone’s account by mistake; that doesn’t work in the financial world. And you cannot report data showing losses instead of revenue, or vice versa, because you cannot survive in that world on hallucinated or incorrect data. With our advanced architecture, we’ve been able to solve that problem as well.
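The row-level access control mentioned a moment ago can be sketched as filtering data by the caller’s entitlements before it ever reaches the LLM’s context. The schema, region-based entitlements, and field names are hypothetical:

```python
# Row-level access control in front of an LLM: filter rows by the user's
# entitlements before they enter the prompt. Schema and the region-based
# entitlement model are hypothetical illustrations.

def visible_rows(rows: list[dict], user_regions: set[str]) -> list[dict]:
    """Return only the rows the caller is entitled to see."""
    return [r for r in rows if r["region"] in user_regions]

rows = [
    {"region": "APAC", "revenue": 120},
    {"region": "EMEA", "revenue": 90},
    {"region": "APAC", "revenue": 45},
]
print(visible_rows(rows, {"APAC"}))  # the EMEA row never enters the context
```

Enforcing the filter before prompt construction, rather than asking the model to withhold data, is what makes the guarantee hold regardless of model behavior.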

There’s a long list of problems we’ve been able to solve, but I’ll cut it short. In the system we’ve been building, we’ve kept performance and reliability high and kept a check on the cost and efficiency of the system. That’s how we’ve been able to serve different audiences and customers from different niches. So that’s our theme: we are a venture builder. Please feel free to reach out to us; we’d love to talk about your startup. We don’t pick only select startups to work with; you’re all welcome to reach out, and we can discuss all the stuff that we are doing.

Thank you so much.

Renu Raman

Thank you, Sandeep. I think that wraps up what we’re doing at Proximal and what our partners and customers are working on in the early phases. We have partners in the U.S., like UCSD in life sciences, health sciences, and education, and here in agriculture, with more to come. It turns out this coincidentally aligns with the Government of India’s initiatives in education, health, and agriculture; it was not planned, it just turned out that way. With that, I can take any questions. We had a small panel session we can go to. I don’t know if Piyush has come, but I think there’s a question here.

We have Arya here, from Infosys, a Senior VP at Infosys. Please.

Audience

Hello, excuse me, my name is Arya Bhattacharjee. I am an entrepreneur from Silicon Valley, and right now, as Renu said, I am driving the semiconductor and AI vision for Infosys from the United States and India. The reason I am here is the very important question Renu raised: what is the future for India, and how can India capitalize on and make a mark in this journey? There’s no short answer, but I can tell you what we are trying to do at Infosys, because if India is going to win this Semicon 3.0, or 4.0, or 2.0, I don’t know which, it has to be in software, it has to be in AI. Chip building is going to take some time.

Renu said that 80% of the data is on-premise, and on the semiconductor side this is absolutely true; more than 90% of the data is on-premise. So the whole journey of how to take that data and create solutions, through an agentic AI approach, through distributed computing, and by actually owning the architecture to lower inferencing cost, is the main challenge. To answer Renu’s question about the future of India: I think India should look at a domain. At Infosys, at least, we have selected the semiconductor domain, which is a large one, and we have taken leadership with some major clients right now. I can’t talk about details, but we’re using agentic AI on-premise, delivering productivity and AI solutions, and improving chip-making productivity by at least 25%. And every day saved in a semiconductor fab is worth about 10 million dollars, benchmarked for a 7-nanometer type of technology, not even 1.9.

So with that, good luck to Renu and I look forward to collaborating. Thank you.

Renu Raman

Thank you, Arya. Now we welcome Abhishek Venjan, but first, just to summarize: the graph underneath the enterprise data is, I think, the most important AI factor. Organizing the data layer turns out to be probably the most complicated thing; it spans the enterprise so that the data can meet the intelligence, and that’s where I think a lot of the work will be. We still don’t really have deep research in a corporate context. We do in part, that’s what Copilot is about, but most people day-to-day do not have it. So are they just underusing the AI that exists? Yes. In fact, it’s interesting to bring that up, because to me that is the killer feature.

The biggest thing was taking this graph that sits underneath what I think is the most important database in any company: it’s underneath your email, your documents, your Teams calls, what have you. It’s the relationships that, by the way, power the AI. Organizing the data layer: that’s the best summary. Obviously Satya wants to do it in the cloud, and that will happen, but you also need it on-prem or near-prem, isolated and sovereign, with the same capabilities. In a sense, that’s what we bring to the enterprise, if you will. Any other questions before we go to the panel session?

Abhishek Singh

Thanks for having me here. This is Abhishek, founder of ZetaVault. We have done a lot of work on LLM acceleration, which means we offload large language models to specific chips and custom silicon and thereby get inferencing gains. We have Renu here, who has a wealth of experience on the distributed computing side; we were supposed to have a panel discussion, but I thought I would pick his brain on the challenges and changes he has seen in the industry. So, Renu: you were part of Sun Microsystems, one of the early pioneers of distributed computing. From Sun, which was maybe distributed systems 1.0, to Linux, which pretty much democratized the entire computing space and brought us Linux on x86; now, from embedded devices on up, almost all computing happens on Linux.

So that was distributed systems 2.0. And now, coming to the distributed computing space with open models: open source has played a big role in the proliferation of distributed computing. What do you foresee, or what do you envision, the open models are going to do for distributed computing? Are we going to see a distributed computing 3.0?

Renu Raman

Hello. Yeah. So that's a fundamental thesis in that. I mean, we are, in a way, part of that continuum to some extent. Not to take anything away from how NVIDIA designs, but there is a clear bifurcation going on right now, as we speak, between training and inferencing. And then there are open source models, and a variety of customers' use cases will use and need the open source models. There has always been a history of open and closed in every transition. If you go back to the 80s, or if you look at what enabled the cloud, it was hypervisors. There was KVM and VMware.

The same thing will apply. There will be open models and closed models. But the way I like to think about it is that models are a new abstraction layer separating the underlying computing needs from everything above. Hypervisors separated the physical machine from the virtual machine, and operating systems, Unix at that time, also did that. In the same way, models are the abstraction layer that enables a higher degree of innovation from both closed and open models. The closed ones will probably be innovated on within OpenAI and Google, but the rest of the world will take the open source model, like what happened with Linux, and innovate. It's not just going to be an NVIDIA GPU or an AMD GPU; there could be a plethora of GPUs, country-specific, region-specific, domain-specific.

Anything can happen over time.

Abhishek Singh

That's a very wonderful take. One of the things we have been wondering about is the latency you talked about in the various scientific and other applications you're working on. When we build solutions for our customers, we build a lot of natural-language-to-query-processing kinds of solutions. We have been able to do maybe a sub-minute solution, which is acceptable to the customer because, from weeks or days, he is able to get the answers to his queries in less than a minute, right? But even a minute is not sufficient. When you talk about really interactive queries, you want sub-millisecond, or maybe sub-second, response.

What are your thoughts on that? Is it even possible to provide query processing to a population or a large customer base like the roughly 1.5 billion people we have in India, at a very low cost, maybe 200 rupees per month, at a scale where responses come within a sub-second?

Renu Raman

I think that's a very good question. Sometimes scaling the problem is more important than the answer. An interesting way to frame it: if you go back and look at history, why did Google succeed? A fundamental decision they made on the toolbar was that every query response has to come in 20 milliseconds. Nobody thought about it prior to that; it's obvious today. But that key proposition, that definition or question, asked maybe by Larry Page or Sergey Brin, whoever it was, led to what we see as Google today in the back end: a huge amount of infrastructure to satisfy a 20-millisecond response to any query. To me, the same thing applies today. Maybe 20 is too hard; I'm just going to arbitrarily pick 120. I have a simple demo, an animation, I was trying to show: within 120 milliseconds you want to have the answer. Today, if you go ask a question, it will take seconds, sometimes longer. We are all impatient; we want the answer in quick order. When I ask you a question, you don't say, let me think and come back; you want to give the answer. For a very deep question, fine, you can think and come back, but we can throw more computing resources at getting the answer faster. And it's not just hardware; it's going to be algorithmic improvements and other improvements. To me, that's the benchmark: get to 120 milliseconds for any query, for anybody. There's a global context and an India context. India provides an ample opportunity: 1.4 billion people, if you can deliver at a cost point like 200 rupees a month, at 120 milliseconds, with any query handled. It's a long road, but if you can meet that objective in 10 to 20 years, it serves a lot of people, and it will also drive a tremendous amount of innovation. That's why, when somebody says population scale, India has a unique combination of the population-scale problem and the cost problem. So hopefully there are enough people here, as Arya said, in semiconductor 3.0 and other innovations, who can drive building India's own sovereign, lowest-cost, shortest answer, in any language, to the question that you asked.
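
The population-scale target Renu sketches can be put into a quick back-of-the-envelope calculation. The exchange rate below is an illustrative assumption (it was not stated in the session), and the variable names are ours:

```python
# Back-of-the-envelope sizing of the target stated in the session:
# 1.4 billion users, 200 rupees/month, a 120 ms per-query latency budget.

USERS = 1.4e9             # India-scale user base cited by the speaker
PRICE_INR_MONTH = 200     # target price per user per month
LATENCY_BUDGET_MS = 120   # answer target for any query, in milliseconds
INR_PER_USD = 85          # assumed exchange rate (not from the talk)

monthly_pool_inr = USERS * PRICE_INR_MONTH           # total revenue pool per month
annual_pool_usd = monthly_pool_inr * 12 / INR_PER_USD

print(f"Monthly revenue pool: {monthly_pool_inr:.2e} INR")
print(f"Annual pool: ~${annual_pool_usd / 1e9:.0f}B at {INR_PER_USD} INR/USD")
print(f"Per-query latency budget: {LATENCY_BUDGET_MS} ms")
```

The point of the sketch is that even at 200 rupees a month, a population-scale service implies a revenue pool in the tens of billions of dollars a year, which is what would have to fund the infrastructure behind a 120 ms answer.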

Abhishek Singh

An interesting take. One of the things that keeps coming up is the scale these global corporations have been able to reach, now that AI is going mainstream in India. And there is a parallel theme on the semiconductor side, right? The government is putting a lot of focus on it, and private players are putting a lot of focus on it. We have an esteemed audience, including our guest from Infosys. And the question everybody keeps wondering about is, with AI speeding things up, with the productivity gains he's talking about: what kind of corporations can come out of India? Can we see an NVIDIA coming out, or, I don't know, a Palantir, or a Supermicro, or even a new version of Sun Microsystems, just because there is so much emphasis on AI and the semicon side?

I'll let Renu talk, and then maybe you can also share your take on this particular question: what kind of corporations can come out? Your take.

Renu Raman

Can there be an SAP coming out of this AI transition? Why not? To give you some raw numbers: every gigawatt of power will require $25 billion worth of compute, memory, network, and storage. So if India is going to do 10 gigawatts, that's $250 billion of hardware. That sustains multiple Supermicros, or a semiconductor ecosystem at that scale. So certainly the investments going in for power, which is a long-lead item, are important, but the next layer provides the economic value to host the hardware systems companies; the HPs, the Dells, the Supermicros can emerge. You can go through each layer of the stack. The next layer is the application tier; Proximal is in that. Maybe we could become the SAP of tomorrow.
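
The capex arithmetic in that answer is simple enough to write down. A minimal sketch, using only the $25B-per-gigawatt figure as stated (the function name is ours):

```python
# Renu Raman's capex rule of thumb: every gigawatt of data-centre power
# implies about $25 billion of compute, memory, network, and storage hardware.

CAPEX_PER_GW_USD = 25e9  # $25B of hardware per gigawatt, as stated in the session

def hardware_capex(gigawatts: float) -> float:
    """Return the implied hardware spend in USD for a given power build-out."""
    return gigawatts * CAPEX_PER_GW_USD

# The two scenarios raised in the discussion: the 10 GW target and the
# audience member's "let's start with 1" gigawatt.
print(f"10 GW -> ${hardware_capex(10) / 1e9:.0f}B of hardware")
print(f" 1 GW -> ${hardware_capex(1) / 1e9:.0f}B of hardware")
```

This is the linear scaling behind the $250 billion figure; the later audience question about starting with one gigawatt corresponds to the $25 billion case.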

That could be a Palantir, at the application tier; not just Palantir, any other company. So, with both the scale, if you can solve the technology, and the cost economics, it's not restricted to India; it can be global. Unlike the China model, which ended up being a very closed walled garden, I think India has the opportunity to make in India and make for global, which is much better. But you just have to think bigger, and more importantly take bolder bets and go for the long haul. Don't just work at it for 5 or 7 years; these are 10-to-20-year cycles of change.

Abhishek Singh

A very interesting take, and thanks a lot. I would also like to have your opinion on this particular question: do we see NVIDIAs, SAPs, and Oracles coming out?

Audience

Thank you, sir. So, on the semiconductor data side, for example, I have recently been working with some large companies, and I want to give a specific example. They are just ingesting data right now; I'm talking about a fab, not design. They have got 7 petabytes of data ingested, and they don't know what to do with it. And, like I said, a typical fab manufacturing facility is worth at least $10 billion.

And it's got thousands of steps. It takes about 120 days to make a chip. So that's $10 billion for 120 days to produce a wafer, and there are defects, design issues, things like that. So you take the data, from basic information, run-time and real-time data, defects, soft defects and hard defects, because a chip that doesn't hard-fail can still be slow, and slow means no money; that's a failure too. So it's collecting all that data, classifying it, understanding it, and using agents in an edge-computing way. You cannot solve this in a server. And then feeding it back into the design infrastructure. So the design time has also shrunk a lot, and the yields are going up. Thirty years ago, when I was at Intel, we were talking about die sizes of maybe one centimeter by one centimeter.

Today, on a 300-millimeter wafer, NVIDIA's latest wafer-level chip is about 20 centimeters by 20 centimeters. That level of yield and reliability is unimaginable without the use of AI. So I can go on and on, but if India has to win, I don't think India needs to become a Palantir, and India does not want to become a slave shop. The way I explain it on one page: Palantir's gross margin is 95%; Indian companies' gross margin is 30%. Can we build a business at 50% gross margin, where the domain expertise India provides, with the amount of data that is available, takes these technologies we talk about and implements them in real practice? That's where India can win.

It is the execution, with the best technology. Thank you. Thank you.

Abhishek Singh

Thanks, everyone. I have one question on the venture side. All these technologies require a lot of investment before they can actually become fruitful, right? I heard somewhere that the government of Karnataka, and I don't mean to demean them, by the way, put something like 20 crores of funds toward funding startups. Meanwhile, for a single engineer, Meta is throwing, what, $100 million at that one engineer. Twenty crores for funding hundreds of startups versus $100 million given to one engineer: there is a huge mismatch. Now the question is, for Indian companies, do the venture capitalists or the private equity firms have such deep pockets to continuously fund them with hundreds of millions or billions of dollars, so that an NVIDIA or an AMD or, I don't know, a Sun Microsystems can emerge?

Renu Raman

Actually, I'd like you to take this one. I don't want to answer; I want you to answer your own question.

Abhishek Singh

I'll answer my own question. Yes, it will require that kind of investment, right? This is a topic I touched upon a long time back. ISRO has been funded continuously, right? Initially, ISRO's rockets would all land in the ocean. But over a period of time, they gained competence; they are among the top four in the world right now. I think that kind of continuous and continued support is needed for whatever industry we are picking, whether it is AI or semicon. We need the private players, and we need the government to support it to the end. That's when the key players and the winners will emerge.

Renu Raman

I think your question has got two parts. The first part is that the government... Sorry, there was a public announcement.

There's an interruption here. I think there are two parts. One is, there's a mismatch, a demand-supply gap, in skills at the model companies, if you

Abhishek Singh

Hopefully, yeah.

Renu Raman

So why did you do it and what do you think? That’s why I asked the question back to you.

Abhishek Singh

It's good. It's fun to build for India, by the way, and to build from India. Build for India, build from India: that's why we are here, that's why this whole conference is here, and that's why all these discussions are happening.

Abhishek Singh

But thanks a lot, Renu, for all the wonderful insights. Last call: does anybody from the audience want to ask Renu a question?

Audience

Yes, sir. Thanks, sir. My question is this: you shared that if 10 gigawatts of business comes to India, that means $250 billion worth of equipment will be purchased, or something to that effect. So how can we ensure that? Leave aside 10 gigawatts; let's start with 1. If, as you said, that 1 gigawatt of business will come, how will it come to India?

Renu Raman

Today we already see that most of the hardware is either bought by the hyperscalers, who have got some capacity, and then Dell and HP are the largest OEMs; Supermicro is behind, I guess. Most of the hardware-level systems are manufactured in Taiwan and other places and brought here. And there are emergent players: VVDN, and Sanmina has got a manufacturing plant in Chennai. Who is going to come and do make-in-India? I don't want to steal the thunder, but there are emergent ones, seeing the economic value at that scale, starting to design. And we have already seen, though I don't know the details, all the phone manufacturing that's happened. So the ecosystem of building chassis, systems, and board design, the design capability, was there, but manufacturing, operations support, and all that was not.

So I do expect that to start happening. That's why we started working with CDAC and, to some extent, VVDN. We do see that there's at least a $300 to $500 million opportunity. The interesting aspect is that the Indian public market is also valuing these things fairly high; look at NetWeb and others. You can't go and raise money on NASDAQ in the public market for these kinds of businesses, but you can certainly do that in India. So it's an interesting point in time: there's demand, there's need, there need to be enough people willing to invest, and there's also probably a way to scale the business. I don't view going public as an exit.

Really, I'm viewing going public as a way to raise money to scale the business. So there's enough financial muscle getting built at all stages. But the question is, are there enough people funding at the early phases to fund some of these, right? Those things have to come together. I'm on the entrepreneur side, not the venture side; I've played both, but that has to come. My answer to Abhishek's question is: at 10 gigawatts, it's going to be multi-hundreds of billions of dollars across all the layers of the stack, and there should be enough investment going in. And if you look at what has happened in China, there's a different way to drive that capitalistic structure, right?

They have taken a centralized model but enabled a lot of districts and regional people to go invest. Look at the cars: how many car companies are there? I'm not saying you should follow the same model, but there should be enough early-stage investment at various layers of the stack. So, the opportunity, the exit

Abhishek Singh

Thank you. Thank you, Renu, for all that, and everybody who has participated. Thanks for coming, guys. We’ll close the session here, so thanks a lot. Thank you. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (38)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Renu Raman announced the recent launch of Proximal Cloud’s offering and described a flurry of activities targeting the Indian market.”

The knowledge base notes that the team just launched their offering and is focusing on activities in India, confirming the report’s statement [S24].

Additional Context (medium)

“Proximal is forging a hardware‑software co‑design partnership with AMD that provides a balanced CPU + GPU stack for AI workloads.”

AMD’s presentation highlighted that AI extends beyond GPUs and involves a full suite of hardware and software, adding nuance to the reported AMD partnership [S23].

Additional Context (medium)

“Proximal positions itself as a provider of enterprise‑private‑cloud infrastructure for India, emphasizing public‑private collaboration.”

The knowledge base discusses the importance of public-private partnerships in India’s AI and semiconductor ecosystem, providing broader context for Proximal’s market positioning [S96].

Correction (high)

“Humanity tends to under‑estimate ten‑year horizons while over‑estimating two‑year gains.”

A cited speaker stated that technology shifts are generally over-estimated both in the short term and the long term, contradicting the report’s claim of under-estimation for the ten-year horizon [S124].

External Sources (125)
S2
Oracle to oversee TikTok algorithm in US deal — The White House has confirmed that TikTok’s prized algorithm will be managed in the US under Oracle’s supervision as part …
S3
How TikTok is changing world politics — The 2025 U.S. deal may set a new precedent for navigating this complex field. Under the terms, a group of American inves…
S4
Waves of infrastructure Open Systems Open Source Open Cloud — – Renu Raman- Abhishek Singh – Renu Raman- Abhishek Singh- Audience – Renu Raman- Jensen Huang – Renu Raman- Michael …
S5
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Azeem Azhar: Good morning, and welcome to our panel discussion today on quantum computing, titled From High Performanc…
S6
Driving U.S. Innovation in Artificial Intelligence — 15. Jensen Huang – CEO and Founder, NVIDIA
S7
Nvidia CEO Jensen Huang claims AI hallucinations are solvable; AGI is 5 years away — CEO Jensen Huang addressed the press this week at Nvidia’s annual GTC developer conference, sharing his thoughts on AI hal…
S8
Waves of infrastructure Open Systems Open Source Open Cloud — The partner presentations demonstrated practical applications across diverse sectors. Lalit Bhatt from PharmEx presented…
S9
https://dig.watch/event/india-ai-impact-summit-2026/multistakeholder-partnerships-for-thriving-ai-ecosystems — I would like to introduce, sitting on my left, Dr. Barbel Koffler, who is the Parliamentary State Secretary at Germany’s…
S10
Open Forum #30 High Level Review of AI Governance Including the Discussion — – **Abhishek Singh** – Under-Secretary from the Indian Ministry of Electronics and Information Technology Abhishek Sing…
S11
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S12
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:I can take that, no worries. Thank you, Abhishek. The floor is yours. You can give your question. Yeah, t…
S13
Waves of infrastructure Open Systems Open Source Open Cloud — Hello, everyone. I’m Sandeep Kumar from Instant System. We are a Silicon Valley -based venture builder. We do not just b…
S14
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session where we are g…
S15
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S16
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S17
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S18
Keynote by Naveen Tewari Founder &amp; CEO, inMobi India AI Impact Summit — “the third is is a very disproportionate rate of growth of economic prosperity because of all the factors that the level…
S19
AI expected to reshape 89% of jobs across the workforce in 2026 — AI is set to transform the UK workforce in 2026, with nearly 9 out of 10 senior HR leaders expecting AI to reshape jobs, acc…
S20
The impact of AI on jobs and workforce — The ILO’s webinar was triggered by the recent impact of ChatGPT on our society and jobs. OpenAI’s ChatGPT, in particular…
S21
Strategy — The term AI in itself has morphed over the years since it was coined by John McCarthy et al at Dartmouth University in 1…
S22
NRIs MAIN SESSION: DATA GOVERNANCE — Artificial Intelligence depends on the data system, which has to be balanced
S23
Building the AI-Ready Future From Infrastructure to Skills — “Full stack, meaning hardware and software being able to deliver to the customer solutions, is what AMD is aiming for.”[…
S24
https://dig.watch/event/india-ai-impact-summit-2026/waves-of-infrastructure-open-systems-open-source-open-cloud — And it’s got thousands of steps. It takes about 120 days to make a chip. So $10 billion for 120 days producing a wafer, …
S25
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — First of all, Jacob, let me just say congratulations on this India and U.S. Paxilica signing today. This is certainly a …
S26
Driving Indias AI Future Growth Innovation and Impact — “Investment also includes energy infrastructure, because without energy, there is really no compute infrastructure you c…
S27
Artificial Intelligence &amp; Emerging Tech — Connectivity issues in developing countries for leveraging AI are also highlighted. This negative sentiment emphasizes t…
S28
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “When it comes to discovery, we need to develop foundation models for proteins, RNA, cellular circuits and systems biolo…
S29
From KW to GW Scaling the Infrastructure of the Global AI Economy — Hundreds of thousands of dollars. Real money. Right? Real money. So while you as a cloud provider might be thinking, and…
S30
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Bilel Jamoussi:Thank you very much for the great introduction and good afternoon. I really admire your energy to stay wi…
S31
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — And then the other one is a complementary side, which is working with the ecosystem, working with partners in Africa, in…
S32
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — I think the biggest challenge in not making AI aligned is that we will become products, not even consumers, right? We wa…
S33
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — 3. **Processing Architecture Shift**: The transition from CPU-based to GPU-based computing, fundamentally altering how c…
S34
Panel Discussion Data Sovereignty India AI Impact Summit — This example demonstrates what Gupta termed “partnership not dependence” – utilizing “the best of foreign technologies” …
S35
Panel Discussion: 01 — Unexpectedly, both speakers identified knowledge gaps and institutional capacity as more significant barriers than techn…
S36
Next-Gen Industrial Infrastructure / Davos 2025 — There are significant disparities in global investments in compute power, with the US and Asia leading, while Europe and…
S37
Laying the foundations for AI governance — – The four fundamental obstacles identified by the moderator: time, uncertainty, geopolitics, and power concentration R…
S38
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S39
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Key barriers to scaling include the need for high-quality data foundations, reimagined business processes, and comprehen…
S40
Keynote-Jeet Adani — Adani announced that “earlier this week, the chairman of the Adani Group made one of the most transformative announcemen…
S41
Keynote Adresses at India AI Impact Summit 2026 — The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India partner…
S42
AI for Good Innovation Factory Grand Finale 2025 — Infrastructure | Economic Predixion employs both a hardware-with-software strategy for independent hospital deployment …
S43
Fireside Chat Intel Tata Electronics CDAC &amp; Asia Group _ India AI Impact Summit — He advocated for a layered approach to sovereignty, focusing on controlling critical chokepoints whilst accepting strate…
S44
Indias Roadmap to an AGI-Enabled Future — This comment shifted the entire discussion from a hardware-centric view to an algorithm-centric one, giving hope that In…
S45
Secure Finance Risk-Based AI Policy for the Banking Sector — -India’s Strategic AI Positioning: Discussion centered on how India should position itself globally in AI governance, le…
S46
Agents of Change AI for Government Services &amp; Climate Resilience — Srinivas Tallapragada introduced an important distinction between strategic sovereignty and technical sovereignty that p…
S47
The Foundation of AI Democratizing Compute Data Infrastructure — The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI developme…
S48
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S49
WS #462 Bridging the Compute Divide a Global Alliance for AI — Elena emphasizes that sustainable collaborative models need credibility and trust to maintain participation and continue…
S50
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Aubra Anthony: Yeah, thanks, Yuping. And, yeah, a very auspicious time, really. I mentioned earlier some of the issues t…
S51
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Additionally, in an AI-driven economy, it will be necessary to take practical steps to implement policy considerations t…
S52
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Policy, regulation, and market rules were mentioned as important factors to address in order to limit the circulation of…
S53
Cyber Resilience Playbook for PublicPrivate Collaboration — – Within a given country, there is often intense competition for the promise of enormous investment by companies buildi…
S54
Research Publication No. 2014-7 March 17, 2014 — Improved interfaces are not only necessary at the data level, but also with respect to the normative spheres, at the val…
S55
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Artificial intelligence | Information and communication technologies for development Arun advocates for moving inferenc…
S56
https://dig.watch/event/india-ai-impact-summit-2026/ai-algorithms-and-the-future-of-global-diplomacy — For example, AI in healthcare is a fantastic opportunity for. Indo -German cooperation, there is fantastic data availabl…
S57
Waves of infrastructure Open Systems Open Source Open Cloud — Jensen announces an upcoming initiative to accelerate data processing, signalling a shift toward GPU‑based workloads.
S58
I NTRODUCTION — – Review and enhance the existing data governance framework to ensure comprehensive coverage of the data management life…
S59
African Union (AU) Data Policy Framework — Data processing roles as a form of security protection should be specified in policy by policymakers. Member States sho…
S60
Developing data capacities for policy makers and diplomats — Asked about the single most important capacity development need of policy makers and diplomats, panellists put awareness…
S61
Building Population-Scale Digital Public Infrastructure for AI — Good afternoon. We have an exciting panel discussion ahead. Let me start off with where Nandan stopped. Hundred pathways…
S62
Overview of AI policy in 10 jurisdictions — Summary: Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspire…
S63
Driving Indias AI Future Growth Innovation and Impact — Dr. Vivek Mohindra from Dell Technologies presented a comprehensive AI blueprint built upon three foundational pillars d…
S64
Building the AI-Ready Future From Infrastructure to Skills — And he said, Tim, India is software. This is what we do. He said, you’re going to be in front of the best people in the …
S65
The Virtual Worlds we want: Governance of the future web | IGF 2023 Open Forum #45 — Another major challenge highlighted is network latency in the context of virtual reality (VR) and extended reality (XR) …
S66
Artificial General Intelligence and the Future of Responsible Governance — The participants generally agreed that AGI represents AI systems capable of performing any human task at professional le…
S67
Waves of infrastructure Open Systems Open Source Open Cloud — Focus on Government of India initiatives in Education, Health, and Agriculture as primary market segments Proximal Clou…
S68
https://dig.watch/event/india-ai-impact-summit-2026/waves-of-infrastructure-open-systems-open-source-open-cloud — Thank you. what we’re doing in Proximal Cloud. The next phase we want to go into specifically what we are launching in t…
S69
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — First, India possesses “a huge talent pool of young, vibrant, intelligent, smart, educated people,” with one of the worl…
S70
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — 3. **Processing Architecture Shift**: The transition from CPU-based to GPU-based computing, fundamentally altering how c…
S71
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — -Infrastructure Constraints and Resource Management: Significant focus on three critical bottlenecks – power consumption…
S72
Next-Gen Industrial Infrastructure / Davos 2025 — There are significant disparities in global investments in compute power, with the US and Asia leading, while Europe and…
S73
AI Infrastructure and Future Development: A Panel Discussion — Compute Capacity and Demand Dynamics Efficiency improvements will accelerate rather than reduce infrastructure demand
S74
Building Public Interest AI Catalytic Funding for Equitable Compute Access — This comment introduced a completely different perspective on the compute scarcity problem, suggesting that technologica…
S75
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Ana Paula Assis: One example is what we are doing with ExxonMobil, for example, with their strategy and research divis…
S76
AI at 45W: Neuchips showcases energy-saving chips for LLMs — As global energy demand surges alongside AI growth, Neuchips is stepping up with energy-efficient solutions that deliver h…
S77
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Key barriers to scaling include the need for high-quality data foundations, reimagined business processes, and comprehen…
S78
Leveraging AI4All_ Pathways to Inclusion — The discussion revealed that many AI products remain stuck in pilot stage due to surrounding system challenges rather th…
S79
Building Population-Scale Digital Public Infrastructure for AI — -Scaling AI from Pilots to Population-Scale Implementation: A key challenge discussed is moving beyond impressive pilots…
S80
Keynote Adresses at India AI Impact Summit 2026 — The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India partner…
S81
Keynote-Jeet Adani — Adani announced that “earlier this week, the chairman of the Adani Group made one of the most transformative announcemen…
S82
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S83
AI in education: Leveraging technology for human potential — The tone is consistently optimistic and inspirational throughout, with Mills maintaining an enthusiastic and visionary a…
S84
Opening — Pace of technological progress is accelerating unpredictably
S85
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S86
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S87
Critical Infrastructure in the Digital Age: From Deep Sea Cables to Orbital Satellites — The discussion maintained a balanced tone that was simultaneously informative and concerning. It began with an education…
S88
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — The discussion maintained a formal, academic tone throughout, characteristic of a research presentation or conference se…
S89
AI and Data Driving India’s Energy Transformation for Climate Solutions — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than…
S90
The Foundation of AI Democratizing Compute Data Infrastructure — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s ideas rather than de…
S91
Presentation of outcomes to the plenary — The event showcased the power of collaboration and innovation.
S92
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — ## Concrete Examples of Multi-Stakeholder Success Hisham Ibrahim: I’ll also mention three quick ones, looking across my…
S93
WS #453 Leveraging Tech Science Diplomacy for Digital Cooperation — Need to showcase concrete examples and successes; AFNIC’s collaborative projects as examples of multi-stakeholder work
S94
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Deborah Rogers: I guess my closing remark would be that technology is a great enabler. It can actually be used to decreas…
S95
Building Indias Digital and Industrial Future with AI — The discussion maintained a collaborative and forward-looking tone throughout, with industry experts, regulators, and po…
S96
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion maintained an optimistic and forward-looking tone throughout, with speakers expressing confidence in Indi…
S97
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S98
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion maintained an optimistic and collaborative tone throughout, characterized by constructive problem-solving…
S99
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S100
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S101
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S102
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S103
Trusted Connections: Ethical AI in Telecom & 6G Networks — The discussion maintained a consistently optimistic and forward-looking tone throughout. Speakers expressed confidence i…
S104
AI Meets Agriculture Building Food Security and Climate Resilien — The discussion maintained an optimistic and collaborative tone throughout, characterized by visionary leadership and pra…
S105
Opening of the session — Emerging Technologies: Present both challenges and opportunities. Recent Initiatives: Provide a foundation for further pr…
S106
Opening of the session — relevance of technological innovation and the establishment of new norms to guarantee freedoms and protections online
S107
Tightening the interconnectedness of ICT, Digitalization and Industry 4.0 to accelerate Economic growth and industrialization in developing countries — Ana Paula NISHIO DE SOUSA: Ah, thank you. Yeah, so you’re absolutely right about, I would say, 35 years ago, maybe even 4…
S108
WS #51 Internet & SDG’s: Aligning the IGF & ITU’s Innovation Agenda — Jasmine Ko emphasised the need to prioritise and set achievable goals within limited resources. She suggested using desi…
S109
Empowering Women Entrepreneurs through Digital Trade and Training ( Global Innovation Forum) — The analysis identifies two remarkable entrepreneurs from Senegal, namely Awa Caba. Awa Caba is actively involved in the…
S110
Multistakeholder Partnerships for Thriving AI Ecosystems — This extends to evaluation and quality assurance, where the absence of regional AI evaluation hubs creates uncertainty a…
S111
Advancing rights-based digital governance through ROAM-X | IGF 2023 — In her closing remarks, Grigoryan reflected on the insightful discussion and offered speakers an opportunity for final t…
S112
Session — A pointed inconsistency is detected within international negotiations concerning the balance between human rights protec…
S113
Ad Hoc Consultation: Thursday 1st February, Morning session — This expression of gratitude not only served as a respectful acknowledgment of the session’s orderly progression but als…
S114
Blockchain and Biometric-based Digital Identity Solution — The session concluded without any questions from the audience, suggesting that the presentations were comprehensive and …
S115
Institute of AI Education marks significant step for responsible AI in schools — The Institute of AI Education was officially launched at York St John University, bringing together education leaders, teache…
S116
Teachers see AI as an educational tool — Teachers have long worried about ChatGPT enabling students to cheat, with its ability to produce essays and solve problems…
S117
AI cheating scandal at University sparks concern — Hannah, a university student, admits to using AI to complete an essay when overwhelmed by deadlines and personal illness. …
S118
Artificial intelligence (AI) and cyber diplomacy — Adil Suleyman: Thank you. Once again. Just one last question. What does it take for the African Union Commission’s one de…
S119
AI for Social Good Using Technology to Create Real-World Impact — Kiran Mazumdar-Shaw, chairperson of Biocon Group, presented perhaps the most visionary perspective on AI’s potential in …
S120
29, filed Jan. 22, 2010, at 9-10. — New broadband-enabled solutions are transforming how teachers and students use content and media. But copyright law must…
S121
Table of Contents — Information and Communication Technologies (ICT) applied to health and healthcare systems can increase their efficiency,…
S122
The Expanding Universe of Generative Models — Attempts to perform similar processes on images or videos have not been successful
S123
Thinking through Augmentation — He presents a theoretical scenario in which AI-driven vehicles might result in only 50,000 deaths internationally, a 90%…
S124
Fireside Conversation: 02 — So, usually in technological shifts of this type, we are overestimating. And the changes in the short term and overestim…
S125
From Human Potential to Global Impact: Qualcomm’s AI for All Workshop — Perhaps the most transformative aspect of the discussion centred on how AI will fundamentally reshape human-computer int…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
R
Renu Raman
11 arguments, 174 words per minute, 6044 words, 2073 seconds
Argument 1
AI will impact 95 % of work, driving massive productivity gains (Renu Raman)
EXPLANATION
Renu argues that artificial intelligence will affect the vast majority of jobs, delivering unprecedented productivity improvements across the economy. This broad impact will far exceed the gains seen during the SaaS era.
EVIDENCE
She notes that AI is expected to impact 95 % of work, describing it as a “blast radius” much larger than previous productivity waves and linking this to the need for far more computing resources [52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry surveys and ILO reports highlight that AI will reshape around 89-90% of jobs and deliver large productivity gains, supporting the claim of a 95% impact [S18][S19][S20].
MAJOR DISCUSSION POINT
Scale of AI-driven productivity
Argument 2
India needs ultra‑low‑cost, infant‑scale compute for population‑scale AI (Renu Raman)
EXPLANATION
Renu highlights the unique challenge of delivering very cheap, small‑scale compute resources that can serve India’s massive population. She frames this as a core problem for the Proximal Cloud initiative.
EVIDENCE
She states that India demands an “extremely low-cost infant-scale compute at population scale” and that this is a key focus for their work [113-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Waves of Infrastructure discussion emphasizes India’s demand for extremely low-cost, infant-scale compute at population scale, and notes the need for energy-linked compute infrastructure investments [S1][S26].
MAJOR DISCUSSION POINT
Cost‑effective compute for mass adoption
AGREED WITH
Lalit Bhatt
Argument 3
Successful AI systems require tight hardware‑software co‑design; “make your own hardware if you care about software” (Renu Raman)
EXPLANATION
Renu emphasizes that serious software developers should build their own hardware, and vice‑versa, to achieve optimal AI performance. This co‑design approach is presented as a strategic principle for innovation.
EVIDENCE
She says, “people who are serious about software should make their own hardware” and the corollary for hardware developers, underscoring the need for integrated design [34-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of hardware-software co-design and building own hardware is echoed in the Waves of Infrastructure talk and AMD’s full-stack strategy, while the complexity and cost of chip development are highlighted as challenges [S1][S23][S24].
MAJOR DISCUSSION POINT
Hardware‑software integration
Argument 4
Partnership with AMD provides a balanced CPU + GPU stack and high‑capacity memory for LLMs (Renu Raman)
EXPLANATION
Renu explains that Proximal Cloud is collaborating with AMD to combine x86 CPUs with a strong GPU roadmap and large memory capacities, enabling support for sizable language models. This partnership is positioned as a way to deliver a “happy blend” of compute resources.
EVIDENCE
She describes AMD’s CPU assets, GPU roadmap, and memory capacity of 256 GB HBM supporting 128-billion-parameter models, with plans to increase to 512 GB, facilitating single-node workloads for many customers [105-107].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Renu’s AMD partnership aligns with statements about AMD’s full-stack hardware-software approach and the “happy blend” of CPUs and GPUs discussed in the Waves of Infrastructure session [S1][S23].
MAJOR DISCUSSION POINT
Strategic hardware partnership
AGREED WITH
Jensen Huang
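The memory figures can be sanity-checked with back-of-the-envelope arithmetic: at 16-bit precision each parameter occupies 2 bytes, so 128 billion parameters take roughly 256 GB of weight memory. A minimal sketch (the 16-bit precision is an assumption not stated in the session; activations, KV cache, and framework overhead are ignored):

```python
def weight_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate weight-only memory footprint of a model in GB (decimal).

    bytes_per_param=2 assumes 16-bit (fp16/bf16) weights.
    """
    return params_billion * 1e9 * bytes_per_param / 1e9

# 128B parameters at 16-bit precision -> 256 GB, matching the cited HBM capacity
print(weight_memory_gb(128))   # 256.0
# doubling memory to 512 GB would fit a ~256B-parameter model at the same precision
print(weight_memory_gb(256))   # 512.0
```

This also shows why the planned jump to 512 GB matters: it keeps the next generation of larger models on a single node.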
Argument 5
Proximal’s platform supports education, health‑science, and research use cases in partnership with UC San Diego (Renu Raman)
EXPLANATION
Renu outlines collaborations with UC San Diego’s data‑science institute to provide compute resources for AI in education, health sciences, and research. The partnership leverages a supercomputing data center and aims to transform curricula and research capabilities.
EVIDENCE
She mentions the UC San Diego partnership, the data-science institute, AI for education, health sciences, and a supercomputing data center used for hardware-level work, compute kernels, and inference engines [108-110].
MAJOR DISCUSSION POINT
Sector‑specific AI deployments
AGREED WITH
Michael Dell
Argument 6
Target query latency of ~20 ms (Google) or ~120 ms for population‑scale services (Renu Raman)
EXPLANATION
Renu cites Google’s historical benchmark of 20 ms per query and proposes a more realistic 120 ms target for large‑scale Indian services. She argues that meeting such latency goals will drive massive infrastructure investment.
EVIDENCE
She references Google’s 20 ms goal and suggests a 120 ms benchmark for population-scale queries, linking this to the need for extensive compute resources [300-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 120 ms latency benchmark for population-scale services mirrors the target presented in the Waves of Infrastructure discussion, which references Google’s historic 20 ms goal [S1].
MAJOR DISCUSSION POINT
Performance benchmarks for large‑scale AI
AGREED WITH
Abhishek Singh
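Little's Law (requests in flight = arrival rate × latency) gives a quick feel for what these latency targets imply at population scale. The query volumes below are hypothetical illustrations, not figures from the session:

```python
def in_flight_requests(qps: float, latency_s: float) -> float:
    """Little's Law: average number of requests in flight = arrival rate x latency."""
    return qps * latency_s

# Hypothetical: 10 million queries/sec served at the proposed 120 ms target
# implies roughly 1.2 million requests in flight at any instant.
print(in_flight_requests(10_000_000, 0.120))
# The same load at Google's historical 20 ms target holds ~200,000 in flight,
# but each request must complete six times faster.
print(in_flight_requests(10_000_000, 0.020))
```

The in-flight count sets the scale of concurrent serving capacity that must be provisioned, which is why the latency benchmark translates directly into infrastructure investment.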
Argument 7
Sustained, multi‑decade investment is essential to build a sovereign AI hardware ecosystem (Renu Raman)
EXPLANATION
Renu stresses that long‑term, consistent funding is required to develop a domestic AI hardware stack, similar to historic investments in semiconductors. She frames this as a prerequisite for India’s AI sovereignty.
EVIDENCE
She discusses the need for multi-decade investment, referencing historical cycles and the importance of continuous support for building AI hardware capabilities [315-322].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Long-term investment, including energy infrastructure for compute, is emphasized as critical for building AI-ready hardware ecosystems [S26].
MAJOR DISCUSSION POINT
Long‑term funding for AI infrastructure
AGREED WITH
Abhishek Singh
DISAGREED WITH
Abhishek Singh
Argument 8
Scaling to 10 GW of AI‑ready power could unlock $250 B of hardware spend and create a domestic semiconductor supply chain (Renu Raman)
EXPLANATION
Renu quantifies the economic impact of building 10 GW of AI‑ready power in India, estimating a $250 billion hardware market that would foster a local semiconductor ecosystem. She uses this figure to illustrate the scale of opportunity.
EVIDENCE
She states that each gigawatt requires $25 billion of compute, memory, network, and storage, so 10 GW would represent roughly $250 billion of hardware spend, enabling a domestic supply chain [316-319].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI infrastructure scaling note that 10 GW of AI-ready power could drive on the order of $250 B in hardware spend, enabling a domestic semiconductor supply chain [S29].
MAJOR DISCUSSION POINT
Economic potential of AI‑ready power
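The $250 billion figure follows directly from the per-gigawatt estimate quoted in the session; a trivial sketch of the arithmetic:

```python
def hardware_spend_usd_b(gigawatts: float, usd_b_per_gw: float = 25.0) -> float:
    """Total hardware spend implied by AI-ready power capacity.

    $25B per GW covers compute, memory, network, and storage (figure cited
    in the session).
    """
    return gigawatts * usd_b_per_gw

print(hardware_spend_usd_b(10))  # 250.0 -> the ~$250B opportunity cited
```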
Argument 9
Open‑source models will play a role analogous to Linux in the next “distributed computing 3.0” era (Renu Raman)
EXPLANATION
Renu predicts that open‑source AI models will democratize distributed computing much like Linux did for operating systems, driving a new wave of innovation. She positions open models as a catalyst for the upcoming era.
EVIDENCE
She notes that open-source models will have a role similar to Linux, enabling new ways to build distributed systems and fostering ecosystem growth [270-272].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions on open-source models driving a new distributed computing era and being comparable to Linux’s impact support this view [S30][S1].
MAJOR DISCUSSION POINT
Open models as enablers of distributed computing
Argument 10
Historical shifts (hypervisors, Linux) illustrate how abstraction layers enable ecosystem growth (Renu Raman)
EXPLANATION
Renu recounts past technology transitions—hypervisors, Linux—that created abstraction layers, allowing rapid ecosystem expansion. She uses these examples to argue that similar layers will emerge with AI models.
EVIDENCE
She references the role of hypervisors (KVM, VMware) and Linux in past shifts, showing how abstraction layers removed middle-layer software costs and enabled massive scaling [280-286].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Waves of Infrastructure narrative cites hypervisors and Linux as past abstraction layers that spurred ecosystem expansion, illustrating the point [S1].
MAJOR DISCUSSION POINT
Role of abstraction in tech evolution
Argument 11
Models act as a new abstraction layer separating compute needs from higher‑level applications (Renu Raman)
EXPLANATION
Renu describes AI models as a fresh abstraction that decouples underlying hardware requirements from application logic, similar to how virtual machines and operating systems functioned previously. This layer is expected to spur both closed and open innovation.
EVIDENCE
She states that “models is a new abstraction layer that provides a higher degree of innovation” and compares it to hypervisors and operating systems [284-287].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same Waves of Infrastructure discussion describes AI models as a new abstraction layer that decouples hardware requirements from application logic [S1].
MAJOR DISCUSSION POINT
Models as a computing abstraction
J
Jensen Huang
2 arguments, 149 words per minute, 86 words, 34 seconds
Argument 1
Data processing workloads still run on CPUs, highlighting a gap for acceleration (Jensen Huang)
EXPLANATION
Jensen points out that the majority of structured and unstructured data processing—such as SQL queries in Databricks, Snowflake, and Oracle—still relies on traditional CPUs. This reliance signals a need for accelerated processing solutions.
EVIDENCE
He lists data processing platforms (Databricks, Snowflake, Oracle) and notes they “still completely runs on CPUs” [97-104].
MAJOR DISCUSSION POINT
CPU‑centric data processing
AGREED WITH
Renu Raman
Argument 2
Accelerated data processing must move beyond CPU‑only architectures (Jensen Huang)
EXPLANATION
Building on his earlier point, Jensen argues that future data‑processing initiatives must incorporate specialized accelerators rather than relying solely on CPUs. This shift is essential to meet growing performance demands.
EVIDENCE
He emphasizes that “very soon we’re going to announce a very big initiative of accelerated data processing” because current workloads are CPU-bound [97-104].
MAJOR DISCUSSION POINT
Need for hardware acceleration
AGREED WITH
Renu Raman
M
Michael Dell
1 argument, 126 words per minute, 73 words, 34 seconds
Argument 1
On‑prem AI factories enable enterprises to keep data local and cut costs (Michael Dell)
EXPLANATION
Michael describes AI factories that allow companies to run AI workloads on‑premise, keeping data where it is generated and reducing the expense of moving data to the cloud. This model is presented as a way to lower overall AI costs for enterprises.
EVIDENCE
He notes that they have delivered “over 3,000 of these AI factories” that bring AI to the data rather than the data to AI, addressing the large amount of on-premise data [118].
MAJOR DISCUSSION POINT
Local AI deployment for cost reduction
AGREED WITH
Renu Raman
L
Lalit Bhatt
3 arguments, 123 words per minute, 1070 words, 519 seconds
Argument 1
Local compute lowers inference cost for agriculture AI (Lalit Bhatt)
EXPLANATION
Lalit explains that placing compute close to agricultural sensors and imaging data reduces the cost of running inference, which is critical for price‑sensitive farmers. This approach improves efficiency across the entire data‑to‑insight pipeline.
EVIDENCE
He mentions that “we are looking into technologies where we can reduce our cost… it is very difficult to ask a lot of money from the farmer” and that local compute helps keep inference costs low for agriculture applications [144-146].
MAJOR DISCUSSION POINT
Cost‑effective AI for farming
AGREED WITH
Renu Raman
Argument 2
Divium provides a quality‑first inference layer that selects the best model per dollar and automates model upgrades (Lalit Bhatt)
EXPLANATION
Lalit describes Divium’s platform, which evaluates model quality against cost, routes queries to the optimal model, and continuously updates to newer models without breaking production. This ensures both performance and cost efficiency for enterprises.
EVIDENCE
He outlines Divium’s capabilities: measurable evaluations, model selection per dollar, automated upgrades, and single-API access, citing deployments that cut costs by over 60% for a travel aggregator and 30% for an e-pharmacy [170-179].
MAJOR DISCUSSION POINT
Intelligent model routing and cost optimization
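A "best model per dollar" policy can be sketched as picking the cheapest model that clears a task-specific quality bar. This is an illustrative approximation, not Divium's actual algorithm; all model names, scores, and prices below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    quality: float        # task-specific eval score in [0, 1] (hypothetical)
    usd_per_mtok: float   # price per million tokens (hypothetical)

def route(options: list, min_quality: float) -> ModelOption:
    """Pick the cheapest model that clears the quality bar,
    approximating a 'best model per dollar' routing policy."""
    eligible = [m for m in options if m.quality >= min_quality]
    if not eligible:
        raise ValueError("no model meets the quality bar")
    return min(eligible, key=lambda m: m.usd_per_mtok)

catalog = [
    ModelOption("frontier-large", 0.95, 15.0),
    ModelOption("mid-tier", 0.88, 3.0),
    ModelOption("small-open", 0.80, 0.4),
]
print(route(catalog, min_quality=0.85).name)  # mid-tier
```

Automated upgrades then amount to refreshing the catalog and re-running the same evaluation, which is how large cost reductions can be realised without code changes in the calling application.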
Argument 3
Lack of standardized evaluation and unpredictable costs hinder Gen‑AI pilot production (Lalit Bhatt)
EXPLANATION
Lalit points out that 90 % of generative AI pilots fail to reach production because quality metrics are undefined and costs can spike dramatically. These challenges make it difficult for enterprises to scale AI initiatives.
EVIDENCE
He notes that “quality is undefined” and “costs are unpredictable,” with price variations of 10-50× and cost spikes when traffic increases, leading to pilot failures [156-165].
MAJOR DISCUSSION POINT
Barriers to AI pilot scaling
A
Abhishek Singh
4 arguments, 157 words per minute, 955 words, 364 seconds
Argument 1
Custom silicon can offload LLM inference, improving performance and efficiency (Abhishek Singh)
EXPLANATION
Abhishek states that using specialized chips to run large language models can accelerate inference and reduce power consumption compared to general‑purpose CPUs/GPUs. This custom silicon approach is presented as a key performance enhancer.
EVIDENCE
He explains that “we offload the large language models to specific chips and custom silicon” to achieve better inference performance [266-268].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
While custom silicon can accelerate LLM inference, industry analyses highlight the high cost, long lead times, and defect challenges of chip development, providing a counterpoint to the claim [S24].
MAJOR DISCUSSION POINT
Specialized hardware for LLMs
Argument 2
Semiconductor fab data analytics require edge AI to handle petabytes of real‑time data (Abhishek Singh)
EXPLANATION
Abhishek describes how semiconductor manufacturing generates massive amounts of data that must be processed in real time at the edge, requiring AI to classify defects and improve yields. He argues that centralized servers cannot meet these latency and bandwidth needs.
EVIDENCE
He details that fabs generate “7 petabytes of data,” need real-time defect analysis, and require edge computing to process and feed insights back into design, highlighting the scale and speed requirements [324-334].
MAJOR DISCUSSION POINT
Edge AI for fab data processing
Argument 3
Current funding gaps (e.g., 20 crore for startups vs. $100 M for single engineers) risk under‑investment in deep‑tech ventures (Abhishek Singh)
EXPLANATION
Abhishek compares the relatively modest government funding for Indian startups with the massive investments made by global tech firms in individual engineers, arguing that the disparity could hinder deep‑tech development in India.
EVIDENCE
He cites the mismatch between “20 crore of fund for startups” and “$100 M for a single engineer,” suggesting that such gaps threaten sustained innovation [343-352].
MAJOR DISCUSSION POINT
Funding disparity for deep‑tech
AGREED WITH
Renu Raman
Argument 4
Achieving sub‑second, low‑cost query responses for 1.5 billion users is a long‑term engineering challenge (Abhishek Singh)
EXPLANATION
Abhishek raises the question of whether India can deliver sub‑second query latency at a very low monthly cost for its massive population, indicating that meeting such performance at scale will require significant engineering breakthroughs.
EVIDENCE
He asks whether sub-millisecond or sub-second responses can be provided to 1.5 billion users at a cost of about 200 rupees per month, emphasizing the difficulty of the task [292-297].
MAJOR DISCUSSION POINT
Scalable low‑latency AI services
AGREED WITH
Renu Raman
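The affordability question can be framed as a per-query budget: a flat fee of about 200 rupees per month, divided by monthly query volume, bounds what each inference may cost. The usage figures below are hypothetical assumptions, not numbers from the session:

```python
def cost_per_query_inr(monthly_budget_inr: float = 200,
                       queries_per_day: float = 20) -> float:
    """Implied per-query budget when a user pays a flat monthly fee.

    queries_per_day is a hypothetical usage assumption; 30-day month.
    """
    return monthly_budget_inr / (queries_per_day * 30)

print(cost_per_query_inr())                      # ~0.33 rupees per query
print(cost_per_query_inr(queries_per_day=100))   # ~0.07 rupees per query
```

At heavier usage the per-query budget falls well below a tenth of a rupee, which illustrates why Abhishek frames this as a long-term engineering challenge rather than an incremental optimisation.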
S
Sandeep Kumar
1 argument, 158 words per minute, 769 words, 290 seconds
Argument 1
Venture‑builder model solves AI challenges such as hallucinations, disambiguation, data privacy, and reliability (Sandeep Kumar)
EXPLANATION
Sandeep outlines how his venture‑builder approach addresses common AI pitfalls: eliminating hallucinations, improving disambiguation, ensuring data privacy at the object level, and guaranteeing reliability for financial‑grade applications.
EVIDENCE
He lists solutions for hallucinations (99 % reliability), disambiguation, data privacy at the raw/object level, and reliability for financial transactions, noting these have been implemented in their systems [207-226].
MAJOR DISCUSSION POINT
Comprehensive AI risk mitigation
A
Audience
1 argument, 136 words per minute, 428 words, 188 seconds
Argument 1
India’s AI advantage lies in software, on‑prem AI, and productivity gains rather than chip design alone (Audience)
EXPLANATION
The audience member argues that India should focus on software and on‑premise AI solutions to achieve productivity improvements, as chip design will take longer to mature. This perspective frames software as the primary lever for AI leadership.
EVIDENCE
He states that “India has to be in software, it has to be in AI… the chip building-wise is going to take some time” and emphasizes the importance of on-premise data and AI solutions [244-247].
MAJOR DISCUSSION POINT
Strategic focus on software for AI leadership
Agreements
Agreement Points
Both speakers stress that current data‑processing and AI workloads are dominated by CPUs and that accelerated, heterogeneous compute (CPU + GPU) is needed to meet future performance demands.
Speakers: Renu Raman, Jensen Huang
Partnership with AMD provides a balanced CPU + GPU stack and high‑capacity memory for LLMs (Renu Raman)
Data processing workloads still run on CPUs, highlighting a gap for acceleration (Jensen Huang)
Accelerated data processing must move beyond CPU‑only architectures (Jensen Huang)
Renu notes a strategic partnership with AMD to combine x86 CPUs and powerful GPUs for AI workloads [105-107], while Jensen points out that major data-processing platforms still rely exclusively on CPUs and calls for accelerated solutions [97-104]. Both converge on the need for heterogeneous, accelerated compute beyond CPUs.
POLICY CONTEXT (KNOWLEDGE BASE)
The need for heterogeneous CPU-GPU acceleration is echoed in Jensen Huang’s announcement of a new initiative to speed up data-processing workloads toward GPU-based solutions [S57] and aligns with broader calls for edge-centric heterogeneous compute architectures [S55].
On‑premise or local compute is essential to keep data sovereign, reduce costs and improve performance.
Speakers: Renu Raman, Michael Dell
Proximal’s platform supports education, health‑science, and research use cases in partnership with UC San Diego (Renu Raman)
On‑prem AI factories enable enterprises to keep data local and cut costs (Michael Dell)
Renu describes Proximal’s goal of bringing compute close to data, making it sovereign and nearer to memory and business needs [130-133]. Michael Dell describes AI factories that bring AI to the data rather than moving data to AI, reducing costs [118-119]. Both advocate for local compute to keep data on-premise and lower expenses.
POLICY CONTEXT (KNOWLEDGE BASE)
India’s layered sovereignty framework emphasizes controlling critical compute chokepoints while accepting strategic dependencies, highlighting the importance of on-premise resources for data sovereignty and cost efficiency [S43]; the distinction between strategic and technical sovereignty further reinforces this priority [S46].
Deploying compute close to the data source lowers inference cost for domain‑specific applications.
Speakers: Renu Raman, Lalit Bhatt
India needs ultra‑low‑cost, infant‑scale compute for population‑scale AI (Renu Raman)
Local compute lowers inference cost for agriculture AI (Lalit Bhatt)
Renu emphasizes the need for extremely low-cost, infant-scale compute to serve India’s massive population [113-115]. Lalit explains that placing compute near agricultural sensors reduces inference cost, which is critical for price-sensitive farmers [144-146]. Both highlight local compute as a cost-reduction strategy for specific sectors.
POLICY CONTEXT (KNOWLEDGE BASE)
Advocacy for moving inference to the edge to reduce reliance on large data centres supports the claim that proximity to data lowers inference costs, as described in the heterogeneous compute for democratizing AI discussion [S55] and the strategic-technical sovereignty perspective [S46].
Achieving very low query latency at massive scale is a central technical challenge.
Speakers: Renu Raman, Abhishek Singh
Target query latency of ~20 ms (Google) or ~120 ms for population‑scale services (Renu Raman)
Achieving sub‑second, low‑cost query responses for 1.5 billion users is a long‑term engineering challenge (Abhishek Singh)
Renu cites Google’s 20 ms benchmark and proposes a 120 ms target for Indian population-scale services [300-304]. Abhishek asks whether sub-second (or sub-millisecond) response times can be delivered to billions of users at low cost [292-297]. Both converge on the importance of ultra-low latency at scale.
POLICY CONTEXT (KNOWLEDGE BASE)
Low-latency requirements for massive-scale AI services are reflected in the VR/XR latency benchmarks of sub-20 ms for interactive rendering [S65] and in discussions on building population-scale digital public infrastructure for AI that stress latency as a key metric [S61].
Building a sovereign AI hardware ecosystem requires sustained, multi‑decade investment and adequate funding mechanisms.
Speakers: Renu Raman, Abhishek Singh
Sustained, multi‑decade investment is essential to build a sovereign AI hardware ecosystem (Renu Raman)
Current funding gaps (e.g., 20 crore for startups vs. $100 M for single engineers) risk under‑investment in deep‑tech ventures (Abhishek Singh)
Renu stresses the necessity of long-term, continuous funding to develop AI-ready hardware infrastructure [315-322]. Abhishek highlights a mismatch between modest government startup funds and massive private investments, warning of under-investment risks [343-352]. Both agree on the critical role of sustained financing.
POLICY CONTEXT (KNOWLEDGE BASE)
Sustained multi-decade investment is highlighted in the Dell Technologies AI blueprint that earmarks long-term funding for compute infrastructure and energy systems [S63], while policy papers on collaborative financing models and equipment financing schemes underscore the need for dedicated funding mechanisms [S49][S51].
Similar Viewpoints
Both emphasize that software capabilities, especially when coupled with appropriate hardware, are the primary lever for India’s AI leadership, while chip design alone will take longer to mature. Renu argues for co‑design of hardware and software to achieve performance [34-36], and the audience stresses focusing on software and on‑prem AI solutions [244-247].
Speakers: Renu Raman, Audience
Successful AI systems require tight hardware‑software co‑design; “make your own hardware if you care about software” (Renu Raman)
India’s AI advantage lies in software, on‑prem AI, and productivity gains rather than chip design alone (Audience)
Unexpected Consensus
Recognition by a GPU‑centric CEO (Jensen Huang) that the majority of data‑processing workloads remain CPU‑bound and need acceleration, aligning with Renu’s call for heterogeneous compute.
Speakers: Renu Raman, Jensen Huang
Partnership with AMD provides a balanced CPU + GPU stack and high‑capacity memory for LLMs (Renu Raman) Data processing workloads still run on CPUs, highlighting a gap for acceleration (Jensen Huang)
Despite Jensen Huang leading a company known for GPU acceleration, he acknowledges that most data‑processing still runs on CPUs and calls for accelerated solutions. Renu simultaneously promotes a CPU + GPU blend via AMD partnership, showing an unexpected alignment between a GPU leader and a hardware‑software co‑design advocate.
POLICY CONTEXT (KNOWLEDGE BASE)
Jensen Huang publicly acknowledged that most data-processing workloads remain CPU-bound and require GPU acceleration, confirming the speakers’ view [S57].
Overall Assessment

The discussion reveals strong convergence on several fronts: the necessity of heterogeneous, accelerated compute; the strategic importance of on‑premise/local compute for cost and data sovereignty; the critical challenge of ultra‑low latency at population scale; and the need for long‑term, well‑funded investment to build a sovereign AI ecosystem. Participants also share the view that software innovation, supported by appropriate hardware, is the immediate lever for India’s AI leadership.

High consensus across technical, economic, and policy dimensions, indicating a shared understanding that India’s AI future hinges on integrated hardware‑software solutions, local compute deployment, latency performance, and sustained financing. This consensus suggests coordinated action among industry, academia, and policymakers could effectively advance India’s AI infrastructure and ecosystem.

Differences
Different Viewpoints
India’s AI leadership focus – hardware infrastructure versus software/on‑prem AI solutions
Speakers: Renu Raman, Audience
India needs ultra‑low‑cost infant‑scale compute at population scale (Renu Raman) India should focus on software and on‑prem AI, chip building will take time (Audience)
Renu emphasizes building domestic, low-cost compute hardware (including an AMD CPU+GPU partnership) as essential for AI sovereignty [113-115][105-107], while the audience member argues that India’s advantage lies in software and on-prem AI, stating that chip design will take longer and the focus should be on software development [244-247].
POLICY CONTEXT (KNOWLEDGE BASE)
The India AI Impact Summit advocated a software-first, stack-centric approach to AI sovereignty, contrasting with hardware-centric ambitions, and the AGI roadmap further shifted emphasis toward algorithms over infrastructure [S43][S44]; Dell’s blueprint also stresses compute infrastructure, illustrating the ongoing debate [S63][S64].
Adequacy of funding for deep‑tech AI hardware ecosystem
Speakers: Renu Raman, Abhishek Singh
Sustained, multi‑decade investment is essential to build a sovereign AI hardware ecosystem (Renu Raman) Current funding gaps (20 crore for startups vs $100 M for single engineers) risk under‑investment in deep‑tech ventures (Abhishek Singh)
Renu asserts that long-term public and private investment will support the development of AI-ready power and a domestic semiconductor supply chain, citing a $250 billion hardware market from 10 GW of power [315-322][316-319], whereas Abhishek highlights a stark mismatch between modest government startup funds and massive private investments elsewhere, questioning whether sufficient capital will be available [343-352].
POLICY CONTEXT (KNOWLEDGE BASE)
Concerns about funding adequacy are reflected in analyses of global compute-divide financing models that call for credible incentive structures [S49], African policy recommendations for equipment financing schemes [S51], and observations on reduced foreign assistance affecting AI projects [S50].
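The "$250 billion from 10 GW" figure cited above implies a capital intensity of roughly $25 of hardware per watt of AI-ready power. A quick back-of-envelope sketch makes that arithmetic explicit; the per-watt figure is inferred from the speakers' numbers, not stated in the session:

```python
# Back-of-envelope check of the "$250 B for 10 GW" figure cited in the session.
# The capex intensity (dollars of hardware per watt of AI-ready power) is an
# assumption derived from the speakers' numbers, not an official estimate.

def hardware_market_usd(power_gw: float, usd_per_watt: float = 25.0) -> float:
    """Estimate total AI hardware spend for a given power build-out."""
    watts = power_gw * 1e9          # 1 GW = 1e9 W
    return watts * usd_per_watt     # total spend in USD

# 10 GW at ~$25/W reproduces the ~$250 B market size mentioned by Renu Raman.
print(f"${hardware_market_usd(10) / 1e9:.0f} B")  # → $250 B
```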
Target latency for population‑scale AI services
Speakers: Renu Raman, Abhishek Singh
Aim for ~120 ms query latency for large‑scale Indian services (Renu Raman) Question feasibility of sub‑second or sub‑millisecond responses for 1.5 billion users at low cost (Abhishek Singh)
Renu proposes a realistic benchmark of 120 ms per query, referencing Google’s 20 ms goal as a historical target [300-304], while Abhishek asks whether sub-second (or even sub-millisecond) response times can be delivered to billions of users at a low monthly cost, indicating a more ambitious performance expectation [292-297].
POLICY CONTEXT (KNOWLEDGE BASE)
Target latency for population-scale AI services is informed by VR/XR latency targets of 10-20 ms [S65] and by the population-scale digital public infrastructure discussions that identify latency as a critical performance indicator [S61].
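The feasibility question can be made concrete with rough arithmetic. In the sketch below, the per-user query volume and the peak-load factor are illustrative assumptions; only the 120 ms target and the 1.5 billion users come from the discussion:

```python
# Rough sizing sketch for the population-scale latency debate. The figures for
# queries per user per day and the peak factor are illustrative assumptions,
# not numbers from the session.

USERS = 1.5e9
QUERIES_PER_USER_PER_DAY = 10      # assumption
PEAK_FACTOR = 3                    # peak load vs. daily average (assumption)
LATENCY_S = 0.120                  # Renu Raman's proposed 120 ms target

avg_qps = USERS * QUERIES_PER_USER_PER_DAY / 86_400   # seconds per day
peak_qps = avg_qps * PEAK_FACTOR
# Little's law: concurrent in-flight queries = arrival rate x latency
concurrent = peak_qps * LATENCY_S

print(f"average ~{avg_qps:,.0f} QPS, peak ~{peak_qps:,.0f} QPS")
print(f"~{concurrent:,.0f} queries in flight at 120 ms each")
```

Even under these modest assumptions, the system must sustain hundreds of thousands of queries per second, which is why the panel treats latency and per-query cost as coupled engineering targets.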
Approach to accelerate data processing workloads
Speakers: Jensen Huang, Renu Raman
Data processing still runs on CPUs; need accelerated data processing (Jensen Huang) Build a balanced CPU+GPU stack with AMD to handle AI workloads (Renu Raman)
Jensen points out that major data-processing platforms still rely entirely on CPUs and announces a forthcoming accelerated data-processing initiative [97-104], whereas Renu emphasizes a ‘happy blend’ of CPUs and GPUs through an AMD partnership to support AI workloads, suggesting integration rather than a shift solely to accelerators [105-107].
POLICY CONTEXT (KNOWLEDGE BASE)
Accelerating data-processing workloads through GPU-centric strategies is supported by Jensen Huang’s initiative to shift workloads toward GPU acceleration [S57] and by broader calls for heterogeneous compute to democratize AI access [S55].
Unexpected Differences
Hardware‑centric versus software‑centric AI strategy for India
Speakers: Renu Raman, Audience
India needs ultra‑low‑cost infant‑scale compute (Renu Raman) India should prioritize software and on‑prem AI, chip design will take time (Audience)
While both speakers are part of the same broader initiative, they diverge sharply on the primary lever for India’s AI leadership. Renu’s hardware‑focused roadmap contrasts with the audience’s software‑first stance, an unexpected split given their shared goal of AI advancement.
POLICY CONTEXT (KNOWLEDGE BASE)
The hardware-centric versus software-centric strategic split mirrors the layered sovereignty recommendation to focus on software stacks [S43] and the explicit statement that India’s strength lies in software rather than hardware [S64], underscoring the policy debate.
Overall Assessment

The discussion reveals moderate disagreement centered on strategic priorities (hardware vs software), funding adequacy, performance targets, and technical approaches to acceleration. Participants share a common vision of AI‑driven growth but differ on how to achieve it, reflecting divergent perspectives on investment, infrastructure, and feasibility.

Moderate – while there is consensus on the importance of AI and cost reduction, the differing views on hardware investment, latency goals, and funding mechanisms could lead to fragmented efforts unless reconciled, potentially slowing coordinated progress toward India’s AI ecosystem.

Partial Agreements
All participants agree that lowering AI inference and data‑processing costs is crucial for widespread adoption. Renu proposes ultra‑low‑cost, infant‑scale compute and a balanced CPU‑GPU stack [113-115][105-107]; Lalit stresses edge compute to keep farmer costs low [144-146]; Sandeep describes a venture‑builder approach that mitigates AI risks and improves cost efficiency [207-226]; Jensen calls for dedicated accelerators to move beyond CPU‑bound workloads [97-104].
Speakers: Renu Raman, Lalit Bhatt, Sandeep Kumar, Jensen Huang
Need to reduce AI inference and data‑processing costs (Renu Raman) Local compute lowers inference cost for agriculture AI (Lalit Bhatt) Venture‑builder model solves AI challenges including cost (Sandeep Kumar) Accelerated data processing needed to improve performance (Jensen Huang)
All agree on the importance of keeping AI workloads close to the data source. Renu describes Proximal’s model of bringing compute nearer to data and memory [130-133]; Michael highlights AI factories that keep data on‑prem to cut costs [118]; the audience member emphasizes that most data resides on‑prem and advocates software solutions that operate there [244-247].
Speakers: Renu Raman, Michael Dell, Audience
Compute should be brought close to data/on‑prem (Renu Raman) AI factories bring AI to the data, reducing costs (Michael Dell) 90 %+ data is on‑prem; focus on on‑prem AI solutions (Audience)
Takeaways
Key takeaways
- AI is expected to affect ~95% of work, creating a massive demand for compute and productivity gains.
- India requires ultra‑low‑cost, infant‑scale compute infrastructure to serve its large population at scale.
- Current data‑processing workloads are CPU‑centric; accelerating these workloads with GPUs or custom silicon is essential.
- On‑premise AI factories (Proximal Cloud) enable data locality, reduce latency, and lower inference costs for enterprises.
- Hardware‑software co‑design is critical; the partnership with AMD provides a balanced CPU + GPU stack with high‑capacity memory for LLMs.
- Custom silicon can offload LLM inference, improving performance and efficiency (as highlighted by ZetaVault).
- Key application domains demonstrated: agriculture (PharmEx), education and health sciences (UC San Diego), semiconductor fab analytics, and enterprise AI agents.
- Model selection and inference cost/quality are major challenges; Divium offers a quality‑first inference layer that auto‑optimizes model choice and upgrades.
- Latency benchmarks (≈20 ms for Google, ≈120 ms proposed for population‑scale services) are crucial targets; sub‑second response at low cost remains a long‑term engineering goal.
- Sustained, multi‑decade investment (potentially $250 B for 10 GW of AI‑ready power) is needed to build a sovereign Indian AI hardware ecosystem.
- Open‑source models will play a role analogous to Linux in the next “distributed computing 3.0” era, co‑existing with closed models.
- Standardized evaluation metrics for Gen‑AI pilots are lacking, leading to unpredictable costs and low production rates.
Resolutions and action items
- Proximal Cloud will continue its partnership with AMD to deliver a balanced CPU/GPU platform with high‑capacity memory for LLM inference.
- Proximal will work with UC San Diego and other Indian partners (e.g., CDAC, VVDN) to develop infant‑scale compute nodes for education, health, and agriculture use cases.
- Divium will be offered as the inference layer for partners, with ongoing deployments that have already demonstrated 30‑60% cost reductions.
- Instant System (venture‑builder) will support startups in addressing hallucinations, disambiguation, data‑privacy, and reliability challenges.
- Renu invited interested parties to engage for further collaboration; follow‑up meetings are implied but not formally scheduled.
- Collaboration with Indian OEMs (VVDN, Sanmina) is planned to develop domestic chassis and board manufacturing capabilities.
Unresolved issues
- How to achieve sub‑second (or ~120 ms) query latency for 1.5 billion users at a price point of ~₹200 per month.
- Securing sufficient deep‑tech venture capital and government funding to bridge the gap between modest startup grants and the billions needed for large‑scale AI hardware development.
- A detailed roadmap and financing plan for building 1 GW to 10 GW of AI‑ready power capacity in India.
- Establishing industry‑wide standardized metrics for evaluating model quality, cost, and suitability across diverse use cases.
- Whether India will produce globally competitive AI‑hardware companies comparable to NVIDIA, SAP, or Palantir, and what business models will enable that.
Suggested compromises
- Adopt a mixed CPU + GPU architecture (via the AMD partnership) rather than relying solely on GPU or CPU solutions.
- Support both closed‑source and open‑source model ecosystems, allowing innovation on both fronts.
- Leverage existing hyperscaler hardware while simultaneously nurturing domestic OEM and silicon‑design capabilities, rather than waiting for a fully indigenous supply chain.
- Use incremental, population‑scale latency targets (e.g., 120 ms) as a stepping stone toward the ideal 20 ms benchmark.
Thought Provoking Comments
We, as humanity, underestimate what can be done in 10 years but overestimate what can be done in two years. The big technology shifts happen every 30, 15, 7 years.
Sets a macro‑historical perspective that frames the entire discussion about long‑term planning versus short‑term hype, reminding listeners to think beyond immediate product cycles.
Established the thematic backdrop for the talk, prompting later speakers (e.g., Jensen Huang, Arya Bhattacharjee) to position their initiatives as part of a longer‑term wave rather than a fleeting trend.
Speaker: Renu Raman
The similarity between the microprocessor era of the 90s and today’s foundation‑model era: only a handful of ~150‑person teams can build world‑class models, and it now costs billions of dollars in GPUs.
Draws a concrete parallel that highlights the concentration of talent and capital required for cutting‑edge AI, making the abstract ‘model race’ tangible.
Shifted the conversation from generic market optimism to a realistic assessment of barriers, leading participants like Lalit Bhatt and Bharat to stress the need for specialized platforms (Divium) and cost‑effective inference solutions.
Speaker: Renu Raman
AI will impact 95 % of work – the blast radius is far larger than the SaaS era’s productivity gains.
Quantifies AI’s potential economic impact, turning a vague promise into a measurable claim that justifies massive infrastructure investment.
Prompted the audience to consider scale (population‑scale compute) and set the stage for later discussions on India’s 10 GW power target and the $250 B hardware spend.
Speaker: Renu Raman
Data processing (structured & unstructured) still runs on CPUs. We will soon announce a big initiative of accelerated data processing.
Highlights a blind spot in the AI hype – the massive, CPU‑bound data‑processing workload that will need acceleration, thereby expanding the scope of required hardware beyond GPUs.
Triggered Renu’s explanation of a “happy blend” of CPUs and GPUs with AMD, and reinforced the narrative that AI infrastructure must serve both traditional data workloads and new LLM workloads.
Speaker: Jensen Huang
90 % of Gen‑AI pilots never make it to production. The three killers are undefined quality, unpredictable costs, and constantly shifting model selection.
Identifies the practical, operational failure points that most enterprises face, moving the conversation from visionary tech to actionable challenges.
Led to a deeper dive into Divium’s solution (model‑quality evaluation, cost‑optimal routing) and sparked interest from the audience about real‑world deployment, influencing the subsequent Q&A focus.
Speaker: Lalit Bhatt (Divium)
India’s future in the AI/semiconductor wave will be driven by software and AI, not by building chips from scratch. On‑prem AI can improve fab productivity by 25 % – worth $10 M per day on a 7 nm line.
Provides a concrete national‑strategy viewpoint, aligning the discussion with India’s policy priorities and emphasizing immediate, high‑impact use cases over long‑term chip design.
Shifted the dialogue toward sovereign, low‑cost, infant‑scale compute for India, prompting Renu to discuss power‑to‑hardware economics and the role of local OEMs.
Speaker: Arya Bhattacharjee (Infosys)
Google’s 20 ms query‑response benchmark defined the modern web. For India we should aim for ~120 ms for any query at population scale.
Uses a historic performance target to set a concrete, aspirational metric for the Indian market, turning an abstract “scale” discussion into a measurable engineering goal.
Guided the conversation toward latency‑focused system design, influencing later remarks about network upgrades (800 Gbps Ethernet) and the need for specialized inference hardware.
Speaker: Renu Raman
Models are the new abstraction layer, just as hypervisors separated physical machines from VMs and OSes separated hardware from applications.
Frames the rise of open/closed AI models in familiar systems‑architecture terms, making the concept of “distributed computing 3.0” accessible and highlighting future innovation pathways.
Prompted Abhishek Singh’s question about open‑source models and led to a broader discussion on the ecosystem of open vs closed models, reinforcing the theme of layered abstraction.
Speaker: Renu Raman
If India can build a $250 B hardware stack (10 GW power), it can spawn its own SAP‑like, Palantir‑like companies – but the business model must achieve ~50 % gross margin, not the 30 % typical in India.
Links macro‑economic investment to concrete entrepreneurial outcomes, challenging Indian firms to aim for higher‑margin, globally competitive software businesses.
Steered the conversation toward the viability of Indian “unicorns” in the AI stack, encouraging participants to think about business models, not just technology, and setting up the final discussion on venture funding.
Speaker: Renu Raman
Venture funding mismatch: 20 crore for many startups vs $100 M for a single engineer. Do we have deep enough capital to build Nvidia‑scale companies in India?
Raises a systemic financing issue that underpins all technical ambitions, questioning whether the ecosystem can sustain the massive capital needs identified earlier.
Created a turning point where Renu turned the question back to the asker, highlighting the need for self‑reflection among founders and investors, and concluding the session with a focus on ecosystem‑wide collaboration.
Speaker: Abhishek Singh
Overall Assessment

The discussion was driven forward by a series of high‑level framing statements (Renu’s long‑term tech cycles, AI’s 95 % work impact) and concrete pain‑point revelations (Jensen’s CPU‑bound data processing, Lalit’s 90 % pilot failure). Each of these sparked new sub‑threads—hardware‑software blend, sovereign Indian compute, latency benchmarks, and financing challenges—that deepened the dialogue from visionary hype to actionable strategy. The most pivotal moments were when participants shifted from abstract potential to real‑world constraints, prompting the audience to consider not only what technology is possible, but how it can be built, funded, and scaled within India’s unique ecosystem.

Follow-up Questions
What is the future of India in AI and semiconductor? How can India capitalize and make a mark?
Understanding strategic pathways for India to become a leader in AI and semiconductor ecosystems is crucial for policy, investment, and talent development.
Speaker: Arya Bhattacharjee
What will open models do for distributed computing? Will we see Distributed Computing 3.0?
Exploring the impact of open‑source AI models on the next generation of distributed systems helps anticipate architectural shifts and ecosystem opportunities.
Speaker: Abhishek Singh
Is sub‑millisecond/sub‑second query processing at population scale (≈1.5 billion users) feasible at low cost (≈200 rupees/month)?
Achieving ultra‑low latency at massive scale is key for consumer‑facing AI services in India; feasibility analysis informs infrastructure and algorithm design.
Speaker: Abhishek Singh
What kinds of corporations can emerge from India’s AI/semiconductor push? Could we see companies akin to NVIDIA, Palantir, SAP, or Oracle?
Identifying potential new industry champions guides ecosystem building, talent pipelines, and investment focus.
Speaker: Abhishek Singh
Do Indian venture capitalists and private equity have sufficient deep pockets to fund massive AI/semiconductor ventures comparable to global players?
Funding depth determines whether India can sustain the multi‑billion‑dollar hardware and software investments needed for a sovereign AI stack.
Speaker: Abhishek Singh
How will a 10 GW AI infrastructure business materialize in India? What are the pathways for that business to come to India?
Clarifying the supply‑chain, manufacturing, and financing routes for large‑scale AI compute capacity is essential for national planning and private sector participation.
Speaker: Audience (unidentified)
What is the optimal memory hierarchy (number and types of memory) for AI inference systems?
Memory architecture directly affects performance, power, and cost; research is needed to decide between single vs. multiple memory types for inference workloads.
Speaker: Renu Raman
How should inference‑only distributed systems be architected differently from training systems?
Inference workloads have distinct scalability and latency requirements; defining a dedicated architecture could improve efficiency and cost.
Speaker: Renu Raman
How will the coexistence of open and closed AI models shape the abstraction layer analogous to hypervisors in cloud computing?
Understanding this dynamic will inform standards, interoperability, and competitive strategies for model providers.
Speaker: Renu Raman
What is the optimal deployment strategy for AI‑ready geolocal data centers in India?
Regional data centers are critical for sovereignty and latency; research is needed on location, capacity, and partnership models.
Speaker: Renu Raman
What latency benchmark (e.g., 120 ms) should be targeted for query responses at national scale, and what resources are required to meet it?
Setting realistic performance targets guides infrastructure investment and algorithmic optimization for mass‑market AI services.
Speaker: Renu Raman
What does the investment and manufacturing ecosystem need to look like to build 10 GW of AI compute capacity in India?
Analyzing capital requirements, OEM participation, and domestic fab capabilities is vital for achieving sovereign AI compute at scale.
Speaker: Renu Raman
How can Indian AI companies achieve higher gross margins (e.g., 50 %) compared to current averages (~30 %) and approach models like Palantir’s 95 %?
Exploring business‑model innovations and cost structures can make Indian AI firms globally competitive and financially sustainable.
Speaker: Renu Raman
How can AI be applied in semiconductor fabs for real‑time defect detection, yield improvement, and design feedback?
AI‑driven fab analytics could dramatically reduce costs and improve yields; research is needed on data pipelines, edge inference, and integration with design tools.
Speaker: Renu Raman
How can graph databases be leveraged to organize enterprise data (email, documents, Teams) for AI applications?
Effective data graphing underpins many AI use cases; studying methods to build and maintain such graphs at scale is essential for enterprise AI adoption.
Speaker: Renu Raman
What are the details and implications of the upcoming accelerated data processing initiative announced by Jensen Huang?
The initiative could shift a large portion of data‑processing workloads from CPUs to accelerated hardware, impacting software stacks, cost models, and market dynamics.
Speaker: Jensen Huang

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Survival Tech Harnessing AI to Manage Global Climate Extremes


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel convened to explore how artificial intelligence can be applied to India’s climate challenges, especially extreme weather and sustainability [1]. Amit Sheth explained that the Indian Research Organisation (IRO) was created after a December 2023 meeting with the Prime Minister to develop original, small and agile AI models tailored to Indian needs rather than relying on large foundational models, focusing on weather, health and pharma verticals [20-24][26-34][35-38]. He emphasized building hyper-local models for extreme weather that integrate spatial-temporal dynamics without the “baggage” of pre-trained large language models [27-30][31-33].


M. Ravichandran highlighted that traditional physics-based forecasts capture spatial patterns but miss fine-grained temporal rhythms, requiring a fusion of numerical models with AI to predict high-impact events such as cloudbursts [47-61][62-66]. Shivkumar Kalayanaraman added that low-cost cameras, multimodal sensors and low-Earth-orbit satellites can provide real-time visual data that, when combined with generative AI, enable short-term cloud forecasting and insight-level fusion across modalities [76-84][85-89]. Praphul Chandra pointed out that achieving such specificity depends on “small-data fine-tuning” of large foundation models, questioning how minimal a dataset can be while still delivering accurate domain performance [104-108]. Karthik Kashinath argued that transfer learning and benchmark datasets (similar to ImageNet’s role in computer vision) are essential to adapt global models to India’s hyper-local, data-sparse regions [110-114][115-119].


Shivkumar described the ANRF’s dual funding streams (a grant programme for non-profit research and a one-lakh-crore RDI capital fund for private-sector translation) along with targeted initiatives such as the AI-for-Weather & Climate track and the Leapfrog Demonstrators for Societal Innovation [193-210][216-224][225-232]. He noted recent hackathons in partnership with IBM and IIT Delhi that provide curated datasets to accelerate prototype development, while urging collaboration with agencies like NDMA and MoES [218-224]. Akshara and Sandeep emphasized that public-private partnerships, open IP licensing, and industry-academia consortia are being promoted to move solutions from TRL 1-2 to operational readiness and to attract both government and philanthropic capital [233-242][280-287].


Manish Bhardwaj illustrated how AI-enhanced early-warning systems that fuse terrestrial, satellite and sensor data can improve evacuation planning for multi-hazard events such as cloudbursts, landslides and flash floods, thereby reducing mortality [161-169][174-186]. Praphul Chandra gave a concrete example of AI-driven hyper-local solar generation forecasts feeding into India’s digitized grid (India Energy Stack) to enable demand flexibility and better load balancing [291-298]. Dev Niyogi argued that weather services must become decision-oriented “digital twins” that translate forecasts into actionable, monetizable products (such as insurance pricing) rather than generic climate data [313-322][327-330][337-342]. The participants agreed that building trustworthy, validated models and establishing robust data sharing, funding, and partnership mechanisms are critical steps toward operational AI solutions for climate resilience in India [144-147][155-158][233-242].


Overall, the discussion concluded that coordinated AI research, targeted funding, and cross-sector collaboration can transform climate prediction into actionable services that protect vulnerable populations and support sustainable economic growth [156-160][331-336].


Keypoints


Major discussion points


Building purpose-built, hyper-local AI models instead of relying on large foundational models – IRO is focusing on “very agile, small, specific models” for extreme-weather use-cases and deliberately avoiding the “baggage” of big language models [26-30]. Panelists stressed the need to fuse physics-based numerical forecasts with AI time-series methods to capture both “elephant”-scale and “ant”-scale phenomena and to improve prediction of events such as cloudbursts [47-66].


Data availability, open-access and interdisciplinary collaboration as the backbone of trustworthy AI forecasts – India’s massive historical weather archives provide a “huge” data resource that must be opened up for broader use, and young talent should be mobilised to “interpret the data differently” and reduce error and uncertainty [126-138]. Experts also highlighted the creation of benchmark datasets and metrics (e.g., the ECMWF ERIF set) as essential for achieving operational quality at hyper-local scales [263-270].


Funding mechanisms and public-private partnerships to move from research to operational products – ANRF’s grant programmes, the one-lakh-crore RDI fund, hackathons, and “Leapfrog Demonstrators for Societal Innovation” are being deployed to catalyse AI-for-weather projects and to ensure industry-academia collaboration [193-232]. The venture-capital perspective reinforced that startups must partner with government, segment markets, and identify monetisable pathways (e.g., insurance, enterprise services) while leveraging public-private capital [280-287].


Concrete AI-driven applications for climate resilience – Early-warning dissemination through trusted DPG systems, multimodal sensor networks (including low-cost cameras and LEO satellites), voice-assistant tools for household-level action, digital twins for decision-specific forecasting, and AI-enhanced renewable-energy grid management were all cited as high-impact use cases [70-75][76-84][91-100][291-298][337-342].


Technical challenges that must be solved to realise these applications – Small-data fine-tuning of foundation models, transfer learning across data-rich and data-sparse regions, and establishing validation/verification pipelines were identified as research frontiers that will determine trust and adoption [104-108][109-114][263-270].


Overall purpose / goal of the discussion


The panel was convened to map India’s strategic roadmap for leveraging artificial intelligence to tackle climate-related challenges (particularly extreme weather and sustainability) by (i) defining the scientific and technical directions (hyper-local modelling, data fusion, validation), (ii) identifying institutional and funding levers (ANRF, RDI, public-private consortia), and (iii) pinpointing immediate, high-impact applications that can be piloted and scaled across the country.


Tone of the discussion


The conversation began with an optimistic, visionary tone, emphasizing the promise of AI-driven breakthroughs for climate science. As the dialogue progressed, it became increasingly pragmatic, focusing on concrete hurdles (data openness, benchmark creation, trust) and concrete mechanisms (funding programmes, partnership models). The closing remarks retained the collaborative spirit but shifted toward a call-to-action, urging stakeholders to translate ideas into operational solutions. Overall, the tone remained constructive and forward-looking throughout.


Speakers

Akshara Kaginalkar – Panel moderator/host of the AI Summit discussion.


Amit Sheth – Founder/CEO of IRO (Institute for Research in AI for climate and sustainability); leads development of small, agile AI models for extreme weather and health applications.


M. Ravichandran – Secretary, Ministry of Earth Sciences, Government of India; oversees weather, climate and sustainability initiatives. [S16]


Manish Bhardwaj – Secretary, National Disaster Management Authority (NDMA), India; responsible for disaster preparedness and early-warning systems. [S15]


Shivkumar Kalayanaraman – AI researcher and speaker on multimodal models for weather forecasting and climate-impact applications.


Sandeep Singhal – Venture capitalist; manages investment portfolios in energy transition, mobility and climate-tech startups. [S1]


Dev Niyogi – Professor, University of Texas at Austin; affiliated with IIT Roorkee; member of the founding team of IRO. [S2][S3]


Praphul Chandra – Professor; Head of the Center for Excellence for Data Sciences and Dean R & D, Atria University, Bangalore. [S8][S9]


Karthik Kashinath – Director, Center for Excellence for Data Sciences; Distinguished Scientist at NVIDIA. [S10]


Audience – Audience participant who raised a question on insurance and climate risk.


Additional speakers:


Dr. Shiv Kumar – CEO, NRF (National Research Foundation); champion of AI for science and supporter of the panel discussion.


Full session reportComprehensive analysis and detailed insights

The panel convened to chart a national roadmap for applying artificial intelligence to India’s climate-related challenges, with a particular focus on extreme weather, disaster resilience and sustainability [1-13]. The moderator, Akshara Kaginalkar, introduced a cross-sectoral panel that included the Secretary of the Ministry of Earth Sciences, a venture capitalist, university professors, the NRF CEO and the NDMA secretary, underscoring the breadth of expertise required for the task [2-14][15-17]. Akshara also referenced the “dew effect” as an illustration of how micro-scale phenomena can influence larger weather patterns, highlighting the need for models that operate across scales [260-262].


Dr. Amit Sheth explained that IRO was created after a direct meeting with the Prime Minister in December 2023, where the leader asked for home-grown AI solutions that would not simply imitate Western or Chinese models [20-24]. IRO’s strategy is to develop “very agile, small, specific models” for hyper-local extreme-weather problems, deliberately avoiding large foundational models whose training data and computational baggage are opaque [26-31]. The institute also plans to extend this approach to health and pharma verticals, leveraging partnerships with the Indian Pharma Alliance and health organisations [32-38].


Dr. M. Ravichandran, Secretary, Ministry of Earth Sciences, highlighted the limits of conventional physics-based forecasts, which capture broad spatial patterns but miss fine-grained temporal rhythms. He used the metaphor of “the elephant plus the ant” to argue that both spatial (physics-driven) and temporal (AI-driven) components must be fused to predict high-impact events such as cloudbursts [47-66]. He also emphasized that robust validation and verification frameworks are essential to build confidence in AI-augmented forecasts [145-147]. This hybrid vision was echoed by Manish Bhardwaj, who called for a trusted, low-cost early-warning system that blends AI with terrestrial sensors, satellite feeds and existing alert-generation agencies, thereby improving granularity even where sensor coverage is sparse [70-75][175-180].
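The “elephant plus ant” fusion, a physics model supplying the large-scale spatial forecast and a data-driven component correcting the local temporal rhythm, can be sketched in miniature. The Python below is purely illustrative (the function name and the exponential-smoothing scheme are our own assumptions, not anything presented on the panel): it corrects the next physics forecast at a station using the recent history of forecast errors.

```python
def hybrid_correct(physics_forecasts, observations, next_physics, alpha=0.3):
    """Blend the 'elephant' (physics model spatial forecast) with the
    'ant' (a data-driven temporal correction): an exponentially weighted
    average of recent station errors is subtracted from the next raw
    physics value."""
    bias = 0.0
    for forecast, observed in zip(physics_forecasts, observations):
        # Update the running estimate of the model's local bias.
        bias = alpha * (forecast - observed) + (1 - alpha) * bias
    return next_physics - bias

# A station where the physics model has been running about 2 degrees warm:
corrected = hybrid_correct([12.0, 12.0, 12.0, 12.0],
                           [10.0, 10.0, 10.0, 10.0],
                           next_physics=12.0)
```

Real hybrid systems would replace the smoothing step with a learned model, but the division of labour is the same: physics for the broad field, statistics or AI for the local rhythm.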


Prof. Shivkumar Kalayanaraman described multimodal AI pipelines that combine time-series models with visual data from inexpensive cameras, infrared or multispectral sensors, and low-Earth-orbit satellites. He argued that the focus should shift from raw data-fusion, which is “painfully complex”, to “insight-level fusion” that can deliver now-casting forecasts of clouds a few hours ahead [76-84][85-89]. Such multimodal approaches could be integrated into existing now-casting and forecasting systems to amplify impact.
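Insight-level fusion can be illustrated with a toy sketch: each modality (sky camera, satellite imagery, station time series) reports only its own event probability, and those probabilities, rather than the raw data, are combined. The weighted log-odds average below is our own illustrative choice, not a method the speaker specified.

```python
import math

def fuse_insights(probs, weights=None):
    """Fuse per-modality event probabilities (e.g. 'rain in the next hour')
    by weighted averaging in log-odds space, so that no modality's raw
    data ever needs to be merged -- only its insight (a probability)."""
    if weights is None:
        weights = [1.0] * len(probs)
    logit = sum(w * math.log(p / (1 - p)) for p, w in zip(probs, weights))
    logit /= sum(weights)
    return 1 / (1 + math.exp(-logit))

# Three independent nowcasts: sky camera, satellite imagery, station series.
fused = fuse_insights([0.8, 0.6, 0.7])
```

Averaging in log-odds space keeps the fused value a valid probability, and the weights let a more trusted modality carry more influence.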


Data was identified as both a strength and a bottleneck. Ravichandran noted that India possesses IMD weather records stretching back some 150 years, but these archives are not yet fully exploitable because they remain siloed [126-128]. He called for open-access policies that would allow the nation’s “young brains” to interpret the data in diverse ways, reduce model error, improve initial conditions and enhance down-scaling to kilometre-scale forecasts [129-144][145-147]. The need for benchmark datasets was reinforced by Prof. Karthik Kashinath, NVIDIA, who likened the situation to the ImageNet breakthrough: creating standard datasets and metrics (e.g., ECMWF’s ERA5-based WeatherBench) would drive operational quality at hyper-local resolutions [263-270]. He also pointed to super-resolution techniques already used in the Earth-2 programme, suggesting that generative-AI methods could further shrink the resolution gap within the next two to three years [271-274].
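WeatherBench-style benchmarks typically score forecasts with a latitude-weighted RMSE: each grid cell is weighted by the cosine of its latitude so that the shrinking cells near the poles do not dominate the score. A minimal, simplified sketch of that metric follows (official implementations operate on gridded arrays with more careful area normalisation):

```python
import math

def lat_weighted_rmse(forecast, truth, lats):
    """Latitude-weighted RMSE over grids indexed [lat][lon]: cells are
    weighted by cos(latitude), normalised by the mean weight, so that
    high-latitude rows do not dominate the error."""
    weights = [math.cos(math.radians(lat)) for lat in lats]
    mean_w = sum(weights) / len(weights)
    total, n = 0.0, 0
    for row_f, row_t, w in zip(forecast, truth, weights):
        for f, t in zip(row_f, row_t):
            total += (w / mean_w) * (f - t) ** 2
            n += 1
    return math.sqrt(total / n)
```

An agreed metric like this is what turns scattered model claims into a comparable leaderboard, the role ImageNet played for vision.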


The question of how to cope with data scarcity generated divergent views. Praphul Chandra asked how small a dataset could be while still fine-tuning a large foundation model for a specific climate task, framing this as a potential breakthrough [104-108]. By contrast, Sheth argued for building original, lightweight models from scratch, avoiding the “baggage” of pre-trained large models altogether [26-31]. A related disagreement concerned transfer learning: while Chandra advocated re-using knowledge from data-rich regions for India’s data-sparse locales [110-114], Sheth’s approach favours locally engineered models that do not depend on external pre-training [26-31].
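The “small-data fine-tuning” idea can be sketched as follows: the foundation model’s feature extractor stays frozen, and only a tiny task-specific head is trained, so a handful of labelled examples can suffice. Everything below (the toy backbone, the SGD loop) is an illustrative stand-in, not either speaker’s actual method.

```python
import math

def frozen_backbone(x):
    """Stand-in for a pre-trained foundation model's feature extractor;
    its weights stay fixed during fine-tuning."""
    return [math.sin(x), math.cos(x), x]

def fine_tune_head(samples, lr=0.1, epochs=2000):
    """Small-data fine-tuning: train only a small linear head on top of
    the frozen backbone, so a handful of labelled examples can adapt the
    model to a new task without touching the backbone."""
    dim = len(frozen_backbone(0.0))
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in samples:
            feats = frozen_backbone(x)
            err = sum(wi * f for wi, f in zip(w, feats)) + b - y
            # Gradient step on the head parameters only.
            w = [wi - lr * err * f for wi, f in zip(w, feats)]
            b -= lr * err
    return w, b

# Five labelled points are enough to fit the head for y = 2*sin(x) + 1.
data = [(x, 2 * math.sin(x) + 1) for x in (-1.0, -0.5, 0.0, 0.5, 1.0)]
w, b = fine_tune_head(data)
```

With the backbone frozen, the optimisation reduces to a small linear regression, which is why so little data is needed; Sheth’s alternative is to skip the large backbone entirely and build a small model from scratch.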


Funding and translation pathways were outlined by Prof. Shivkumar Kalayanaraman on behalf of the National Research Foundation (ANRF). The foundation runs an AI-for-Science & Engineering programme with a dedicated AI-for-Weather & Climate track that collaborates with MoES on the Mission Mausam programme [196-199]; it also manages the RDI fund, a one-lakh-crore capital fund for private-sector scaling, and will shortly launch a “Leapfrog Demonstrators for Societal Innovation” scheme that rewards high-impact, non-incremental solutions [193-210][216-224]. Recent hackathons, co-organised with IBM and IIT Delhi, already provide curated datasets to accelerate prototype development [218-224]. ANRF has also launched Translation Research Centres that require joint industry-academic participation to move prototypes toward commercial deployment [190-193][225-227].


Public-private partnership (PPP) models were repeatedly stressed as essential for moving from research (TRL 1-2) to market-ready services (TRL 5-6). Akshara highlighted the government’s push for consortium-based proposals, open IP licensing and hub-spoke collaborations, which would allow startups to pick up academic IP and translate it quickly [233-242][245-251]. Sandeep Singhal added that successful scaling requires clear market segmentation, distinguishing public-good services from monetisable private-good offerings such as insurance or enterprise risk tools, and that both government capital and emerging philanthropic funds are ready to back such ventures [280-287][340-345].


Concrete application domains emerged across the discussion. Bhardwaj described AI-enhanced early-warning pipelines that could predict cascading hazards (cloudburst → landslide → flash flood) and enable timely evacuations, thereby reducing mortality [161-169][174-186]. Praphul Chandra illustrated how hyper-local solar generation forecasts, fed into the India Energy Stack, can support demand-flexibility and peer-to-peer energy trading, turning weather predictions into direct grid-management value [291-298]. Dev Niyogi introduced the concept of decision-specific “box models” and digital twins that translate raw forecasts into actionable recommendations, ranging from long-term hedging to immediate shade-seeking decisions, thereby turning weather into a monetisable product and addressing the “tragedy of the commons” [313-322][327-330][318-322].
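The cascading-hazard chain Bhardwaj describes (cloudburst → landslide → flash flood) can be caricatured as a chain of conditional probabilities with per-hazard alert thresholds. The probabilities and thresholds below are invented for illustration; an operational system would derive them from terrain, rainfall and sensor data.

```python
def cascade_alert(p_cloudburst, p_landslide_given_cb, p_flood_given_ls):
    """Chain conditional hazard probabilities downstream and return the
    hazards whose probability crosses its (illustrative) alert threshold."""
    p_landslide = p_cloudburst * p_landslide_given_cb
    p_flood = p_landslide * p_flood_given_ls
    thresholds = {"cloudburst": 0.6, "landslide": 0.3, "flash_flood": 0.2}
    probs = {"cloudburst": p_cloudburst,
             "landslide": p_landslide,
             "flash_flood": p_flood}
    return [hazard for hazard, p in probs.items() if p >= thresholds[hazard]]

# A strong cloudburst signal propagates down the chain:
fired = cascade_alert(0.7, 0.5, 0.8)
```

Because each downstream probability is conditioned on the hazard upstream, a confident cloudburst nowcast automatically raises the alert level for the hazards it can trigger.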


From the discussion, four recurring themes emerge: (1) hybrid AI-physics or AI-sensor systems for hyper-local forecasting; (2) open, benchmarked data and collaborative consortia to build trustworthy models; (3) robust PPP frameworks with open IP to accelerate translation; and (4) a preference for lightweight, domain-specific models or fine-tuned foundations that can be deployed rapidly [47-66][70-75][126-144][233-242][104-108][263-270]. Remaining points of contention, namely whether to prioritise bespoke small models versus fine-tuning large foundations, and how expansive digital-twin architectures should be, highlight the need for coordinated research agendas that accommodate both approaches [26-31][104-108][313-322][331-333].


In closing, participants identified a set of immediate research and policy actions: develop benchmark hyper-local datasets; explore small-data fine-tuning and transfer-learning pipelines; establish validation and verification protocols for AI-augmented forecasts; open legacy weather archives to the broader community; design multimodal insight-fusion frameworks; launch voice-based personal resilience assistants; pilot AI-driven climate-risk insurance products; and embed AI forecasts within the India Energy Stack for renewable-grid optimisation. These steps, underpinned by the NRF’s funding mechanisms and the IRO’s model-building agenda, aim to transform India’s climate prediction ecosystem from a purely physics-driven service into an actionable, decision-oriented platform that safeguards vulnerable populations while supporting sustainable economic growth [263-274][291-298][313-322][193-224][233-242].


Session transcriptComplete transcript of the session
Akshara Kaginalkar

top-down approaches in terms of finding the AI solutions, India’s critical problems, and weather and climate is a major vertical. So welcome, sir. We have Dr. Ravichandran, he doesn’t need any introduction, but he’s the Ministry of Earth Sciences Secretary, and everything and anything under weather and climate and sustainability, sir, is heading it, and we look forward to your contribution. We have Mr. Singhal, who is a venture capitalist and he will give a very, very important aspect about how funding and economy is going to drive the solutions in AI for climate. Professor Dev Niyogi, he is professor from UT Austin, that is University of Texas at Austin. Also, he’s affiliated to IIT Roorkee and now one of the founding team of IRO.

Again, sir doesn’t need any introduction. We have Dr. Shiv Kumar, who is NRF CEO and a very, very great supporter of AI for science now. And we look forward to your support as well as your inputs on how we can proceed on this. And we have Mr. Manish Bharadwaj, who has a very critical role in India as the secretary of disaster management, NDMA. And we have Professor Praphul Chandra. He’s heading the Center for Excellence for Data Sciences, and he’s Dean R&D, Atria University, Bangalore. And we have Dr. Karthik, who is the director of the Center for Excellence for Data Sciences. He’s a distinguished scientist and engineer at NVIDIA. And he has played a major role in the very famous AI models, which all of us are hearing about.

And they are, you know, changing the scenario of modeling and the way science is going to happen. So welcome. So I look forward to your contribution. Oh, okay. Can we stand just here? Okay. So before we open up the panel, I just wanted to have a very quick question to Professor Sheth in terms of what was the objective, what we are looking for when you started IRO as, you know, in India, we wanted to have this type of a research organization. So if you can quickly tell us about what was the thought process behind IRO and what do you foresee?

Amit Sheth

So the idea of IRO was kind of initiated when I had a chance to meet the PM in December of 2023. I was asked to come and discuss with him. He is always very curious about technology and so he wanted to hear about the ideas on AI. Since I had multiple interactions on research and AI with him during his CM time, this was a fantastic opportunity for me to meet and kind of discuss where India can shine and not necessarily follow the West or China in what we need to do. And so I presented both the core foundational AI focus on enterprises, not necessarily consumer and web, and some of the areas where we can make big economic and social impact, as well as how we can support the startup ecosystem, where AI can empower deep AI technology that drives global products from India.

So that was a broad idea. And so IRO currently is developing original work on building very agile, small, specific models. In this context, for example, if you want to make a model for serving an extreme-weather-related issue that is hyper-local, then all the spatial-temporal aspects, all the relative modeling aspects, all the prediction algorithms, those are the things that we will bring in. But we will not be building on top of large language models or so-called foundational models, which come with a lot of baggage. We don’t know what kind of data they have been trained on, many other things. So original research in creating new, small, agile models. And so it will be a platform on top of the India AI structure to be able to create models.

And one area in which we would love to create models, we have technology expertise here, Dev and many other people. And we can, you know, so earth science, including disaster, including, you know, sustainability issues, is one of the verticals. The other two are health and pharma. Pharma, we have a very strong partnership with the Indian Pharma Alliance and the 23 major pharma companies, which is 80% of India’s pharma, you know, kind of output. And similarly, we are working with some health partners and all. But here you see the potential partners that we could have in making impact in the sustainability and health areas. So thank you.

Akshara Kaginalkar

We would like to now start with one open question and then we’ll have an individual question because, I’m very sorry, the time is very short. The whole format is actually we had a one day full workshop and we had to squeeze it in to start. Yeah, so one disclaimer, it’s not my personal thing, but I may request you to finish it in time. Definitely would like to hear a lot from all of you, but due to constraint of time. So first, the opening question: what we’d like is for each of you to say what would be one AI application or a discovery that would excite you about AI helping in this domain of climate as well as extreme events and sustainability as a broader thing, because everything is driven by weather and climate.

We have energy, we have health, we have economics and we have agriculture, many, many aspects of it. So we’d like to see what do you foresee and how do you would like to say that which one development will help us. And we’ll start with you, sir.

M. Ravichandran

When you talk about the weather, of course, it is now depends on various applications. So when we are doing the weather forecast, earlier we just to tell that in suppose how the elephant is going, I’m able to see that elephant, how it is going. I’m able to tell that tomorrow it will come here. But now the problem is whether because of the climate change and other things, the space and time has changed. Now, we have to see on the elephant some ant is sitting. That ant, how it is going, we want to know. So we want to see the elephant plus ant. So I want to see two things. One is time series. Other one is a spatial.

If anything on spatial, I think the physics based numerical model is doing a better job. But if you want to go for time series, local rhythm, then AI is better. So we need to integrate, or we need to fuse, both together in order to understand the local weather at a fine scale. And suppose you want to go for a cloudburst. You cannot do it with only a numerical model, or with AI alone. So, we need to blend both. That is more important. So, we want to go for high impact weather events, how to predict, especially cloudburst and other things. We do not know how to predict. So, that is why we are looking at whether AI can help or not.

That is one of the objectives. Thank you.

Manish Bhardwaj

I fully agree with what Ravichandran sir has just said. From the early warning point of view, the idea is to have DPG sort of asset for the public so that we are able to disseminate early warning for all. So, idea is to have trusted early warning for all to be given to the citizens. at low cost and this is where AI can definitely play a supporting role. It cannot be purely an AI. It has to be a hybrid model which has to be connected with the physical systems of the various sensor fabric and the satellite data which is available to us from various alert generating agencies but to have a source of a trusted and reliable and resilient early warning systems wherein I definitely foresee AI playing a great, great role.

Thank you. Yeah, I

Shivkumar Kalayanaraman

think I’ll just double down on the multimodal models that are coming out. I mean one is the time series model. There are spatial models, and I will also mention that today with generative AI you can just put a camera pointed to the sky and then you can actually not only see the patterns of clouds, you can forecast one hour ahead, two hours ahead, even four hours ahead. Make it an IR camera or make it some other multispectral camera, when all the costs are dramatically dropping. So you can imagine a network of sensors that complements also the great work that’s being done in Mission Mausam and so on. And plus now with the low Earth orbit satellites going up and also having much more Earth observability, I think the opportunity to fuse insights as opposed to fusing data.

I mean, data fusion is a painfully, you know, mind-bogglingly and unnecessarily complex thing. But now there’s an opportunity to take insights from A, insights from B and fuse it across modes and also forecast across these modes. I think that’s a wonderful opportunity. I think that’ll have a huge thing. And once you integrate that into, you know, sort of nowcasting and other systems, I think we can have a great amount of impact. The other dimension is, of course, AI helping in discoveries of new materials and, you know, sort of simulations and so on. I think these have wonderful opportunities. And of course, as you know, the Nobel Prize for Chemistry went to somebody from an AI background.

Sure.

Sandeep Singhal

So I will put a consumer lens to this. Sirs have brought up the point around what is the technology needed. I think with what is happening with the voice agents right now, I think there is a need to have a simple voice framework or a voice sort of app which allows you to send not just information, but actually create a resilience approach for the person, who can literally just click a button and say, OK, in the next week, these are the things that you need to do to survive whatever is happening from a climate perspective. Right. Or what do you need to do in the next month? So there is a forecasting aspect to it.

But more importantly, how it integrates with my life. Do I need to stay at home? If I’m a farmer, what do I do? If I’m a, you know, labourer, what do I do? So that ability to bring that to my day to day life and allow me to actually act a certain way because of what I expect to see in the environment around me. And that includes daily air. I’ll

Dev Niyogi

just add one term. You guys know this word, Jugaad? This is a very India thing, Jugaad. So there is a framework that is mathematically feasible, that we can model very well, that follows equations, that follows laws of nature. And then there is a human element: we always beat the system and make that happen. Mapping that has been very difficult in predictive models, and this is where I think AI is coming into play: it brings the human dimensions and it brings the societal aspect together with the physical constraints, and this is what is most exciting about it, in a way that it will become much more accessible. That is where I think we’ll be going. We had heard also about the agentic AI; now I heard about the ant AI, thanks to you.

Praphul Chandra

I’m going to pick up where Professor Niyogi and Dr. Shiv Kumar left off. You know, we work across several AI foundation models, in biology, in materials, and we have looked at foundation models in weather. I think the breakthrough that I am most anxious to look for is what we call small data fine tuning. What that means is that when you look at these large foundation models they are fairly general in their applicability, and as Professor Sheth was saying, when you have to fine tune them for a specific use case you still need data. How small can that data be? Can you use small data to fine tune large foundation models? I think if you are able to have that breakthrough it has applications across the multiple domains that we talked about.

Karthik Kashinath

I think a lot has already been shared which is very exciting on many different fronts. One thing I would like to see more used in practice is transfer learning which of course some regions of the world are data rich and some others are data sparse. Problems are shared across the planet. The physics of weather and climate are the same no matter where you are in the planet. But at the same time, there’s uniqueness at hyperlocal scales. But if we can transfer learn efficiently from one region to another with constraints of what exactly we’re trying to transfer learn, I think that would be very impactful.

Akshara Kaginalkar

Thank you, Dr. Karthik. I think we have a mic here. We saw right from the spatial, as sir said, it’s like from Akashse and Tak, we can see everything. And I think that matters. I remember once I think I was discussing with sir, he said even the dew effect you have on the immediate temperature and that can affect your surrounding and everything. So from small to big is definitely there. And AI also from small to big we should see. And that leads to now I will ask the next round of questions, very, very specific to areas in which all of you are working as well as having a lot of influence, and that’s where we would like to hear from you and to have a direction in what way we can go.

At the end of this panel, that’s what, you know, can we all consolidate and can we look at, you know, what are the three to four immediate things which we can do. And with that respect, I would like to ask Dr. Ravichandran, how can India’s national capabilities in AI research, technology development, and very importantly, human resource also, evolve to enable the transition from current physics driven prediction systems to AI enabled user specific decision systems? What are the bottlenecks in that and how can we overcome them?

M. Ravichandran

So as pointed out, basically we have a capability; basically one of the strengths what we have is basically the data. The data volumes are huge nowadays because we have IMD’s legacy records going back some 150 years, as well as data available now. Now how to utilize this data? And we have young brains of so many young people but we have not fully utilized that one, because each one can interpret the data in a different way. But finally it has to come out into a concrete solution. When you talk about AI and weather, why do we want to go for AI first of all? Because in the numerical model, we have a lot of assumptions. Because of those assumptions, the error grows.

Now that error grows; whether with AI we can reduce that is number one. When your initial condition is better, you can predict better. So we have to have an initial condition in a better way by reducing the error. So I think many people, even some of the people, many people are working in AI, different people. I think we need to pull the many resource people into our domain so that they can look at data differently and also they can see how to minimize the error and also how to reduce the uncertainties. And also there are various techniques to improve the forecast. So that’s what I, because nowadays the downscaling is one of the important things.

In the large scale model, it fails. So the AI can downscale in a better way to the localized, suppose one kilometer resolution weather forecast; we want to forecast, how can we do it. So we need to have more and more minds and more and more people have to work on it. And I think we need to open up the data, so that, that means, different people can come back and work on that. I have only one important thing, basically this: when you are talking about AI/ML, the trust is more important, as you pointed out. I think we need to have better trust in the forecast system. I think there’s a need for validation and verification; that is also very important, and AI/ML can make it possible.

So our capabilities are huge, but we need to, what is called, utilize them with the strength of our data. Because now even biology people are working in AI/ML. The same people can work on this. One more important point is, our people, we are always addicted to the same set. We are thinking only this is the way, but there are multiple ways. That’s why some other discipline people also look at this, because this is data driven. Other disciplines can look at it differently. We can have some pathway or way forward. That may be one of the things we can look at.

Akshara Kaginalkar

That’s a very, very important point because we look only weather from maybe only physics angle or weather angle. So, looking at that is very important. And that leads to, you know, what is important for the disaster management service, we would like to ask because highly dependent on the extreme events and the managing that is very difficult. So, how do you foresee adoption of AI for infrastructural preparedness for disaster management and especially reducing the severity impacts on vulnerable population because cities and all maybe and those who have access to many good things they can handle, but we have large vulnerable population. So, how do you see AI helping in the last mile application?

Manish Bhardwaj

Very apt question. As you all are aware, India is vulnerable to multiple hazards: not only cyclones, tsunamis, earthquakes, landslides and flash floods, but even GLOFs, and looking at the vast geography, the population which can be impacted is large. It is very essential that, from the disaster management point of view, we have a system of adequate preparedness and early warning capabilities. Secondly, though the country, through a whole-of-government approach, has undertaken various mitigation measures, we can only mitigate the effect of the disasters. So we have to keep the population in a way so that, you know, the early warning system capabilities are of the highest order,

that we are able to minimize loss of lives. Now, this is a very important challenge. And various agencies, particularly the IMD, as Ravichandran sir has rightly said, have over the period of time developed enormous capability to predict, say, cyclone path and trajectory very clearly, five days ahead of its landfall. So, in a way, we are able to do timely evacuation, repositioning of the response teams, which helps in minimizing and even achieving zero mortality milestones. But there are other hazards. And secondly, the way the hazard scenario has unfolded in the last few years, it has become a multi-hazard, cascading hazard sort of scenario in which one hazard leads to other hazards.

So, there are incidents of cloudburst, which currently cannot be predicted because there are various technical issues behind it, but cloudbursts leading to landslides, leading to flash floods, are a serious concern. So how do we prepare ourselves given the current state of resources and the developments? This is where AI can definitely pitch in. So the idea is actually to get, from the various alert generating agencies, all the data which are coming from our terrestrial systems, the satellite data, the sensory data, and then to be able to use it for predictive forecasting, or also to better the nowcasting, to increase the granularity of even the early warning signal, because there are limitations on how many satellite systems we can put into place.

It is not possible to map each and every hill in the vulnerable areas. So this is where the complications arise. And since development also has to take place in the vulnerable areas, particularly in the Himalayan zone, the challenge is here to use technology to the maximum. What I foresee is that the availability of the data from multiple sources can definitely be analyzed and used, even with the current set of sensor network capabilities, to pinpointedly and accurately predict the forecast, the early warning signals, for the targeted population. And then it will help the district authorities, the state authorities, for timely evacuation and response and relief operations to be carried out.

So this is one field where NDMA particularly is collaborating with multiple national agencies and the IMD. And the Ministry of Earth Sciences is playing a very major contributory role in the development of such a DPG. I am very sure that the startup ecosystem in our country definitely carries the agility to collaboratively support the efforts of the NDMA and the national agencies in taking this mission forward. And this is where I believe that we can definitely increase our early warning capabilities, particularly regarding flash floods, glacial lake outburst floods, lightning and landslides. And we are very hopeful that, with the support of the IMD and the Ministry of Earth Sciences, we can take major steps towards even predicting or identifying the most vulnerable or potential cloudburst-type situations so that we are able to warn the public in time.

Take different steps towards even predicting or identifying the most vulnerable or the potential cloudburst type situation so that we are able to timely warn the public.

Akshara Kaginalkar

Thank you, sir. And it’s an important point, as Dr. Ravichandran has said, and which you have taken into the need of the data and the infrastructure, also linking that to the setup which we have, and we have seen it in the expo. So many people are working on climate and sustainability. How can we put that together and how can we have the best out of it? So that leads to a question to Dr. Shivkumar. NRF is enabling the research ecosystem as well as the product ecosystem. So we would like to see how NRF is helping in terms of creating AI funds, what advice you can give to the community on making and developing products, and what sort of support we can expect from ANRF on that.

Shivkumar Kalayanaraman

Okay. So for folks who may not be, how many of you know about ANRF? Maybe just I can get a show of hands. Okay. All right. Not too many, but so ANRF is a statutory body of the Government of India and Dr. Ravichandran is on my board as well. So this is a body which is, you know, sort of meant to catalyze research and development funding in India. So we have grant funding, oops, and also we have, you know, a capital fund called RDI, which is a one lakh crore fund, which is meant only for the private sector. The grant funding is typically for the, you know, not-for-profit research sector, which includes academia, labs,

you know, Section 8 companies and others, right? So research entities are recognized by SARU, DSIR and so on. So our thinking is that we not only have broad -based funding for, you know, like what National Science Foundation does, but we also have more focused funding in a mission mode. So we have a couple of programs that might be of interest. One is our AI for Science and Engineering is a program we have currently underway. And one of the tracks of that is AI for Weather and Climate. So it’s already there. And in addition, we are going to be launching a major program in about a month called Leapfrog Demonstrators for Societal Innovation. Leapfrog Demonstrators for Societal Innovation.

So the idea is that you take a societal problem, then rather than talking about it, let’s do something about it, okay? And then not do just an incremental thing. It should be a leapfrog demonstrator. And it should be a demonstrator, not just a theoretical thing. So these are the kinds of things we’re doing. And alongside it, we are also doing challenges, sir. We’ll be introducing more challenge-mode, you know, sort of things that we don’t see come bottom up in our proposal formats. So as part of that we are also collaborating deeply. Our AI for Science and Engineering, the Weather and Climate track, we are actually collaborating with MoES and with their Mission Mausam program. So we are linking, we are getting, you know, both the expertise as well as the data, so that we can put together the AI expertise along with the sensor expertise and data, and we hope to similarly collaborate with other parts of the government, and, you know, I would strongly urge collaboration from NDMA also at this stage.

So that is the general approach, and that accelerates things. I also want to mention that just two days back we announced a hackathon, an AI for Science and Engineering hackathon for Weather and Climate, actually. It is currently open and is done in partnership with IBM and IIT Delhi, and also in partnership with MoES and others; we have put out datasets and are encouraging some of the work there. But in addition, as I said, there is the societal innovation program, which can also admit newer types of proposals, where you bring disciplines together and actually go and solve real problems.

So I think that is the nature of what we will try to do. And then the RDI fund is meant for translation and scaling. In addition, we also have translation centers, with a program that is open right now. So these are the various programs and mechanisms we plan to run. The goal of all of this is to always focus on impact and work backwards from it, rather than doing undirected research. We want to drive research in a more directed way towards impact, but at the same time we do support curiosity-based, broad-based research as well. That is the balance we are trying to strike.

Akshara Kaginalkar

we are doing, if we would like to have consistent solutions, not only as a demonstration product but as something operational, where every day some services come out of it. How do you see the public-private partnership coming in? In all our mission-mode programs, the goal is to accelerate things from a lower technology readiness level, like TRL 1 or 2, to the mid-range, like TRL 5 or 6. That is the purpose. And as part of all of those programs, we are supporting work at a critical scale. So we are encouraging consortiums to come and bid, or a hub-and-spoke type setup. We are explicitly saying: do not make individual proposals; they have to be collaborative proposals.

In some of our programs, we have put out open IP licensing, so that a company or a startup can actually partner with academia, pick up the IP and quickly translate it. That will also encourage rapid translation. So we are introducing IP and other innovations to drive translation, and we are going to do this in a few more programs. Plus, we have the Translational Research Centers program, which mandates partnership with industry as well. So we are using different mechanisms, all of them driving collaboration. Plus the RDI fund, the one lakh crore fund; by the time it hits the market, it will become three or four lakh crores.

It is only for industry, but if industry does not have the capabilities, they must collaborate with academia and so on, so there will be demand for industry-academia collaboration coming from that side as well. We are attacking the problem from multiple directions, and all of this is meant to encourage collaboration for impact. So that collaboration leads us to industry. As we know, NVIDIA is very much involved in and pioneering many of the models coming in, and Dr. Karthik is part of the model development. Foundational AI weather and climate models such as Earth-2, GraphCast and AIFS, and many more, are now demonstrating good performance at a global scale.

So what further development do you see? Basically, how can we interpret and bring the physics into the AI models? And validation is very, very important, as sir has said, at a very local scale: we are talking about air quality at 400 meters, or floods at 10 meters, something like that. So how do you see what more is to be done to make models operationally robust at a hyper-local scale? Thank you.

Karthik Kashinath

Yeah, that's a rich question, but I'm going to keep it fairly brief because it could take the next 30 minutes to get through it. So I'll touch on three things. One is creating the benchmark datasets and the benchmark metrics that are needed to achieve operational quality. If you look at what has led to the developments at the global scale at 25-kilometer resolution, it is the ERA5 dataset from ECMWF and the benchmark problems defined on that dataset, like WeatherBench for example. So if we want to get down to the hyper-local scales, which of course depends on the region you're talking about and the types of metrics you care about, it would be very helpful to create the benchmark datasets and the associated benchmark metrics that can drive towards that.
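The benchmark metrics he refers to are concrete, simple quantities. WeatherBench's headline score, for instance, is a latitude-weighted RMSE; a minimal sketch follows, where the toy grid and values are purely illustrative, not any official benchmark:

```python
import numpy as np

def lat_weighted_rmse(forecast, truth, lats_deg):
    """Latitude-weighted RMSE, the headline skill score used by WeatherBench.

    forecast, truth: 2-D arrays of shape (n_lat, n_lon).
    lats_deg: 1-D array of grid latitudes in degrees.
    Grid cells shrink toward the poles, so each latitude row is weighted
    by cos(latitude), with the weights normalised to mean 1.
    """
    w = np.cos(np.deg2rad(np.asarray(lats_deg)))
    w = w / w.mean()                      # normalise weights to mean 1
    sq_err = (np.asarray(forecast) - np.asarray(truth)) ** 2
    return float(np.sqrt((sq_err * w[:, None]).mean()))

# Toy 3x4 grid: a uniform error of 1 gives an RMSE of exactly 1.
lats = np.array([-60.0, 0.0, 60.0])
truth = np.zeros((3, 4))
print(round(lat_weighted_rmse(truth + 1.0, truth, lats), 6))  # → 1.0
```

A hyper-local benchmark would swap in region-specific fields (say, 10-metre flood depth) and metrics that matter locally, but the scoring machinery stays this simple.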

And if we just wind the clock back, the whole deep-learning revolution in AI began because of ImageNet, and that was, well, 12 years ago: they defined benchmark datasets and benchmark metrics that drove the revolution in AI. I think we can do the same thing at the hyper-local level. The second is to leverage the super-resolution techniques that AI has shown to be very powerful. We're already doing that right now in the Earth-2 program, taking 25-kilometer data and super-resolving it to one kilometer. We've also been doing this in weather and climate for decades with downscaling, the process of taking coarse-resolution simulations and producing high-resolution ones. So if we can stretch that even further to go down to these hyper-local scales, I'm fairly confident that the technologies needed in generative AI to get us to that scale either already exist or will be invented in the next two to three years.
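For scale, the 25 km to 1 km step is a 25x refinement in each direction. The trivial non-learned baseline simply replicates each coarse cell onto the fine grid; a generative super-resolver, as in the Earth-2 work, would instead synthesise physically plausible fine-scale structure. A sketch of the baseline, for illustration only:

```python
import numpy as np

def upsample_nearest(coarse, factor):
    """Trivial non-learned downscaling baseline: repeat each coarse grid
    cell factor x factor times onto the fine grid.

    A learned super-resolution model would add sharp fine-scale detail;
    this baseline only fixes the grid geometry.
    """
    return np.kron(np.asarray(coarse, dtype=float),
                   np.ones((factor, factor)))

coarse = np.array([[1.0, 2.0],
                   [3.0, 4.0]])          # 2x2 field at "25 km"
fine = upsample_nearest(coarse, 25)      # 50x50 field at "1 km"
print(fine.shape)  # → (50, 50)
```

Any learned model is judged by how much it improves on baselines like this under the benchmark metrics discussed above.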

So I’m hopeful that that will help us get there. Thank you.

Akshara Kaginalkar

I think that's important, and we look forward to it. That is where public-private partnership comes into the picture, because when we look at it specific to India, and within India specific to a region, we have very different climates all across, right from north to south, east to west. So I think having small models for a region can also be part of the future. And once we have this system in place, in terms of what is to be done, and we have the modeling in place, we need computational power for it, because all these models still need a lot of it. So that comes to investments, and that is where we would like to ask Mr.

Sandeep Singhal. Your investment portfolios include energy transition and mobility. When we speak of weather and climate, it is not just weather and climate; it is broadly everything, in terms of cloud, in terms of energy, in terms of health, all those things. So when you look at your portfolios, what advice would you give to startups to be able to successfully scale up in these individual domains as well as the integrated domains?

Sandeep Singhal

So in terms of scale-up, the first thing that is very clear, at least in the climate space, is that partnership with the government is critical, because all the discussion we are having on data and on deployment, the government is the one driving it. So for any of our portfolio companies working in this space, we end up involving the government institutions they would work with, and we build those relationships with ministries at the fund level also, so that we can introduce them to the various government programs. Beyond that, the other advice is that you have to start thinking about segmenting the market you are targeting.

So there is the general population, and that goes to the government. That funding, I think, as Dr. Shivkumar said, has to come in a public-private partnership, because collaboration, I think, is an important word you used. And that collaboration is both on the deployment side and on the funding side. So it is great to see what the government has done with ANRF, with RDI, and the capital that is becoming available. There is also philanthropic capital now becoming available in this space: there are philanthropists looking at programs at scale and saying, okay, if this program can scale, we will put money behind it. So that is one part. But the other segment is that you also have to think about where monetization is possible. There are enough segments where a core business is getting impacted because of weather or other events, and that core business is willing to pay. So you have to segregate the two in some ways: you are building for a public good, but the distillation of that allows you to build something for private good and charge for it.

Akshara Kaginalkar

Because now climate is linked very much to economics; climate and economics are one and the same thing, and it is not just the short term, we have to worry about the next 10, 20, 30 years, everything. So that is a very important point, and it leads to how we are preparing ourselves, which brings us to Dr. Praphul. A key challenge for India is balancing economic growth while protecting our natural ecosystem. Can you give an example of a real-world application where AI can enable this transition, as well as the creation of solutions which balance the two?

Praphul Chandra

I am going to pick up on something that Dr. Karthik said and Manish also mentioned, which is the intersection of weather and energy. India is transitioning from a fossil-fuel-based economy to a renewable-energy-powered economy, and renewable energy is dominated by solar, right? Now, if you look at the kind of models becoming available for hyper-local forecasting, they are also giving us much more predictive power in terms of how much energy one rooftop solar panel will generate, which is critical for managing the grid. India's grid needs to be digitized, and in fact we have a team from the university here doing a demo that combines digital public infrastructure from the Ministry of Power, the India Energy Stack, with AI models that use weather forecasting to forecast grid loads, to be able to trade energy between consumers and producers.
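To make the rooftop-solar point concrete, here is a back-of-envelope conversion from a forecast irradiance value to panel output. Every parameter name and default below is an illustrative assumption (a 10 m² array at 20% efficiency), not part of the India Energy Stack or any real grid interface:

```python
def pv_output_kw(irradiance_w_m2, panel_area_m2=10.0, efficiency=0.20,
                 cell_temp_c=25.0, temp_coeff_per_c=-0.004):
    """First-order rooftop PV output from a forecast irradiance value.

    Output derates linearly for cell temperatures above 25 C, a standard
    first-order approximation; all figures here are toy assumptions.
    """
    derate = 1.0 + temp_coeff_per_c * (cell_temp_c - 25.0)
    return irradiance_w_m2 * panel_area_m2 * efficiency * derate / 1000.0

# A hyper-local forecast of 800 W/m^2 with the cells at 35 C:
print(round(pv_output_kw(800.0, cell_temp_c=35.0), 3))  # → 1.536
```

The value of hyper-local forecasting is precisely in supplying a better irradiance (and cell-temperature) input per rooftop, which then aggregates into a grid-load forecast.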

Or to do demand flexibility. Now, demand flexibility is, again, something I see as critically important as we talk about sustainable AI. When you move to a data-center economy, with its huge consumption of energy, you need to be able to support dynamic demand flexibility using a combination of AI and public infrastructure. So I think the intersection of AI and energy is something that deserves quite a bit of attention, and I think we are there to address that.
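The core of demand flexibility is moving deferrable consumption away from expensive (scarce-supply) hours. A toy sketch under stated assumptions; a real scheme built on public infrastructure would co-optimise many consumers under network constraints, which this deliberately ignores:

```python
import numpy as np

def shift_flexible_load(load_kwh, price_per_kwh, flexible_kwh):
    """Toy demand-flexibility dispatch: move a deferrable block of energy
    from the most expensive forecast hour to the cheapest one.
    """
    load = np.array(load_kwh, dtype=float)
    hi = int(np.argmax(price_per_kwh))    # priciest hour: shed load here
    lo = int(np.argmin(price_per_kwh))    # cheapest hour: absorb it here
    moved = min(flexible_kwh, load[hi])   # cannot shift more than exists
    load[hi] -= moved
    load[lo] += moved
    return load.tolist()

prices = [4.0, 9.0, 3.0, 6.0]             # forecast tariff for four hours
print(shift_flexible_load([5, 8, 5, 6], prices, flexible_kwh=3))
# → [5.0, 5.0, 8.0, 6.0]
```

The AI contribution sits upstream of this step: weather-driven forecasts of both the price curve and the flexible load itself.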

Akshara Kaginalkar

Thank you. See, we have data in place, we have policies in place, we have science in place, and now money in place. So what is important is how you deliver these solutions to the stakeholders and end users, and that leads the question to Professor Dev Niyogi, because he has experience connecting the science to the governance to the actual stakeholders, and has been leading digital-twin and AI-driven modeling frameworks. What opportunities do you see? You have done it in Austin, but in India we are all aware of the different types of cities we have. So what opportunities do you see in building digital twins that support climate-extreme and disaster-management goals, the goals all of us have just now deliberated upon?

The challenges are there. The solutions are there. How do you link it?

Dev Niyogi

Right. I have two minutes, it looks like, before we end the session, and this is a course I teach over two semesters. But what I will say is that weather is the tragedy of the commons: everyone is affected by it, but no one can pay for it. In the same way, when we have to have institutional investments, the question comes up: how do you make this into a monetizable product? And this is where, as Director General Mahapatra mentioned this morning, we can create box models which are very simple, scalable and transferable, and we can create digital twins which are very decision-specific. We do not need to predict every variable at every scale for everything.

So if we define why we are creating models and what decision we are going to guide, based on that data-to-decision framework, we can make this into a very intelligent, scalable modeling system. And that, I think, is where the joy of bringing together AI, physics, and human decisions and dimensions comes into the picture. People do not need weather; they need weather that can help them make a decision. And this is where we need to move from simply creating the weather output to adding something that helps me make an intelligent decision, whatever that may be: it could be long-term hedging against something, or a short-term decision of whether I walk inside or in the shade.

And if we achieve that, I think we are going to make this into something which could transform the manner in which we predict: not for a variable of interest, but for a decision that we want to make. That is where I think digital twins come into the picture. I'll stop there.
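The data-to-decision framing he describes has a classic textbook formalisation, the cost-loss model: act whenever the forecast probability of the event exceeds the ratio of protection cost to avoided loss. A minimal sketch, offered as an illustration rather than any specific digital-twin implementation:

```python
def should_act(p_event, cost_protect, loss_if_hit):
    """Classic cost-loss rule for turning a probabilistic forecast into a
    decision: act (pre-position relief, delay a flight, walk in the shade)
    whenever the forecast probability exceeds the cost/loss ratio.
    """
    return p_event > cost_protect / loss_if_hit

# Protection costs 2 units, an unmitigated hit costs 10: act above p = 0.2.
print(should_act(0.35, cost_protect=2.0, loss_if_hit=10.0))  # → True
print(should_act(0.10, cost_protect=2.0, loss_if_hit=10.0))  # → False
```

Note that the forecast variable never appears directly; the decision-specific digital twin only needs a calibrated event probability and the user's own cost structure.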

Akshara Kaginalkar

So I think the digital twin can be one of our first steps; we can look into the complete AI spectrum, right from monitoring to processing to modeling to reaching the end users, and have a complete portfolio of AI applications. This leads us now to the end of the session, and we would like to open up for just half a minute. I'm very sorry for this format; disclaimer, it's not my doing. Yeah.

Audience

One word I didn't hear too much of was insurance. Climate risk typically reflects in insurance rates either becoming very high, or your house simply going uninsured, which is happening in Northern California and Florida. I'm not sure how predominant this is in India, but ultimately people have to stay where they are; it's difficult to move. So how do you marry the two?

Sandeep Singhal

Yeah, so I sort of referred to that in this notion of translating the work you're doing on the DPI side and bringing that technology into more monetizable products. And insurance actually ends up being one of the first monetizable products that comes out of this.

Akshara Kaginalkar

We can take maybe just one question, and we can always discuss outside, because this is a very good opportunity with the experts here. I have a few questions myself, but I'll ask you outside. I just want to quickly mention that at this AI Summit we have announced partnerships with NVIDIA, with Google and Qualcomm, and we are doing other things with the Gates Foundation. So there are many things happening, and I invite my colleagues here to work with us more and to focus on India as well, in addition to the world. Thank you, sir. We would like to thank everyone; it was great listening to all of you, and we look forward to more. And see, don't get me wrong: I was thinking, you know, there are eight people and I am the only one, and I was thinking it should be an equal number, and I was disturbed. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (15)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The moderator introduced a cross‑sectoral audience that included the NRF CEO and the NDMA secretary.”

The knowledge base lists Dr. Shiv Kumar as the NRF CEO and Manish Bharadwaj as a key figure from NDMA, confirming their presence on the panel [S2].

Confirmed (high)

“Dr. Amit Sheth is the founder of the Indian AI Research Organization (IRO) and promotes compact, custom models rather than large foundational models.”

Source S5 explicitly describes Dr. Amit Sheth as the founder of IRO and his advocacy for small, explainable neurosymbolic models, matching the report’s description.

Confirmed (high)

“IRO’s strategy is to develop very agile, small, specific models for hyper‑local extreme‑weather problems, deliberately avoiding large foundational models whose training data and computational baggage are opaque.”

Both S5 and S49 emphasize IRO’s focus on “small AI” – practical, affordable, locally-relevant models that avoid the opacity of large foundation models [S5] and [S49].

Confirmed (high)

“Manish Bhardwaj called for a trusted, low‑cost early‑warning system that blends AI with terrestrial sensors, satellite feeds and existing alert‑generation agencies.”

The knowledge base notes Manish Bhardwaj’s emphasis on reliable, trusted, and accessible early-warning systems for disaster preparedness, aligning with the report’s statement [S1].

Additional Context (medium)

“Dr. Amit Sheth’s approach emphasizes explainability, safety and alignment in AI models for specific problems.”

S5 adds that IRO’s models are designed with explainability, safety, and alignment as core qualities, providing additional nuance to the report’s description of the institute’s strategy.

External Sources (73)
S1
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Sandeep Singhal- Venture capitalist with investment portfolios in energy transition and mobility
S2
https://dig.watch/event/india-ai-impact-summit-2026/survival-tech-harnessing-ai-to-manage-global-climate-extremes — Again, sir doesn’t need any introduction. We have Dr. Shiv Kumar is NRF CEO and very, very great supporter of now AI for…
S3
Survival Tech Harnessing AI to Manage Global Climate Extremes — – Amit Sheth- Praphul Chandra- Dev Niyogi – M. Ravichandran- Dev Niyogi- Akshara Kaginalkar
S4
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Akshara Kaginalkar- Panel moderator/host This panel discussion at an AI Summit brought together leading experts from g…
S6
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Professor Seth- Referenced in transcript but appears to be referring to Amit Sheth
S7
Survival Tech Harnessing AI to Manage Global Climate Extremes — – Shivkumar Kalayanaraman- Sandeep Singhal
S8
https://dig.watch/event/india-ai-impact-summit-2026/survival-tech-harnessing-ai-to-manage-global-climate-extremes — Again, sir doesn’t need any introduction. We have Dr. Shiv Kumar is NRF CEO and very, very great supporter of now AI for…
S10
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Dr. Kartik- Mentioned in introduction as director of Center for Excellence for Data Sciences, distinguished scientist a…
S11
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S12
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S13
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S14
https://dig.watch/event/india-ai-impact-summit-2026/survival-tech-harnessing-ai-to-manage-global-climate-extremes — Again, sir doesn’t need any introduction. We have Dr. Shiv Kumar is NRF CEO and very, very great supporter of now AI for…
S15
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Manish Bhardwaj- Secretary of NDMA (National Disaster Management Authority), disaster management
S16
Survival Tech Harnessing AI to Manage Global Climate Extremes — -M. Ravichandran- Ministry of Earth Sciences Secretary, leading weather, climate and sustainability initiatives
S17
The Foundation of AI Democratizing Compute Data Infrastructure — Focus on developing domain-specific, smaller models that require less computational power and infrastructure
S18
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — India’s unique position—combining technical talent, diverse datasets, a vibrant startup ecosystem, and supportive policy…
S19
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Thank you, Prime Minister, for having us. As my colleagues have said, India will no doubt be a powerhouse in AI in many …
S20
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategicplanfor integrating AI, specifically machine learning (ML), with traditional…
S21
India harnesses AI for advanced weather forecasting amid climate challenges — India is leveraging AI to enhance its weather forecasting capabilities in response to the escalating challenges posed by…
S22
Regulating Open Data_ Principles Challenges and Opportunities — I think we now need to look at what data sets are needed for research, which could be academia and research students and…
S23
Connecting open code with policymakers to development | IGF 2023 WS #500 — Helani Galpaya:And I agree with the minister. Some of the solutions are technical. We’ve certainly worked with different…
S24
The AI revolutionizing weather forecasting — The European Centre for Medium-Range Weather Forecasts (ECMWF)has teamed up with Huawei to develop an AI-based forecasti…
S25
AI for agriculture Scaling Intelegence for food and climate resiliance — A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji. Mr. Jonas Jett, Srima…
S26
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — In addition to supporting climate action, AI is expected to play a significant role in digitally managed energy systems….
S27
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Waqas Hassan:I’d like to add one thing to say, we would just start, and I said, she’s spoken about global cooperation as…
S28
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Focus on smaller, task-specific models while not neglecting progress made with large language models
S29
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Shri Sushil Pal:Thank you, Professor Jalasi, and thank you, UNESCO, for inviting me here. I must commend UNESCO on the r…
S30
Advancing Scientific AI with Safety Ethics and Responsibility — -Global South Perspectives and Adaptation: A significant focus was placed on how emerging scientific powers can shape AI…
S31
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development The speaker argues that public-private partnerships are not optional but …
S32
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of reso…
S33
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Pedro Ivo Ferraz da Silva: Yeah, thank you very much, José Renato, Alexandra, and also other colleagues in the panel. It…
S34
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — It is argued that understanding the environmental consequences can catalyse more efficient methods for reducing and mana…
S35
Building Climate-Resilient Systems with AI — “we are quite privileged to work with the Grail team and, of course, global experts to start to now quantify, both in te…
S36
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — In conclusion, sandboxes are valuable tools for testing and implementing regulatory policies. The Brazil case highlights…
S37
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S38
Survival Tech Harnessing AI to Manage Global Climate Extremes — “One thing I would like to see more used in practice is transfer learning which of course some regions of the world are …
S39
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — Marie Granis suggests that instead of building a pan-African LLM, each country should develop small models for their spe…
S40
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — When facing limited datasets for minor Indian languages, India launched crowd-sourcing initiatives that allowed people t…
S41
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategicplanfor integrating AI, specifically machine learning (ML), with traditional…
S42
World Meteorological Organization — WMO recognises the potential power of Artificial Intelligence to revolutionise weather forecasts and early warnings. WMO…
S43
AI: Lifting All Boats / DAVOS 2025 — Dowidar mentioned ongoing work with UNDP on AI-powered early warning systems. Further research on implementation and sca…
S44
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — The conversation touched on artificial intelligence, with a call for proactively shaping AI policies to reflect regional…
S45
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Public-private partnerships and global cooperation essential for sharing applications, datasets, and expertise
S46
The Foundation of AI Democratizing Compute Data Infrastructure — Focus on developing domain-specific, smaller models that require less computational power and infrastructure
S47
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Balance between large foundational models and small specialized models Development | Infrastructure | Economic Ioanna …
S48
Part 7: ‘Converging realities: Embedding governance through digital twins’ — Digital twin governance begins at the intersection of technical design and responsibility. To function effectively withi…
S49
How AI Drives Innovation and Economic Growth — Central to Zutt’s analysis was the concept of “small AI”—practical, affordable, locally relevant applications that addre…
S50
DIGITAL DIVIDENDS — As digital development proceeds from emerging to transitioning and then to transforming, policy reforms beco…
S51
Survival Tech Harnessing AI to Manage Global Climate Extremes — “And so IRO currently is developing original work on building very agile, small, specific models”[1]. “So original resea…
S52
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Focus on smaller, task-specific models while not neglecting progress made with large language models
S53
OPENING SESSION | IGF 2023 — Ema Arisa:Thank you, Ms. Wan. I would like to move on to the next question. So the guiding principles and code of conduc…
S54
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Shri Sushil Pal:Thank you, Professor Jalasi, and thank you, UNESCO, for inviting me here. I must commend UNESCO on the r…
S55
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — The aim is for GPI to have an independent identity, similar to that of the World Health Organization (WHO) in the field …
S56
AI and Data Driving India’s Energy Transformation for Climate Solutions — If wrong data will be fed to the tool, the wrong decisions will be indicated. So as it has been told by my colleague, we…
S57
Agenda item 6 — Establishing grant programs and public-private partnerships as potential funding mechanisms
S58
https://dig.watch/event/india-ai-impact-summit-2026/survival-tech-harnessing-ai-to-manage-global-climate-extremes — It has to be collaborative proposals. In some of our programs, we have put out open IP licensing so that when you have a…
S59
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development Innovation Ecosystems and Practical Implementation The speaker argues th…
S60
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Pedro Ivo Ferraz da Silva: Yeah, thank you very much, José Renato, Alexandra, and also other colleagues in the panel. It…
S61
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — Antonia Gawel:I mean, I think very much a focus on decarbonization of the power sector is a critical input and a signifi…
S62
Building Climate-Resilient Systems with AI — Artificial intelligence | Environmental impacts
S63
HIGH LEVEL LEADERS SESSION IV — The deployment of emerging technologies, such as artificial intelligence, is seen as promising in addressing climate cha…
S64
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — Building trust is highlighted as a fundamental requirement for data governance in multilateral environments. Trust can b…
S65
Can we test for trust? The verification challenge in AI — Industry Standards and Regulatory Approaches Legal and regulatory | Infrastructure Trager identifies the need for a st…
S66
The Final Frontier: Emerging Tech and Space Economy for Sustainable Earth — Moderator encourages audience to introduce themselves and ask questions to any panelist Moderator invites audience ques…
S67
GUIDE ON THE APPLICATION OF NEW TECHNOLOGY AND RESEARCH TO PUBLIC WEATHER SERVICES — – Long-range forecasting (from 30 days up to two years): – -monthly outlook: description of averaged weather parameters …
S68
Manual on the Global Data-processing and Forecasting System — – ( c ) Areas of showers Large shower symbols distributed over the area with the symbol for rain, snow or hail added as…
S69
www.ssoar.info — (1) The tendency in global health to focus pri -marily on controlling and treating specific diseas -es (in developing …
S70
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Geralyn Miller:A couple of things, I think, from the pandemic, and that’s a really great question, because as a society,…
S71
[Tentative Translation] — 202 Currently under the consideration of the Integrated Innovation Strategy Promotion Council as of March 2021. fundin…
S72
Scaling AI for Billions_ Building Digital Public Infrastructure — A critical concern emerged around the fragility of existing digital infrastructure and organisations’ readiness for AI i…
S73
Quantum hype and predictions for the future of technology — He illustrated his uncertainty using parallels with aviation:
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Amit Sheth
2 arguments · 141 words per minute · 394 words · 166 seconds
Argument 1
Focus on building small, agile, domain‑specific models rather than large foundational models (Amit Sheth)
EXPLANATION
IRO intends to create original, lightweight AI models that are tailored to specific tasks such as hyper‑local extreme‑weather prediction. The strategy deliberately avoids relying on large foundation models because of their opaque training data and computational baggage.
EVIDENCE
He explained that IRO is developing original work on building very agile, small, specific models for hyper-local extreme weather issues, and that they will not be built on top of large language or foundational models which come with a lot of baggage and unknown training data [26-31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
IRO’s focus on lightweight, domain-specific models is described in the discussion overview and aligns with calls for smaller models in AI democratization literature [S1][S17][S18].
MAJOR DISCUSSION POINT
National AI research vision for climate
AGREED WITH
Praphul Chandra, Dev Niyogi
DISAGREED WITH
Praphul Chandra
Argument 2
Position India to lead in AI‑driven climate solutions and support startups in health, pharma, and sustainability (Amit Sheth)
EXPLANATION
The vision presented to the Prime Minister highlighted AI as a lever for large economic and social impact, especially by empowering startups in health, pharma, and sustainability sectors. Partnerships with industry bodies are meant to translate research into global products originating from India.
EVIDENCE
He recounted presenting to the PM a broad vision that includes a core foundational AI focus for enterprises, support for the startup ecosystem, and specific partnerships, such as with the Indian Pharma Alliance, which covers 80% of India’s pharma output, and with health partners, aiming to create impact in sustainability, health and pharma [24-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The vision to position India as a leader in AI-driven climate and startup support is highlighted in the opening remarks and in a plenary noting India’s AI ambitions [S1][S19][S18].
MAJOR DISCUSSION POINT
National AI research vision for climate
M
M. Ravichandran
4 arguments · 169 words per minute · 744 words · 263 seconds
Argument 1
Fuse physics‑based numerical models with AI to capture both spatial and temporal dynamics for hyper‑local forecasts (M. Ravichandran)
EXPLANATION
Ravichandran argues that accurate local weather prediction requires a hybrid approach that combines the spatial strength of physics‑based numerical models with the temporal pattern‑recognition ability of AI. This integration is essential for forecasting high‑impact events like cloudbursts at fine scales.
EVIDENCE
He used the analogy of needing to see both the elephant (large-scale) and the ant (small-scale) and stated that while physics-based models handle spatial aspects, AI is better for time-series, so both must be fused to understand local weather and predict events such as cloudbursts [47-61].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ravichandran’s call to fuse physics-based models with AI mirrors recommendations for an optimal AI-physics blend and the need to integrate both spatial and temporal aspects [S1][S2][S20].
MAJOR DISCUSSION POINT
AI‑enhanced weather forecasting and modeling
AGREED WITH
Manish Bhardwaj
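The fusion Ravichandran describes can be sketched in miniature: a physics-based model supplies the large-scale ("elephant") signal, and a data-driven corrector learned from past local errors adds the fine ("ant") adjustment. The following is a toy illustration only; `physics_forecast`, the constant-bias corrector, and all numbers are invented and far simpler than any operational hybrid.

```python
def physics_forecast(hour):
    """Stand-in for a coarse physics-based numerical forecast (deg C)."""
    return 30.0 + (2.0 if hour % 24 >= 12 else 0.0)  # warmer afternoons

def fit_bias_corrector(past_forecasts, past_observations):
    """Learn a constant local bias from historical forecast errors."""
    errors = [obs - fc for fc, obs in zip(past_forecasts, past_observations)]
    return sum(errors) / len(errors)

def hybrid_forecast(hour, bias):
    """Physics output plus the learned local correction."""
    return physics_forecast(hour) + bias

# Toy history: this site runs consistently 1.5 deg C warmer than the model.
hours = list(range(6))
fcs = [physics_forecast(h) for h in hours]
obs = [f + 1.5 for f in fcs]
bias = fit_bias_corrector(fcs, obs)
print(hybrid_forecast(0, bias))  # -> 31.5
```

In practice the corrector would be a learned time-series model rather than a constant, but the division of labour is the same: physics for spatial structure, data for local temporal correction.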
Argument 2
Use AI to improve prediction of extreme events such as cloudbursts, enabling timely evacuations and response (M. Ravichandran)
EXPLANATION
He points out that current forecasting systems cannot reliably predict cloudbursts, which cascade into landslides and floods. AI is being explored as a tool to fill this gap and support early‑warning and evacuation decisions.
EVIDENCE
He noted that they do not know how to predict cloudbursts and are investigating whether AI can help, and later elaborated on multi-hazard scenarios where cloudbursts trigger landslides and flash floods, emphasizing AI’s role in improving early-warning signals for targeted populations [65-68][161-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s efforts to use AI for extreme-event forecasting, such as floods and cloudbursts, are reported in recent climate-forecasting initiatives [S21][S1].
MAJOR DISCUSSION POINT
Early warning and disaster management
AGREED WITH
Manish Bhardwaj
Argument 3
Open data policies and collaborative consortia are needed to turn research into deployable systems (M. Ravichandran)
EXPLANATION
Ravichandran stresses that India possesses vast historical weather data, but its full potential can be realized only if the data are openly shared and multidisciplinary teams are engaged. Open data would enable diverse researchers to reduce model errors, improve initial conditions, and build trustworthy forecasts.
EVIDENCE
He highlighted the huge volumes of legacy IMD data, the need to utilize it, the importance of opening up the data so many minds can work on it, and the necessity of validation, verification, and trust in AI-enabled forecast systems [126-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of open data and collaborative consortia is discussed in policy guidance on open data principles and in calls for shared datasets for AI research [S22][S23][S1].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
AGREED WITH
Karthik Kashinath
Argument 4
AI can be used for downscaling coarse‑resolution weather models to hyper‑local (kilometer‑scale) forecasts, improving prediction accuracy for localized events.
EXPLANATION
Ravichandran explains that while large‑scale numerical models provide broad forecasts, AI techniques can refine these outputs to much finer spatial resolutions, enabling accurate local weather predictions such as one‑kilometer forecasts.
EVIDENCE
He notes that AI can downscale better, mentioning the need for one-kilometer resolution forecasts and that AI can achieve this downscaling, thereby improving localized weather prediction [140-142].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-based downscaling approaches like Pangu-Weather demonstrate the feasibility of converting coarse forecasts to kilometer-scale predictions [S24][S20][S1].
MAJOR DISCUSSION POINT
AI‑enhanced weather forecasting and modeling
S
Shivkumar Kalayanaraman
4 arguments · 178 words per minute · 924 words · 311 seconds
Argument 1
Deploy multimodal AI (time‑series, vision, multispectral cameras) for ultra‑short‑term weather prediction from low‑cost sensors (Shivkumar Kalayanaraman)
EXPLANATION
Kalayanaraman envisions using inexpensive cameras and multispectral sensors to capture sky patterns and generate forecasts a few hours ahead. By fusing insights across modalities rather than raw data, AI can deliver rapid, localized predictions.
EVIDENCE
He described pointing a camera at the sky, using IR or multispectral cameras to forecast one to four hours ahead, noting the dramatic cost drop of sensors and the opportunity to fuse insights across modes instead of complex data fusion [76-84].
MAJOR DISCUSSION POINT
AI‑enhanced weather forecasting and modeling
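The "fuse insights across modes" idea above can be sketched as late fusion: each modality's model emits its own prediction, and a simple weighted vote combines the insights rather than the raw sensor streams. All modality names, probabilities, and weights below are invented placeholders.

```python
def fuse_insights(predictions, weights):
    """Late fusion: weighted average of per-modality probabilities."""
    total = sum(weights.values())
    return sum(predictions[m] * w for m, w in weights.items()) / total

predictions = {          # per-modality rain probability for the next hour
    "sky_camera": 0.80,  # visible-light cloud-pattern model
    "infrared": 0.60,    # IR / multispectral model
    "timeseries": 0.70,  # station time-series model
}
weights = {"sky_camera": 2.0, "infrared": 1.0, "timeseries": 1.0}
p = fuse_insights(predictions, weights)
print(round(p, 3))  # -> 0.725
```

Fusing at the prediction level keeps each low-cost sensor pipeline independent, which is the cost and simplicity argument Kalayanaraman makes against raw data fusion.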
Argument 2
NRF’s targeted funding programs accelerate AI solutions for weather, climate, and disaster risk (Shivkumar Kalayanaraman)
EXPLANATION
Kalayanaraman outlines how the National Research Foundation (NRF) provides both grant and capital funding, mission‑mode programs, and challenge‑driven initiatives to fast‑track AI research in weather, climate and disaster risk. These mechanisms aim to move from basic research to demonstrable, impact‑oriented solutions.
EVIDENCE
He explained that NRF is a statutory body offering grant funding, a one-lakh-crore RDI capital fund, the AI for Science & Engineering program (including a track for Weather and Climate), the Leapfrog Demonstrators for Societal Innovation, a hackathon with IBM and IIT Delhi, and collaborations with MoES and other agencies [193-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
NRF’s funding mechanisms, including grant programmes and the one-lakh-crore RDI fund, are outlined as accelerators for AI weather solutions [S2][S1].
MAJOR DISCUSSION POINT
Early warning and disaster management
AGREED WITH
Akshara Kaginalkar, Sandeep Singhal
Argument 3
NRF’s AI for Science & Engineering program and Leapfrog Demonstrators provide mission‑mode funding and challenge‑driven collaboration (Shivkumar Kalayanaraman)
EXPLANATION
He emphasizes two flagship NRF initiatives: the AI for Science & Engineering program, which funds weather and climate AI research, and the Leapfrog Demonstrators, which focus on rapid, high‑impact societal solutions. Both are designed to encourage collaborative, outcome‑focused projects.
EVIDENCE
He detailed the AI for Science & Engineering program’s weather and climate track, the upcoming Leapfrog Demonstrators for Societal Innovation, and the emphasis on working backwards from impact while still supporting curiosity-driven research [203-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI for Science & Engineering programme and Leapfrog Demonstrators are described as mission-mode, challenge-driven funding streams in NRF’s strategy documents [S2].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
Argument 4
The AI for Science & Engineering hackathon, organized with IBM and IIT Delhi, provides curated datasets and a collaborative platform to accelerate AI research for weather and climate.
EXPLANATION
Kalayanaraman highlights that the hackathon releases specific weather and climate datasets to participants, fostering rapid prototyping and community engagement, which speeds up the development of AI solutions for societal challenges.
EVIDENCE
He mentions a hackathon conducted in partnership with IBM and IIT Delhi that provides data sets for weather and climate AI research and encourages participants to develop solutions in this domain [218-222].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI for Science & Engineering hackathon, co-organized with IBM and IIT Delhi, provides curated datasets for rapid prototyping [S2].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
K
Karthik Kashinath
1 argument · 166 words per minute · 436 words · 157 seconds
Argument 1
Develop benchmark datasets and super‑resolution techniques to achieve operational hyper‑local models (Karthik Kashinath)
EXPLANATION
Kashinath proposes creating benchmark datasets and metrics analogous to ImageNet to drive AI progress at hyper‑local scales. He also suggests leveraging super‑resolution methods to downscale coarse climate data to kilometer‑level resolution, enabling operational local forecasts.
EVIDENCE
He cited the ERIF dataset from ECMWF and WeatherBench as examples of benchmarks that spurred global-scale AI models, argued for similar hyper-local benchmarks, and described using super-resolution (e.g., the Earth2 program) to transform 25 km data to 1 km resolution, expecting generative AI to fill the remaining gaps [261-274].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Benchmark datasets and super-resolution methods are central to recent AI weather projects such as Pangu-Weather, supporting operational hyper-local modelling [S24][S20].
MAJOR DISCUSSION POINT
AI‑enhanced weather forecasting and modeling
AGREED WITH
M. Ravichandran
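The downscaling step described above can be illustrated by its most naive baseline: linear interpolation from coarse cells to a finer grid. Learned super-resolution models aim to beat this baseline by adding physically plausible fine-scale structure. A toy one-dimensional sketch; the grid values and the 5x factor are invented.

```python
def upsample_1d(coarse, factor):
    """Linearly interpolate a 1-D row of coarse grid cells to a finer grid."""
    fine = []
    for i in range(len(coarse) - 1):
        for k in range(factor):
            t = k / factor
            fine.append(coarse[i] * (1 - t) + coarse[i + 1] * t)
    fine.append(coarse[-1])  # keep the last coarse cell's value
    return fine

coarse_row = [25.0, 27.0, 26.0]            # e.g. three 25 km cells (deg C)
fine_row = upsample_1d(coarse_row, 5)      # pseudo 5x finer resolution
print(len(fine_row), fine_row[5])  # -> 11 27.0
```

Interpolation can only smooth between coarse values; the promise of super-resolution and generative approaches is to reconstruct sharp local extremes that interpolation necessarily misses.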
D
Dev Niyogi
3 arguments · 193 words per minute · 448 words · 139 seconds
Argument 1
Incorporate human and societal dimensions into models to make forecasts decision‑oriented (Dev Niyogi)
EXPLANATION
Niyogi stresses that purely physical models miss the human and societal factors that affect outcomes. AI can bridge this gap by embedding the ‘Jugaad’ mindset and societal constraints, making forecasts more accessible and actionable.
EVIDENCE
He introduced the Indian concept of ‘Jugaad’, explained that while equations capture natural laws, the human element is often missing, and argued that AI brings societal dimensions into predictive models, making them more usable [103-108].
MAJOR DISCUSSION POINT
AI‑enhanced weather forecasting and modeling
Argument 2
Build decision‑specific digital twins that turn raw weather data into actionable insights for users (Dev Niyogi)
EXPLANATION
Niyogi proposes creating digital twins that are tailored to specific decisions rather than generic weather outputs. By defining the decision context, these twins can provide scalable, transferable models that directly support user actions.
EVIDENCE
He described ‘box models’ that are simple, scalable, and transferable, and explained that decision-specific digital twins focus on why a model is created and what decision it informs, turning raw data into intelligent, actionable insights for decisions ranging from long-term hedging to short-term shade choices [313-330].
MAJOR DISCUSSION POINT
Digital twins and decision‑oriented AI
Argument 3
Develop simple, scalable “box models” that can be transferred across cities for disaster and climate decision support (Dev Niyogi)
EXPLANATION
He suggests building modular ‘box models’ that can be quickly adapted to different urban contexts, providing a common decision‑support framework for disaster and climate management. Such models avoid the need to predict every variable at every scale.
EVIDENCE
He referenced the creation of simple, scalable box models and decision-specific digital twins, emphasizing that defining the decision-to-data framework allows for transferable solutions across cities [318-322].
MAJOR DISCUSSION POINT
Digital twins and decision‑oriented AI
AGREED WITH
Amit Sheth, Praphul Chandra
M
Manish Bhardwaj
2 arguments · 125 words per minute · 804 words · 384 seconds
Argument 1
Create hybrid AI‑sensor systems that deliver trusted, low‑cost early warnings to the public (Manish Bhardwaj)
EXPLANATION
Bhardwaj advocates for an early‑warning architecture that combines AI analytics with physical sensor networks and satellite data to provide reliable alerts at minimal cost. The system must be hybrid, not purely AI, to ensure trust and resilience.
EVIDENCE
He emphasized the need for a trusted early-warning asset for the public, describing a hybrid model that integrates AI with sensor fabrics, satellite data, and alerts from various agencies, positioning AI as a supporting role rather than the sole solution [70-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hybrid AI-sensor early-warning architectures are part of India’s AI-enhanced disaster management pilots, emphasizing trusted low-cost alerts [S21][S1].
MAJOR DISCUSSION POINT
Early warning and disaster management
AGREED WITH
M. Ravichandran
Argument 2
AI can increase the granularity of early‑warning signals by fusing terrestrial, satellite, and sensor data, even when sensor coverage is limited.
EXPLANATION
Bhardwaj argues that integrating multiple data streams through AI enables more precise, localized warnings for hazards such as flash floods and glacial‑lake outburst floods, supporting targeted evacuations and response actions.
EVIDENCE
He describes using data from alert-generating agencies, terrestrial sensors, satellite observations, and other sensory inputs to improve the granularity of early-warning signals despite limitations in satellite coverage, thereby enhancing targeted early warnings [175-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fusing satellite, sensor and agency data to improve warning granularity is highlighted in national AI-for-weather initiatives [S21].
MAJOR DISCUSSION POINT
Early warning and disaster management
S
Sandeep Singhal
2 arguments · 171 words per minute · 580 words · 203 seconds
Argument 1
Public‑private partnerships, clear market segmentation, and monetization pathways (e.g., insurance, enterprise services) are critical for scaling AI climate solutions (Sandeep Singhal)
EXPLANATION
Singhal highlights that collaboration with government agencies is essential for data access and deployment, while startups must segment their markets (public vs. private) and identify revenue streams such as insurance or enterprise services. He also notes the growing role of philanthropic capital.
EVIDENCE
He stated that partnership with the government is critical for data and deployment, advised startups to segment markets (general public vs. government), mentioned public-private partnership funding, philanthropic capital, and identified insurance as an early monetizable product [280-288].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public-private partnership models and market-segmentation guidance are emphasized in NRF’s RDI fund and in broader discussions of India’s AI strategy [S2][S19].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
AGREED WITH
Akshara Kaginalkar, Shivkumar Kalayanaraman
Argument 2
Insurance is a natural first monetizable product for AI‑driven climate risk assessments (Sandeep Singhal)
EXPLANATION
He points out that climate‑risk AI outputs can be directly packaged into insurance products, providing a clear commercial route for early‑stage AI solutions in the climate domain.
EVIDENCE
He explicitly said that insurance ends up being one of the first monetizable products that emerges from AI-driven climate risk work [341-342].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Insurance as an early monetisation route for climate-risk AI is noted in reports on AI-driven risk-assessment products [S21].
MAJOR DISCUSSION POINT
Climate risk insurance and monetization
P
Praphul Chandra
5 arguments · 156 words per minute · 373 words · 142 seconds
Argument 1
Achieve domain‑specific performance by fine‑tuning large foundation models with very small datasets (Praphul Chandra)
EXPLANATION
Chandra questions how little data is needed to fine‑tune large foundation models for specific climate applications, proposing that breakthroughs in small‑data fine‑tuning could unlock cross‑domain utility.
EVIDENCE
He described the concept of ‘small data fine tuning’, asking how small the dataset can be for effective fine-tuning of large foundation models and noting its potential across multiple domains [104-108].
MAJOR DISCUSSION POINT
Data challenges, small‑data fine‑tuning, and transfer learning
AGREED WITH
Amit Sheth, Dev Niyogi
DISAGREED WITH
Amit Sheth
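Chandra's "small-data fine-tuning" question can be sketched in its simplest form: keep a pretrained feature extractor frozen and fit only a tiny head on a handful of labelled examples. Everything below is an invented toy; `frozen_backbone` stands in for a pretrained model, and four data points play the role of the small fine-tuning set.

```python
def frozen_backbone(x):
    """Stand-in for a frozen pretrained model's feature output."""
    return [x, x * x]  # two fixed features

def fit_head(samples, labels, lr=0.01, steps=5000):
    """Fit a small linear head by gradient descent on very few examples."""
    w = [0.0, 0.0]
    for _ in range(steps):
        for x, y in zip(samples, labels):
            f = frozen_backbone(x)
            pred = w[0] * f[0] + w[1] * f[1]
            err = pred - y
            w = [w[0] - lr * err * f[0], w[1] - lr * err * f[1]]
    return w

# Only four labelled points ("small data"); the target is y = 3x + x^2.
xs = [0.5, 1.0, 1.5, 2.0]
ys = [3 * x + x * x for x in xs]
w = fit_head(xs, ys)
pred = sum(wi * fi for wi, fi in zip(w, frozen_backbone(1.2)))
print(round(pred, 2))
```

The head recovers the target function from four examples because the frozen features already span it, which is exactly the bet behind small-data fine-tuning: the expensive general knowledge lives in the backbone, and the domain needs only a thin adapter.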
Argument 2
Leverage transfer learning to share knowledge between data‑rich and data‑sparse regions, reducing data requirements (Praphul Chandra)
EXPLANATION
He suggests using transfer learning to apply models trained in data‑rich regions to data‑sparse areas, acknowledging that while physics is universal, hyper‑local uniqueness requires careful constraint handling.
EVIDENCE
He noted that some regions are data-rich while others are data-sparse, the physics of weather is the same globally, but hyper-local uniqueness exists, and efficient transfer learning could be impactful [110-114].
MAJOR DISCUSSION POINT
Data challenges, small‑data fine‑tuning, and transfer learning
Argument 3
Hyper‑local solar generation forecasts enable precise grid load balancing and demand‑flexibility mechanisms (Praphul Chandra)
EXPLANATION
Chandra explains that AI models capable of forecasting rooftop solar output at a hyper‑local level can help grid operators balance loads and implement demand‑flexibility, crucial for a renewable‑energy‑driven grid.
EVIDENCE
He highlighted that hyper-local forecasts can predict how much energy a rooftop solar panel will generate, which is critical for managing the grid, and linked this to demand-flexibility and data-center energy consumption [292-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled hyper-local solar forecasts for grid balancing are highlighted in discussions of AI for energy management and digital marketplaces [S26].
MAJOR DISCUSSION POINT
AI for energy and grid management
Argument 4
Combine AI weather forecasts with the India Energy Stack to create digital marketplaces for energy trading (Praphul Chandra)
EXPLANATION
He describes integrating AI‑driven weather forecasts with the India Energy Stack—a digital public infrastructure—to enable a marketplace where consumers and producers can trade energy based on predictive load information.
EVIDENCE
He mentioned a team combining the India Energy Stack with AI weather models to forecast grid loads, facilitating energy trading between consumers and producers and supporting demand flexibility [294-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Integration of AI weather forecasts with the India Energy Stack to enable energy trading platforms is described in AI-energy system sessions [S26].
MAJOR DISCUSSION POINT
AI for energy and grid management
Argument 5
AI‑driven demand‑flexibility solutions can reduce data‑center energy consumption by aligning workloads with weather‑informed grid load forecasts.
EXPLANATION
Chandra points out that using AI weather forecasts together with the India Energy Stack enables dynamic adjustment of data‑center demand, supporting sustainable AI operations and improving overall grid stability.
EVIDENCE
He notes that demand flexibility for data centers can be supported by AI and public infrastructure, linking weather forecasts to dynamic energy management to reduce consumption [296-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Demand-flexibility for data-centers using weather-informed grid forecasts is mentioned in AI-energy management contexts [S26].
MAJOR DISCUSSION POINT
AI for energy and grid management
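The demand-flexibility idea above can be sketched as a scheduling problem: given an hourly forecast of net grid load (demand minus solar), place a deferrable data-center batch job into the lowest-load hours. The forecast values and the greedy selection rule below are invented for illustration.

```python
def schedule_flexible_load(net_load_forecast, hours_needed):
    """Pick the hours with the lowest forecast net load for a deferrable job."""
    ranked = sorted(range(len(net_load_forecast)),
                    key=lambda h: net_load_forecast[h])
    return sorted(ranked[:hours_needed])

# Forecast net load (e.g. in GW) for the next 8 hours; the midday dip
# reflects high rooftop-solar generation predicted by the weather model.
forecast = [5.0, 4.2, 3.1, 2.8, 3.0, 4.5, 5.5, 6.0]
print(schedule_flexible_load(forecast, 3))  # -> [2, 3, 4]
```

A real system would couple this to the marketplace signals Chandra describes, but the core loop is the same: weather forecast in, shiftable load placed where the grid has headroom.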
A
Audience
1 argument · 154 words per minute · 75 words · 29 seconds
Argument 1
Climate‑risk AI outputs can be packaged into insurance products, providing a viable commercial avenue (Audience)
EXPLANATION
An audience member highlighted that climate risk directly influences insurance pricing and availability, and suggested that AI‑derived risk assessments could be integrated into insurance offerings.
EVIDENCE
He noted that climate risk affects insurance rates and leads to situations where houses become uninsured, asking how AI risk assessments could be married to insurance products [337-340].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Audience comment aligns with documented use cases of AI in climate-risk insurance offerings [S21].
MAJOR DISCUSSION POINT
Climate risk insurance and monetization
A
Akshara Kaginalkar
3 arguments · 146 words per minute · 2193 words · 897 seconds
Argument 1
Public‑private partnership models with open IP licensing and consortium‑based approaches are essential to translate AI research into operational climate and disaster services.
EXPLANATION
Kaginalkar stresses that scaling AI solutions requires collaborative proposals, hub‑and‑spoke consortia, and open IP licensing so startups can partner with academia. She highlights multiple mechanisms such as translational research centres and the large RDI fund that mandate industry‑academic collaboration to move prototypes to market.
EVIDENCE
She describes encouraging consortium bids, hub-and-spoke setups, open IP licensing for startups to partner with academia, translational research centres mandating industry partnership, and the one-lakh-crore RDI fund that pushes industry-academia collaboration for scaling AI solutions [236-251].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
PPP models with open IP licensing and consortium structures are advocated in NRF’s RDI fund and collaborative policy frameworks [S2][S18].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
Argument 2
Strategic partnerships with global technology leaders (NVIDIA, Google, Qualcomm) and foundations (Gates) are critical to accelerate AI‑driven climate and disaster solutions in India.
EXPLANATION
Kaginalkar notes that the AI Summit has secured collaborations with major tech companies and philanthropic foundations, indicating that leveraging their expertise, platforms, and resources will fast‑track the development and deployment of AI applications for weather, energy, and disaster management.
EVIDENCE
She mentions announced partnerships with NVIDIA, Google, Qualcomm, and the Gates Foundation as part of the AI Summit initiatives, underscoring their role in supporting AI for climate and disaster projects [345-346].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Strategic collaborations with NVIDIA, Google, Qualcomm and the Gates Foundation are announced as part of AI Summit partnerships supporting climate solutions [S2][S1].
MAJOR DISCUSSION POINT
Funding, public‑private partnership, and translation to products
Argument 3
Digital twins should be integrated across the entire AI pipeline—from monitoring to user‑facing applications—to deliver end‑to‑end climate services.
EXPLANATION
Kaginalkar proposes that digital twins can link data acquisition, processing, modeling, and delivery, creating a comprehensive portfolio of AI applications that support climate extremes and disaster management at the user level.
EVIDENCE
She states that digital twins can cover the whole AI spectrum from monitoring to reaching end users, suggesting a holistic approach to climate services [331-333].
MAJOR DISCUSSION POINT
Digital twins and decision‑oriented AI
Agreements
Agreement Points
Hybrid AI‑physics and sensor approaches are needed for accurate hyper‑local weather forecasts and early warnings.
Speakers: M. Ravichandran, Manish Bhardwaj
Fuse physics‑based numerical models with AI to capture both spatial and temporal dynamics for hyper‑local forecasts (M. Ravichandran)
Use AI to improve prediction of extreme events such as cloudbursts, enabling timely evacuations and response (M. Ravichandran)
Create hybrid AI‑sensor systems that deliver trusted, low‑cost early warnings to the public (Manish Bhardwaj)
AI can increase the granularity of early‑warning signals by fusing terrestrial, satellite, and sensor data, even when sensor coverage is limited (Manish Bhardwaj)
Both speakers stress that AI alone is insufficient; it must be combined with physics-based numerical models or physical sensor networks to capture spatial patterns and temporal rhythms, thereby enabling reliable, fine-scale forecasts of extreme events such as cloudbursts and delivering trusted early warnings [47-61][65-68][70-75][175-180].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the UK Met Office’s strategic plan to blend AI with physics-based forecasting and the World Meteorological Organization’s call for AI-enhanced early warning systems [S41], [S42].
Open data, benchmark datasets and collaborative consortia are essential to develop operational hyper‑local AI weather models.
Speakers: M. Ravichandran, Karthik Kashinath
Open data policies and collaborative consortia are needed to turn research into deployable systems (M. Ravichandran)
Develop benchmark datasets and super‑resolution techniques to achieve operational hyper‑local models (Karthik Kashinath)
Ravichandran highlights India’s massive legacy weather archives and calls for open access so that many researchers can reduce model error and build trust, while Karthik proposes creating benchmark datasets and metrics, akin to ImageNet, to drive progress at kilometer-scale resolution; both underscore the centrality of shared data for AI advancement [126-144][261-274].
POLICY CONTEXT (KNOWLEDGE BASE)
Crowd-sourcing data initiatives (e.g., India’s language data effort) illustrate the push for open, community-generated datasets, while international forums stress collaborative consortia for sharing weather AI resources [S40], [S45].
Public‑private partnerships and dedicated funding mechanisms are critical to translate AI research into scalable climate and disaster solutions.
Speakers: Akshara Kaginalkar, Sandeep Singhal, Shivkumar Kalayanaraman
Public‑private partnership models with open IP licensing and consortium‑based approaches are essential to translate AI research into operational climate and disaster services (Akshara Kaginalkar)
Public‑private partnerships, clear market segmentation, and monetization pathways (e.g., insurance, enterprise services) are critical for scaling AI climate solutions (Sandeep Singhal)
NRF’s targeted funding programs accelerate AI solutions for weather, climate, and disaster risk (Shivkumar Kalayanaraman)
All three speakers emphasize that government-backed funding (NRF grants, RDI capital fund), collaborative consortium structures, and open IP licensing are needed to move AI prototypes to market-ready services, with startups advised to align with ministries and investors urged to support these PPP models [236-251][280-288][193-224].
POLICY CONTEXT (KNOWLEDGE BASE)
UNDP collaborations on AI-powered early warning systems and multiple forum discussions highlight the necessity of public-private partnerships and earmarked funding for scalable climate AI solutions [S43], [S45].
Emphasis on small, domain‑specific, agile models (or fine‑tuned foundation models with minimal data) and simple transferable ‘box’ models for climate applications.
Speakers: Amit Sheth, Praphul Chandra, Dev Niyogi
Focus on building small, agile, domain‑specific models rather than large foundational models (Amit Sheth)
Achieve domain‑specific performance by fine‑tuning large foundation models with very small datasets (Praphul Chandra)
Develop simple, scalable “box models” that can be transferred across cities for disaster and climate decision support (Dev Niyogi)
The three speakers converge on the need for lightweight, purpose-built AI solutions: Amit advocates original small models; Praphul explores fine-tuning large models with tiny data; Dev proposes modular box models that are easy to transfer, all aiming for rapid, context-aware climate services [26-31][104-108][318-322].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on “small AI” advocate for domain-specific, low-compute models and a balanced use of large foundation models for climate tasks [S46], [S47], [S49].
Similar Viewpoints
Both speakers advocate leveraging inexpensive visual or multispectral sensors combined with AI to produce very short‑term forecasts or early warnings, stressing cost‑effectiveness and the need for AI to interpret sensor outputs rather than replace physical infrastructure [76-84][70-75].
Speakers: Shivkumar Kalayanaraman, Manish Bhardwaj
Deploy multimodal AI (time‑series, vision, multispectral cameras) for ultra‑short‑term weather prediction from low‑cost sensors (Shivkumar Kalayanaraman)
Create hybrid AI‑sensor systems that deliver trusted, low‑cost early warnings to the public (Manish Bhardwaj)
Both highlight strategies to overcome data scarcity: Praphul through transfer learning across regions, and Karthik via benchmark datasets and super‑resolution that repurpose coarse data for fine‑scale modeling, indicating a common focus on data‑efficient model development [110-114][261-274].
Speakers: Praphul Chandra, Karthik Kashinath
Leverage transfer learning to share knowledge between data‑rich and data‑sparse regions, reducing data requirements (Praphul Chandra)
Develop benchmark datasets and super‑resolution techniques to achieve operational hyper‑local models (Karthik Kashinath)
Unexpected Consensus
Insurance as an early monetizable product for AI‑driven climate risk assessments.
Speakers: Audience, Sandeep Singhal
Climate‑risk AI outputs can be packaged into insurance products, providing a viable commercial avenue (Audience)
Insurance is a natural first monetizable product for AI‑driven climate risk assessments (Sandeep Singhal)
While most participants focused on technical and policy aspects, both an audience member and the venture-capitalist Sandeep independently identified insurance as the first marketable application of climate-AI, revealing an unanticipated convergence on a concrete commercial pathway [337-340][341-342].
Overall Assessment

The panel shows strong convergence on four pillars: (1) hybrid AI‑physics/sensor systems for hyper‑local forecasting, (2) the necessity of open, benchmarked data and collaborative consortia, (3) public‑private partnership and dedicated funding as the engine for translation, and (4) a shared preference for small, agile, or fine‑tuned models that are easy to deploy. These agreements span technical, institutional, and economic dimensions, indicating a cohesive national roadmap for AI‑enabled climate resilience.

High consensus – the majority of speakers align on the same strategic directions, suggesting that India’s AI‑climate agenda is likely to move forward with coordinated policy support, funding structures, and a focus on lightweight, data‑efficient models.

Differences
Different Viewpoints
Model development strategy – building small, agile, domain‑specific models from scratch versus fine‑tuning large foundation models with very small datasets
Speakers: Amit Sheth, Praphul Chandra
Focus on building small, agile, domain‑specific models rather than large foundational models (Amit Sheth)
Achieve domain‑specific performance by fine‑tuning large foundation models with very small datasets (Praphul Chandra)
Sheth argues that IRO will create original lightweight models and explicitly avoid large foundation models because of unknown training data and computational baggage [26-31]. Chandra, in contrast, asks how small a dataset can be to fine-tune large foundation models for climate applications, seeing this as a breakthrough that could serve many domains [104-108]. The two positions conflict on whether to rely on new small models or to adapt existing large models.
POLICY CONTEXT (KNOWLEDGE BASE)
Expert discussions note the trade-off between efficient small models and the capabilities of large foundation models, urging a balanced, context-driven approach [S46], [S47], [S49].
Approach to data scarcity – transfer learning from data‑rich regions versus building new small models locally
Speakers: Praphul Chandra, Amit Sheth
Leverage transfer learning to share knowledge between data‑rich and data‑sparse regions (Praphul Chandra)
Develop original small, agile models for each specific task without depending on large pre‑trained models (Amit Sheth)
Chandra proposes using transfer learning to apply models trained where data are abundant to data-sparse Indian contexts, emphasizing efficient reuse of knowledge [110-114]. Sheth prefers to create original, locally-tailored models from the ground up, avoiding reliance on existing foundation models [26-31]. This reflects a methodological disagreement on how to handle limited data.
POLICY CONTEXT (KNOWLEDGE BASE)
Research on climate extremes emphasizes transfer learning from data-rich to data-sparse regions, while regional recommendations favor locally built small models and shared protocols [S38], [S39].
Scope of digital‑twin development – decision‑specific, minimal “box” models versus end‑to‑end AI pipelines covering monitoring to user delivery
Speakers: Dev Niyogi, Akshara Kaginalkar
Build decision‑specific digital twins that are simple, scalable, and transferable, focusing on the decision‑to‑data framework (Dev Niyogi)
Integrate digital twins across the whole AI spectrum—from monitoring to processing to user‑facing applications (Akshara Kaginalkar)
Niyogi suggests creating lightweight “box models” and digital twins that are tied to a specific decision context, avoiding the need to predict every variable [318-322]. Kaginalkar envisions digital twins as a holistic layer that links data acquisition, modeling, and delivery to end users, covering the full AI pipeline [331-333]. The disagreement lies in the breadth and complexity of the twin architecture.
POLICY CONTEXT (KNOWLEDGE BASE)
Digital twin governance literature stresses the need to define scope and integration within broader systems, informing the debate between lightweight decision-specific twins and full-stack pipelines [S48].
Unexpected Differences
AI model strategy – small bespoke models vs fine‑tuning large foundation models
Speakers: Amit Sheth, Praphul Chandra
Focus on building small, agile, domain‑specific models rather than large foundational models (Amit Sheth)
Achieve domain‑specific performance by fine‑tuning large foundation models with very small datasets (Praphul Chandra)
Both speakers are senior AI experts, yet they propose opposite technical routes for climate AI: Sheth rejects large foundation models altogether, while Chandra sees them as the core asset to be adapted with minimal data. This contrast was not anticipated given the shared goal of rapid climate impact.
POLICY CONTEXT (KNOWLEDGE BASE)
Sustainable AI policy analyses call for a balance between small, specialized models and large foundational models to ensure efficiency and impact [S46], [S47], [S49].
Breadth of digital‑twin implementation – minimal decision‑specific twins vs full‑stack AI‑driven twins
Speakers: Dev Niyogi, Akshara Kaginalkar
Build decision‑specific digital twins that are simple, scalable, and transferable (Dev Niyogi)
Integrate digital twins across the whole AI pipeline from monitoring to end‑user services (Akshara Kaginalkar)
Niyogi’s emphasis on lightweight, purpose‑built twins contrasts with Kaginalkar’s vision of comprehensive, end‑to‑end twin ecosystems. The divergence in scope and complexity was not overtly discussed elsewhere, making it an unexpected point of contention.
POLICY CONTEXT (KNOWLEDGE BASE)
Governance frameworks for digital twins discuss choices between lightweight decision-specific twins and comprehensive AI-driven twins, highlighting policy implications of each approach [S48].
Overall Assessment

The panel largely converged on the need for hybrid AI‑physics solutions, public‑private collaboration, and open data to improve early‑warning and hyper‑local forecasting. The most pronounced disagreements centered on the technical route for model development (small bespoke models vs fine‑tuned large foundations) and the architectural scope of digital twins. These methodological splits reflect differing risk appetites and resource strategies rather than fundamental opposition to AI’s role in climate resilience.

Moderate – while core objectives (enhanced forecasting, disaster preparedness, and scalable deployment) are shared, the divergent views on model architecture and digital‑twin scope could slow consensus on research funding priorities and implementation roadmaps, requiring explicit coordination to align technical pathways.

Partial Agreements
All three stress that AI alone is insufficient; a hybrid approach that combines physical models or sensor networks with AI analytics is needed to produce reliable, fine‑grained early‑warning and forecasting systems [47-61][70-75][175-180].
Speakers: M. Ravichandran, Manish Bhardwaj, Shivkumar Kalayanaraman
Fuse physics‑based numerical models with AI to capture spatial and temporal dynamics for hyper‑local forecasts (M. Ravichandran)
Create hybrid AI‑sensor systems that deliver trusted, low‑cost early warnings (Manish Bhardwaj)
Use AI to increase granularity of early‑warning signals by fusing terrestrial, satellite, and sensor data (Shivkumar Kalayanaraman)
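A common concrete form of this hybridisation is residual correction: a lightweight learned component estimates the systematic local error of a physics-based forecast and adds it back. The sketch below is purely illustrative — synthetic data, and ordinary least squares standing in for the AI component — not a description of any system the panellists mentioned.

```python
import numpy as np

rng = np.random.default_rng(0)

def physics_forecast(t):
    """Stand-in for a numerical weather model: a smooth seasonal cycle."""
    return 25.0 + 5.0 * np.sin(2 * np.pi * t / 365.0)

def local_truth(t):
    """Synthetic 'truth': the physics model plus a systematic local bias."""
    return physics_forecast(t) + 2.0 + 0.5 * np.cos(2 * np.pi * t / 365.0)

t_train = np.arange(730)                        # two years of daily data
obs = local_truth(t_train) + rng.normal(0, 0.3, t_train.size)

# The learned component: least squares on simple features, trained to
# predict the residual (observation minus physics forecast).
X = np.column_stack([np.ones(t_train.size), np.cos(2 * np.pi * t_train / 365.0)])
coef, *_ = np.linalg.lstsq(X, obs - physics_forecast(t_train), rcond=None)

def hybrid_forecast(t):
    """Physics forecast plus the learned local correction."""
    Xt = np.column_stack([np.ones(t.size), np.cos(2 * np.pi * t / 365.0)])
    return physics_forecast(t) + Xt @ coef

t_test = np.arange(730, 1095)                   # held-out third year
err_physics = np.abs(physics_forecast(t_test) - local_truth(t_test)).mean()
err_hybrid = np.abs(hybrid_forecast(t_test) - local_truth(t_test)).mean()
print(err_physics, err_hybrid)  # hybrid error should be far smaller
```

In a real system the physics model would be a numerical weather prediction run and the correction a neural network over spatial and sensor features, but the division of labour is the same: physics supplies the dynamics, AI removes the hyper‑local bias.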
While the focus differs (PPP mechanisms vs technical benchmarks), all agree that coordinated structures—whether through funding programmes, open IP, or shared benchmark data—are required to move AI prototypes to scalable, operational services [236-251][280-288][261-274].
Speakers: Akshara Kaginalkar, Sandeep Singhal, Karthik Kashinath
Public‑private partnership models with open IP licensing and consortium‑based approaches are essential to translate AI research into operational climate and disaster services (Akshara Kaginalkar)
Public‑private partnerships, clear market segmentation and monetisation pathways (including insurance) are critical for scaling AI climate solutions (Sandeep Singhal)
Develop benchmark datasets and super‑resolution techniques to achieve operational hyper‑local models (Karthik Kashinath)
Takeaways
Key takeaways
The Indian Research Organisation (IRO) will focus on building small, agile, domain‑specific AI models rather than relying on large foundational models, targeting climate, health, and pharma verticals.
Effective weather and climate forecasting requires a hybrid approach that fuses physics‑based numerical models with AI to capture both spatial and temporal dynamics, especially for hyper‑local events.
Multimodal AI (time‑series, vision, multispectral sensors) and low‑cost sensor networks can enable ultra‑short‑term predictions and improve now‑casting.
Benchmark datasets, super‑resolution techniques, and transfer learning are critical to achieve operational, hyper‑local AI models across data‑rich and data‑sparse regions.
Early‑warning systems must be trusted, low‑cost, and integrated with existing sensor and satellite data; AI can enhance prediction of extreme events such as cloudbursts, flash floods, and landslides.
NRF’s AI for Science & Engineering program, the upcoming Leapfrog Demonstrators, and a recent hackathon provide mission‑mode funding and challenge‑driven pathways for AI‑climate solutions.
Public‑private partnerships, open data policies, and open IP licensing are essential to translate research into deployable products and scale startups.
Small‑data fine‑tuning of large foundation models and Jugaad‑style integration of human/social dimensions can make AI solutions more accessible and decision‑oriented.
AI‑driven hyper‑local solar generation forecasts can support grid load balancing, demand flexibility, and digital energy marketplaces (India Energy Stack).
Decision‑specific digital twins and simple “box models” can turn raw weather data into actionable insights for disaster management and everyday user decisions.
Climate‑risk insurance is identified as a natural first monetizable product for AI‑generated risk assessments.
Resolutions and action items
NRF will continue and expand the AI for Weather and Climate track within its AI for Science & Engineering program and launch the Leapfrog Demonstrators for Societal Innovation.
NRF announced an AI for Science & Engineering hackathon (partnering with IBM and IIT Delhi) to provide datasets and stimulate solutions.
IRO will develop original, small, agile AI models for extreme‑weather use cases, avoiding reliance on large foundational models.
Stakeholders agreed to open up weather and climate data to broader research communities to enable diverse AI approaches.
Create benchmark datasets and metrics for hyper‑local forecasting, modeled after the ECMWF ERA5 benchmark, to drive operational quality.
Promote transfer learning and small‑data fine‑tuning techniques to leverage knowledge from data‑rich regions for data‑sparse Indian locales.
Encourage public‑private consortia and hub‑spoke collaborations, with open IP licensing, to accelerate translation of AI models into products.
Integrate AI weather forecasts with the India Energy Stack to enable digital energy marketplaces and demand‑flexibility mechanisms.
Develop voice‑based consumer applications that translate forecasts into actionable resilience recommendations for end‑users.
Explore insurance‑linked monetization pathways for AI‑driven climate risk assessments.
Unresolved issues
Establishing robust validation and verification frameworks to build trust in AI‑augmented forecasts.
Defining concrete mechanisms for sustained open data sharing while protecting privacy and security.
Detailing business models and market segmentation for scaling AI climate solutions beyond pilot projects.
Operationalizing hyper‑local models at scale, including computational resource requirements and deployment pipelines.
Integrating AI outputs into insurance underwriting processes and determining regulatory implications.
Addressing multi‑hazard cascading events (e.g., cloudburst → landslide → flash flood) within AI prediction frameworks.
Clarifying the role of human/social dimensions (Jugaad) in model design and how to quantify them.
Finalizing IP and licensing terms for collaborative projects between academia, startups, and industry.
Suggested compromises
Adopt a hybrid modeling approach that combines physics‑based numerical models with AI to leverage the strengths of both.
Use large foundation models as a starting point but fine‑tune them with minimal domain‑specific data to reduce dependence on massive datasets.
Balance mission‑mode, impact‑driven funding with curiosity‑driven, broad‑based research to ensure both innovation and applicability.
Adopt a public‑private partnership model in which government provides data, validation, and policy support while the private sector supplies agility and capital.
Pair open data access with rigorous validation protocols to maintain trust while encouraging diverse AI development.
Thought Provoking Comments
IRO will focus on building very agile, small, specific models for hyper‑local extreme‑weather issues, rather than building on top of large foundational models that come with a lot of baggage.
Challenges the prevailing trend of using massive foundation models and proposes a fundamentally different, India‑centric research strategy that emphasizes domain‑specific, lightweight AI.
Set the agenda for the panel by framing the discussion around bespoke, small‑scale models; prompted other speakers to consider how such models could be integrated with existing physics‑based systems and opened the conversation to data efficiency and deployment challenges.
Speaker: Amit Sheth
We need to see the elephant plus the ant – i.e., combine spatial (physics‑based numerical) models with fine‑grained time‑series AI to predict high‑impact events like cloudbursts.
Uses a vivid metaphor to illustrate the necessity of hybridizing traditional weather models with AI, highlighting a gap in current forecasting capabilities.
Shifted the discussion toward hybrid modeling approaches; other participants (Manish Bhardwaj, Shivkumar) expanded on multimodal data fusion and early‑warning systems, deepening the technical focus.
Speaker: M. Ravichandran
Multimodal models can fuse insights from cameras, IR, low‑cost sensors, and low‑Earth‑orbit satellites, moving from data‑fusion (which is complex) to insight‑fusion for now‑casting and forecasting.
Introduces a concrete, technology‑driven pathway for real‑time weather sensing and forecasting, emphasizing the practical deployment of AI at scale.
Prompted the panel to discuss sensor networks, cost reductions, and the role of generative AI in operational forecasting; influenced later remarks on hyper‑local modeling and public‑private collaborations.
Speaker: Shivkumar Kalayanaraman
A simple voice framework/app that lets a user say ‘OK, what should I do next week to stay safe from climate impacts?’ – turning forecasts into actionable personal resilience advice.
Brings the consumer perspective into the conversation, highlighting the need to translate AI outputs into everyday decision‑making tools.
Shifted the tone toward end‑user engagement; led to discussions about personalization, market segmentation, and the monetization of climate AI services.
Speaker: Sandeep Singhal
Jugaad – AI can bring the human and societal dimensions into predictive models, bridging the gap between mathematically feasible equations and real‑world human behavior.
Introduces a culturally resonant concept to argue for socio‑technical integration, expanding the scope beyond pure technical accuracy.
Encouraged participants to consider social factors in model design; influenced later comments on decision‑specific digital twins and the ‘tragedy of the commons’ framing.
Speaker: Dev Niyogi
The breakthrough we need is small‑data fine‑tuning of large foundation models – how little data can we use to adapt a model for a specific use case?
Raises a critical research question about data efficiency, directly relevant to India’s data‑sparse regions and the feasibility of deploying AI solutions.
Spurred discussion on transfer learning and benchmark datasets (later echoed by Karthik Kashinath); highlighted a concrete research direction for the community.
Speaker: Praphul Chandra
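Chandra’s question can be made concrete with a toy version of head‑only fine‑tuning: freeze a pretrained feature extractor and fit only a small output layer on a handful of task samples. Everything below is synthetic and illustrative — the fixed bank of tanh units merely stands in for a real foundation model’s representations, and the target function is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def pretrained_features(x):
    """Frozen 'backbone': a fixed bank of tanh units with preset slopes
    (a stand-in for a large pretrained model's representations)."""
    slopes = np.linspace(0.25, 2.0, 8)
    return np.tanh(np.outer(x, slopes))

# Tiny labelled dataset for the new task: only 12 samples.
x_small = rng.uniform(-3, 3, 12)
y_small = np.tanh(1.2 * x_small)        # unknown target response

# "Fine-tuning" here = fitting only a small linear head by least squares;
# the backbone stays frozen, so only 8 parameters are learned.
H = pretrained_features(x_small)
head, *_ = np.linalg.lstsq(H, y_small, rcond=None)

# Held-out evaluation: the adapted model generalizes from very little data
# because the frozen features already encode the right function class.
x_test = np.linspace(-3, 3, 50)
mae = np.abs(pretrained_features(x_test) @ head - np.tanh(1.2 * x_test)).mean()
print(mae)
```

The research question Chandra poses is essentially how far this pattern scales: how few domain samples suffice when the frozen representations come from a genuine foundation model rather than a toy projection.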
Transfer learning across data‑rich and data‑sparse regions, leveraging the universal physics of weather while adapting to hyper‑local uniqueness, can dramatically accelerate AI adoption.
Offers a practical solution to the data disparity problem and connects it to proven AI techniques, providing a roadmap for scaling models nationwide.
Guided the conversation toward methodological strategies (benchmark datasets, super‑resolution) and reinforced the need for collaborative, cross‑regional efforts.
Speaker: Karthik Kashinath
ANRF’s Leapfrog Demonstrators for Societal Innovation and challenge‑mode funding aim to move from incremental research to high‑impact, operational solutions, with open IP licensing to accelerate industry‑academia collaboration.
Outlines concrete funding mechanisms and policy levers that can turn ideas into scalable products, addressing the earlier identified bottlenecks of data access and translation.
Provided a clear pathway for turning technical ideas into funded projects; prompted participants to discuss partnerships, IP strategies, and the role of government in de‑risking innovation.
Speaker: Shivkumar Kalayanaraman (ANRF)
Weather is the tragedy of the commons – everyone is affected but no one can pay for it. We must create decision‑specific digital twins that turn weather data into monetizable, actionable products.
Reframes the entire problem from raw forecasting to decision support and economic sustainability, linking technical, societal, and business dimensions.
Served as a concluding turning point, steering the discussion toward productization, market models, and the need for a decision‑centric approach; resonated with earlier consumer‑focus and funding comments.
Speaker: Dev Niyogi
AI‑enabled hyper‑local solar generation forecasts can be combined with the India Energy Stack to enable demand flexibility and grid trading, turning weather predictions into direct energy market value.
Connects climate AI directly to a critical economic sector (energy), illustrating a tangible use‑case where AI adds measurable value.
Bridged the climate‑AI discussion with the broader economic transition narrative; reinforced the earlier point about AI’s role in renewable integration and attracted interest from investors and policymakers.
Speaker: Praphul Chandra
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved the conversation from a high‑level vision of AI for climate to concrete, actionable pathways. Amit Sheth’s emphasis on small, domain‑specific models set the strategic tone, which was deepened by Ravichandran’s hybrid‑model metaphor and Shivkumar’s multimodal sensor vision. Consumer‑centric ideas from Singhal and societal integration from Niyogi broadened the scope to end‑user impact. Technical breakthroughs around data efficiency (Chandra) and transfer learning (Kashinath) offered feasible research directions, while the ANRF funding framework provided the necessary policy and financial scaffolding. Finally, the framing of weather as a tragedy of the commons and the push for decision‑specific digital twins unified the technical, social, and economic threads, steering the panel toward a roadmap that links AI research, public‑private partnerships, and marketable solutions.

Follow-up Questions
Develop benchmark datasets and metrics for hyperlocal weather AI models to drive operational quality
Benchmark datasets have historically accelerated AI progress (e.g., ImageNet). Creating similar standards for hyperlocal scales will enable consistent evaluation and rapid improvement of models.
Speaker: Karthik Kashinath
Investigate small‑data fine‑tuning techniques for large foundation models in weather and climate applications
If large models can be effectively adapted with minimal domain‑specific data, AI solutions become feasible in data‑sparse regions, expanding impact across India.
Speaker: Praphul Chandra
Research efficient transfer learning methods to adapt models from data‑rich regions to data‑sparse regions while preserving local specificity
Weather physics is universal, but hyperlocal nuances matter; transfer learning can leverage global knowledge and reduce the need for extensive local data collection.
Speaker: Karthik Kashinath
Establish robust validation and verification frameworks to build trust in AI‑enabled weather forecasts
Decision makers require confidence in AI predictions; systematic V&V will address concerns about model reliability and facilitate adoption.
Speaker: M. Ravichandran
Create open data platforms that provide broad access to historical and real‑time weather datasets for the research community
Open data enables diverse teams to experiment, fostering innovation and preventing siloed efforts in AI for weather.
Speaker: M. Ravichandran
Develop multimodal insight‑level fusion approaches that combine satellite, sensor, and AI outputs without raw data overload
Insight‑level fusion simplifies integration, reduces computational complexity, and can improve forecast accuracy across modalities.
Speaker: Shivkumar Kalayanaraman
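A hedged sketch of what insight‑level fusion might look like: each modality‑specific model emits only a normalised risk score (its “insight”), and a lightweight combiner fuses those scores into one alert, instead of merging raw camera, gauge, and satellite data. All model functions, thresholds, and weights below are invented placeholders, not the approach described in the session.

```python
def camera_model(frame_darkness):
    """Toy nowcasting score (0..1) from camera imagery."""
    return min(1.0, frame_darkness / 0.8)

def gauge_model(rain_mm_per_hr):
    """Toy score (0..1) from a low-cost rain gauge."""
    return min(1.0, rain_mm_per_hr / 50.0)

def satellite_model(cloud_top_temp_k):
    """Toy score (0..1) from cloud-top temperature (colder => riskier)."""
    return max(0.0, min(1.0, (230.0 - cloud_top_temp_k) / 40.0 + 0.5))

def fuse_insights(scores, weights, threshold=0.6):
    """Fuse per-modality insights (not raw data) by weighted averaging."""
    risk = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    return risk, risk > threshold

scores = [camera_model(0.7), gauge_model(40.0), satellite_model(215.0)]
risk, alert = fuse_insights(scores, weights=[1.0, 2.0, 1.5])
print(round(risk, 2), alert)  # → 0.84 True
```

The point of the pattern is that the combiner never sees the raw modalities, which keeps integration simple and computationally light — the simplification Kalayanaraman contrasts with full data fusion.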
Design hybrid AI‑physics early warning systems that integrate sensor networks, satellite data, and AI predictions for extreme events
Combining physical models with AI can enhance the timeliness and reliability of warnings, especially for cascading hazards like cloudbursts.
Speaker: Manish Bhardwaj
Create voice‑based personal resilience assistants that translate climate forecasts into actionable recommendations for individuals (e.g., farmers, citizens)
A voice interface can bridge the gap between technical forecasts and everyday decision‑making, increasing user adoption and safety.
Speaker: Sandeep Singhal
Explore AI‑driven climate‑risk insurance products and pricing models to address affordability and coverage gaps
Insurance is a critical monetization pathway for climate AI; developing accurate risk models can make coverage sustainable and protect vulnerable populations.
Speaker: Audience (unspecified) and Sandeep Singhal
Build decision‑specific digital twins that convert weather data into actionable insights for various stakeholders
Digital twins focused on decisions (rather than raw variables) can provide tailored guidance, making AI weather outputs directly useful for planning and response.
Speaker: Dev Niyogi
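One way to read “decision‑specific” is that the twin exposes exactly one decision interface rather than simulating every physical variable. The sketch below is a hypothetical illustration of such a “box model”; the ward name, thresholds, and decision rules are all invented.

```python
from dataclasses import dataclass

@dataclass
class WardTwin:
    """Minimal 'box model' twin of one city ward, tied to one decision."""
    name: str
    drainage_capacity_mm: float  # rainfall the ward can absorb per day

    def decide(self, forecast_rain_mm, soil_saturation):
        """Map a rainfall forecast directly onto the supported decision."""
        effective = forecast_rain_mm * (1.0 + soil_saturation)
        if effective > self.drainage_capacity_mm:
            return "close schools, pre-position pumps"
        if effective > 0.7 * self.drainage_capacity_mm:
            return "issue advisory"
        return "no action"

ward = WardTwin("Ward 12", drainage_capacity_mm=80.0)
print(ward.decide(forecast_rain_mm=50.0, soil_saturation=0.9))
print(ward.decide(forecast_rain_mm=20.0, soil_saturation=0.1))
```

Because the twin is scoped to one decision, it needs only the variables that decision depends on — the simplicity and transferability Niyogi argues for, in contrast to full‑pipeline twins.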
Integrate AI with hyperlocal renewable energy forecasting (e.g., rooftop solar) and grid demand‑flexibility using digital public infrastructure
Accurate local generation forecasts are essential for grid stability and efficient energy trading as India scales its renewable portfolio.
Speaker: Praphul Chandra
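As a toy illustration of the forecast‑to‑flexibility chain (all numbers and the clear‑sky model are illustrative assumptions, unrelated to the actual India Energy Stack interfaces): scale an idealised clear‑sky output by a predicted cloud factor, then schedule a deferrable load into the hour with the highest forecast generation.

```python
import math

def clear_sky_kw(hour, capacity_kw=5.0):
    """Idealised rooftop output: zero at night, half-sine over daylight."""
    if not 6 <= hour <= 18:
        return 0.0
    return capacity_kw * math.sin(math.pi * (hour - 6) / 12)

def forecast_kw(hour, cloud_factor):
    """Hyper-local forecast = clear-sky output reduced by cloud cover."""
    return clear_sky_kw(hour) * (1.0 - cloud_factor)

# Predicted cloud cover for tomorrow: overcast around midday, clear otherwise.
cloud = {h: 0.8 if 12 <= h <= 14 else 0.2 for h in range(24)}
forecast = {h: forecast_kw(h, cloud[h]) for h in range(24)}

# Demand flexibility: schedule the deferrable load into the sunniest hour,
# which here is *not* noon, because midday is forecast to be overcast.
best_hour = max(forecast, key=forecast.get)
print(best_hour, round(forecast[best_hour], 2))  # → 11 3.86
```

The same signal, aggregated across many rooftops, is what would feed grid load balancing and digital energy marketplaces in the scenario Chandra describes.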
Establish effective public‑private partnership frameworks, open IP licensing, and translation centers to move AI climate solutions from early research (TRL 1‑2) to operational deployment (TRL 5‑6)
Coordinated collaboration and clear IP policies accelerate commercialization and ensure that innovations reach end‑users.
Speaker: Shivkumar Kalayanaraman
Coordinate disparate climate and sustainability initiatives across agencies and stakeholders to avoid duplication and maximize national impact
A unified strategy is needed to align research, funding, and deployment efforts, ensuring resources are used efficiently.
Speaker: Akshara Kaginalkar
Develop AI models capable of predicting cloudburst events and associated cascading hazards such as landslides and flash floods
Current models cannot predict cloudbursts, a critical gap for disaster preparedness; targeted AI research could fill this void.
Speaker: M. Ravichandran
Apply AI for the discovery of new sustainable materials and chemicals to accelerate climate mitigation technologies
AI‑driven material discovery can speed up the development of greener alternatives, supporting broader sustainability goals.
Speaker: Shivkumar Kalayanaraman

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.