Responsible AI in India Leadership Ethics & Global Impact

Responsible AI in India Leadership Ethics & Global Impact

Session at a glanceSummary, keypoints, and speakers overview

Summary

The session examined how Indian corporations can move responsible AI from abstract principles to provable practice, emphasizing that trust, transparency and accountability are now foundational ( [1][34] ). Andy Parsons argued that responsible AI must become an operational discipline rather than a mere compliance slide, noting a shift toward “provable practice” ( [33][34] ). He warned of a trust crisis caused by the massive scale of generative AI and said enterprises need to prove how content was created, by which models and tools ( [38-44] ). Parsons introduced the Content Authenticity Initiative and the C2PA open standard, which embeds provenance metadata directly into media files and is backed by a cross-industry coalition including Adobe, Microsoft, BBC and others ( [55-62][66-68] ). He stressed that open, interoperable, non-proprietary standards must be implemented in working code, a point especially relevant for India’s huge digital population ( [70-74] ).


Prativa Mohapatra explained Adobe’s “ART” (accountability, responsibility, transparency) philosophy, describing how provenance checks are baked into products such as Firefly and Acrobat Assistant so that inputs are licensed and outputs can be audited ( [196-204][208-220][224-228] ). She added that coordinated legal, compliance and ethical teams are essential, and that neglecting any pillar threatens future readiness ( [235-239] ). Amol Deshpande highlighted that responsible AI must be orchestrated across all five AI layers, involve people, processes and guardrails, and cannot be a one-size-fits-all solution; instead organisations should offer a “bring-your-own-AI” framework ( [162-170][176-181][187-190] ). Vishal Anand Kanwati described NPCI’s transparent transaction-decline explanations via a language model and affirmed that governance principles such as transparency are non-negotiable for trust in payment systems ( [287-293][295-298] ). Satya Ramaswamy shared Air India’s generative-AI virtual assistant that handles millions of queries, with safety “knobs” and continuous human-in-the-loop monitoring to satisfy global aviation regulations ( [258-262][261-264] ). He argued that complying with diverse international regulations does not hinder innovation, citing the airline’s ability to launch the industry’s first AI assistant while remaining within regulatory bounds ( [341-345][350-354] ).


The panel debated whether industry-led governance can replace regulation; Amol and Vishal stressed the need for standards, awareness and industry partnerships, while both agreed that regulatory frameworks are essential to prevent AI misuse at scale ( [322-329][360-366] ). Sarika Guliani concluded that responsible AI is a commitment beyond compliance, requiring shared human values, cross-sector collaboration and alignment with the “people, planet, progress” agenda, and announced that FICCI will continue to drive the dialogue into action ( [370-376][382-383] ). The discussion underscored that responsible AI must be embedded in products, governed by open standards, and supported by both industry initiatives and regulatory oversight to realize its potential in India’s digital future.


Keypoints


Major discussion points


From principles to provable practice – the need for concrete standards and transparency


Andy emphasized that responsible AI must move beyond slide-deck principles to “provable practice” and that “you need standards, not just principles” [34-35][109-112]. He presented the C2PA open standard as a concrete example of an interoperable, non-proprietary framework that can embed provenance information directly into content [62-70].


Content provenance (C2PA) as a concrete case study for responsible AI


The Coalition for Content Provenance and Authenticity (C2PA) provides “content credentials” that travel with media, enabling users to see the full genealogy of an asset - what model created it, which tools were used, etc. [55-66]. The initiative rests on three pillars – transparency, accountability and inclusivity – likened to “nutrition labels” for digital content [75-86].


Enterprise-level implementation challenges and the “ART” governance model


Amol described responsible AI as an “orchestration of all layers” – technology, people, process and governance – and warned against a one-size-fits-all approach, stressing the need for guardrails and scalable templates [162-166][170-181]. Prativa echoed this with Adobe’s “ART” (Accountability, Responsibility, Transparency) framework, citing product-level examples such as Firefly’s built-in provenance and Acrobat Assistant’s safe-by-design workflow [196-210][221-228].


Regulation as both catalyst and requirement, balanced with industry-led standards


Andy framed regulation (EU AI Act, US state laws, India’s IT rules) as a “catalyst for good practices” [107-108], while Vishal highlighted the necessity of transparency in transaction decisions and referenced the RBI’s responsible-AI guidelines [286-293]. Satya explained how Air India complies with multiple global aviation regulators while still innovating with a generative-AI virtual assistant [341-354].


Ecosystem collaboration to bridge large enterprises and MSMEs


The panel repeatedly stressed that industry bodies (FICCI, C2PA, etc.) must disseminate frameworks so smaller players can adopt them. Amol called for “awareness → action → demonstration” and for industry partnerships to cascade guardrails downstream [322-336]. Prativa warned that without such shared standards, a stark divide will emerge between “big guys” and “MSMEs” [291-300].


Overall purpose / goal of the discussion


The session aimed to move the conversation on responsible AI in India from abstract principles to actionable, enterprise-level practices. By showcasing Adobe’s C2PA model, sharing governance approaches from Air India, RPG Group, and NPCI, and debating the interplay of regulation and industry standards, the participants sought to equip Indian corporates with concrete tools, frameworks, and collaborative pathways for deploying trustworthy, inclusive AI at scale.


Overall tone and its evolution


– The opening remarks were formal and aspirational, stressing the urgency of responsible AI [4-6].


– Andy’s presentation adopted an optimistic, solution-focused tone, highlighting a successful open-standard initiative [58-66].


– The panel discussion shifted to a pragmatic and candid tone, acknowledging real-world challenges (uneven adoption, cost, governance complexity) [90-101][162-181].


– As the conversation progressed, the tone became collaborative and constructive, with participants emphasizing shared responsibility, ecosystem support, and the need for balanced regulation [322-336][341-354].


– The closing remarks returned to an hopeful, call-to-action tone, urging continued dialogue and industry commitment [370-384].


Overall, the tone remained constructive throughout, moving from high-level inspiration to grounded, actionable discussion and ending with a collective commitment to advance responsible AI in India.


Speakers

Announcer – Event announcer/moderator


Vishal Anand Kanwati – Chief Technology Officer, National Payments Corporation of India (NPCI) – expertise in payments infrastructure and AI-driven fraud detection [S4][S5]


Sarika Guliani – Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI – expertise in AI policy and industry collaboration [S6][S7]


Dr. Satya Ramaswamy – Chief Digital and Technology Officer, Air India Limited – expertise in aviation AI applications and safety-critical systems [S8][S9][S10]


Shantari Malaya – Editor, Economic Times – expertise in technology journalism and AI policy coverage [S11][S12]


Prativa Mohapatra – Vice President and Managing Director, Adobe India – expertise in responsible AI product development and content authenticity [S13][S14]


Andy Parsons – Global Head for Content Authenticity, Adobe (runs the Content Authenticity Initiative) – expertise in content provenance and AI transparency [S15][S16]


Amol Deshpande – Chief Digital Officer and Head of Innovation, RPG Group – expertise in enterprise AI strategy and governance [S18][S19]


Additional speakers:


– None


Full session reportComprehensive analysis and detailed insights

Opening & Context – Adobe, in partnership with FICCI, opened the session on “Responsible AI from Principles to Practice in Corporate India” [1-2]. The moderator emphasized that India’s current digital moment demands not only rapid AI adoption but responsible deployment, with trust, transparency and accountability described as “foundational” rather than optional [4-6].


Andy Parsons – From Principles to Proven Practice


Parsons framed the central challenge: responsible AI must move from a slide-deck concept to an auditable discipline [33-35]. He warned that 2026 will be the year responsible AI becomes both a duty and an innovation opportunity [21-22] and that organisations will soon be asked not whether they are responsible, but whether they can prove it [31-32]. He highlighted the need to consider implementation cost and day-to-day operational overhead when adopting responsible-AI practices [384-386].


The regulatory backdrop he outlined included the EU AI Act, California law, and India’s new IT rules on Self-Generated-Content (SGI) [387-389]. He positioned regulation as a catalyst for good practice rather than a barrier [107-108].


Parsons described the trust crisis created by the massive scale of generative AI, noting that enterprises now produce or consume AI-generated content at “extraordinary” volumes [44-45] and that the crisis is “real … happening every day to our children” [390-392]. In India’s “world’s largest digital population” [47-50], synthetic media and misinformation are operational risks for businesses [51-52]. Without the ability to demonstrate what was made, how, and by which models, companies cannot meet corporate responsibility obligations [53-55].


To illustrate a concrete solution, Parsons introduced the Coalition for Content Provenance and Authenticity (C2PA). This cross-industry body-including Adobe, Microsoft, BBC, Sony, Qualcomm and others-has created an open, free, non-proprietary standard that embeds “content credentials” directly into media files [396-398]. The C2PA badge is already visible on LinkedIn posts, signalling provenance to viewers [393-395]. Its three pillars-transparency, accountability and inclusivity-are likened to “nutrition labels” for digital content, providing provenance information such as the generating model, tools used and camera metadata [75-86][70-74].


Parsons acknowledged practical challenges: many social-media platforms strip metadata, undermining provenance [90-98]; consumer awareness of the C2PA symbol remains low [92-95]; and the business case for provenance is challenging because it does not directly generate revenue [100-101].


Panel Introduction – Shantari Malaya – The moderator introduced the panelists (Andy Parsons, Amol Deshpande, Prativa Mohapatra, Satya Ramaswamy, Vishal Anand Kanwati).


Amol Deshpande – Orchestrating Responsible AI


Deshpande argued that responsible AI must be orchestrated across the five layers of the AI lifecycle as understood by the panel and cannot be reduced to a single checklist [162-166]. He stressed the importance of people, processes and guardrails, describing a “bring-your-own-AI” model where each function can adopt suitable templates while the enterprise provides common guardrails [176-183][187-190]. He warned against a “one-size-fits-all” solution, insisting that scalable, sector-specific templates are needed for enterprises ranging from manufacturing to services [180-183][186-188].


Prativa Mohapatra – Adobe’s ART Framework & Product Embedding


Mohapatra explained Adobe’s internal ART (Accountability, Responsibility, Transparency) governance model [196-198]. She said the first pillar of Adobe’s AI governance is “ART”: accountability, responsibility and transparency. Every new Adobe product follows a rigorous, multi-step methodology that embeds provenance at the core. For example, Firefly, Adobe’s generative-AI tool, automatically attaches a “nutrition-label” style provenance tag to every output, guaranteeing that inputs are licensed and that the resulting content can be audited for compliance [208-212][214-220]. Similarly, the Acrobat Assistant inherits the trusted PDF workflow, allowing users to trace the origin of any generated document and ensuring that high-stakes outputs are traceable and legally sound [224-228]. She emphasized that legal and compliance teams must be integrated into AI governance, otherwise an organisation may fall short of future regulatory and risk requirements [235-239].


Satya Ramaswamy – Air India’s Generative-AI Virtual Assistant


Ramaswamy shared Air India’s experience with a generative-AI virtual assistant launched in May 2023, which has handled over 13.5 million queries with a 97 % autonomous resolution rate [258-262]. The system employs “safety knobs” that can be dialled to balance user convenience against the risk of inappropriate responses; AI monitors its own performance, and customers are prompted to rate the answer’s appropriateness [260-262]. Satya explained that Air India uses generative-AI models to monitor the performance of its own virtual assistant [263-264]. The airline works with partners such as Adobe to obtain indemnity against failures [263-264] and complies with multiple international aviation regulators (EU, US FAA, Indian DGCA) without letting compliance constrain Indian innovation [341-345][350-354].


Vishal Anand Kanwati – NPCI’s Transparent Fraud-Detection


Kanwati illustrated how transparency can be operationalised in payments. NPCI has built a small language model that explains, in real time, why a transaction was declined, giving consumers clear, understandable reasons for fraud-related decisions [287-291]. He linked this practice to the RBI’s responsible-AI guidelines, stating that “the principles have to be adopted – there is absolutely no choice for us” [295-298]. For him, such transparency is essential to maintain trust in the nation’s digital payments ecosystem [286-293].


Discussion on MSMEs & Ecosystem – Both Amol and Prativa stressed that the first step for MSMEs is awareness of responsibility, followed by actionable frameworks disseminated through bodies such as FICCI [322-326][328-336]. Amol warned that large enterprises must create reusable compliance frameworks because smaller firms lack dedicated legal or AI-ethics teams [304-307]; Prativa echoed that without shared standards a stark divide will emerge between “big guys” and “MSMEs” [291-300]. The panel agreed that industry consortia should cascade templates and best-practice guidance to lower-resource organisations [328-336][316-321].


Global vs Indian Regulatory Alignment – Satya noted that complying with EU, US and Indian DGCA regulations does not stifle Indian innovation [341-345]. Vishal argued that mandatory safeguards are essential to prevent AI from “going berserk” in critical financial systems [322-327][360-366].


Regulation vs Self-Governance – A broad consensus emerged that regulation is inevitable and can act as a catalyst for good practice, but “principles alone are insufficient” – concrete, interoperable standards are required [107-108][109-112][328-336][360-366]. Tension remained between Andy’s advocacy for a universal open standard (C2PA) [62-66][70-71] and Amol’s view that industry-specific templates are necessary [180-183][328-336].


Closing – Sarika Guliani – Guliani framed responsible AI as a value-driven commitment that goes beyond a compliance checkbox, linking it to the broader “people, planet, progress” agenda [370-376]. She thanked the panelists and announced that FICCI will continue to facilitate dialogue and drive collaborative actions to translate the discussed principles into concrete industry initiatives [382-383].


Key take-aways


– Shift from abstract AI principles to provable, operational practice across people, process, technology and governance [31-35][162-170].


– Importance of open, interoperable, non-proprietary standards such as C2PA content credentials for building trust in AI-generated media [75-86][70-74][396-398][393-395].


– Adobe’s ART framework shows how accountability, responsibility and transparency can be baked into product lifecycles (Firefly, Acrobat Assistant) [196-210][221-228].


– Continuous human-in-the-loop monitoring and adjustable safety guardrails are critical for high-risk deployments (Air India, NPCI) [260-264][287-291].


– Regulation is viewed as a catalyst, not a constraint, and must be complemented by industry standards to avoid fragmented compliance [107-108][328-336][360-366].


– Ongoing challenges include metadata preservation, consumer awareness of provenance symbols, and the resource gap for MSMEs[90-98][304-307].


– Sector-specific implementations provide practical road-maps for responsible AI at scale.


Unresolved issues – Raising widespread consumer awareness of provenance symbols; providing affordable, reusable compliance toolkits for MSMEs; balancing “light-touch” regulation with mandatory safeguards; and designing detailed human-in-the-loop processes for safety-critical AI systems. The panel suggested a combined approach: baseline regulatory safeguards, open-standard adoption (e.g., C2PA), and industry-led dissemination of sector-specific templates to ensure both interoperability and flexibility [328-336][360-366][322-329].


Thought-provoking remarks – Andy’s 2026 prediction; his challenge to prove responsible AI; the description of C2PA credentials as an open, free, cross-industry standard; Amol’s “one size doesn’t fit all” reminder; Prativa’s “nutrition-label” analogy; Satya’s use of generative AI to monitor its own assistant; Vishal’s language model that explains transaction declines; and the consensus that regulation can be a catalyst rather than a hindrance [21-22][31-32][58-59][62-66][70-71][180-183][186-188][78-82][260-262][287-291][107-108][341-345].


The panel left with a shared commitment to embed open standards, sector-specific guardrails, and regulatory compliance into AI products, ensuring that responsible AI becomes a practical, measurable capability across India’s corporate ecosystem.


Session transcriptComplete transcript of the session
Announcer

Welcome to this session titled, Responsible AI from Principles to Practice in Corporate India, presented by Adobe in association with FICCI. India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and productivity. But the real differentiator, is it about how quickly do we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite a guest. Andy Parsons, our Global Head for Content Authenticity at Adobe.

Andy, over to you.

Andy Parsons

Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.

I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for innovation. I’m going to start with a simple observation. I’m going to start with a simple observation. I’m going to start with a simple observation. I’m going to start with a simple observation. of you in this room. This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.

And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week and certainly will continue to change this year, from should we be responsible with AI? I think that debate is well settled at this point, but can your systems actually prove that you have been responsible with AI, and how do you go about doing that? And for all of us, what does it cost in terms of implementation and day -to -day usage? So we want to position responsibility, AI today as a leadership and operating must and discipline rather than a regulatory obligation, although in 2026, I think it will be both.

So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across a cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Pratibha in a moment.

But it’s made the trust problem absolutely impossible to address. Ignore. So I think every enterprise in this room now produces or consumes AI. generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that.

Hundreds of millions of people consuming digital content every day. In this environment, synthetic content and AI -generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to provide some leadership. I’m very privileged to provide some leadership in the C2PA, which hopefully many of you have heard of. have heard about this week, which is the Coalition for Content Provenance and Authenticity.

And our sort of piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing totally free. So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, digital experience products, but had to be baked into the tools kind of at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others.

And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have the C2PA content credentials. I have encountered a content credential, and it will provide transparent context about a piece of media, whether it’s a video, audio. or image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross -industry coalition that includes all the companies I mentioned, meta camera manufacturers like Sony, Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others.

And the goal is for an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company. It should be standards -based. It should not be proprietary, but available to everyone. And this shared philosophy, I think, is especially important here in India. And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website. So what are the principles that the Content Authenticity Initiative and the CTPA reflect? Transparency, provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used.

Simple ideas like knowing that a photograph is actually a photograph and not generated. These are simple ideas. I’d say we’re well overdue having this ability, but we need it now more than ever. Accountability. You can trace those AI models, understand what was used, how it was used, and thereby understand the prominence if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children. No one’s going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.

Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India. We can apply the same kind of provenance at the same zero cost. as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI transparency and responsible AI will present you, and I’m sure you’ll hear more about that with our esteemed panel. But adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here. We’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it.

Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are telephones, there are phones, cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I’ve often said that doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money. I’m sure that you will see it. I’m sure that you will see it. I’m sure that you will see it. But it is critically important. And now we’re seeing that change as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.

What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. Responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference that we’re here to talk about today, the difference between principle and practice. I think you need cross -industry infrastructure. Techniques you use for responsible AI should be interoperable, open, and standardized. And I think you have a long track record here in India of mobilizing things like UPI payment infrastructure that not a single bank could do or a single government agency, but required massive scale cooperation for openness, standards, and most importantly, interoperability.

And last, enterprises. Like many of yours in this room, I’m sure you’ve all heard the phrase, that are willing and excited to go first, that really look at transparency and AI responsibility as an opportunity, not a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again, and in 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things, and that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries because we have representatives from all of them on our excellent panel today.

So we’re fortunate to have leaders from. Air India, PCI, RPG Group, and Adobe. each of whom is navigating and translating the sort of remarks that I’ve made in different ways for different outcomes and providing, I would say, exemplary ways to find their pathway through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantari Malaya, editor at the Economic Times, who covers these infrastructure and societal sort of breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me. And let’s get on to our panel. Shantari.

Shantari Malaya

Thank you so much, Andy. That was fantastic. It set the context very rightly for the next discussion coming up. So a warm welcome once again from my end. My name is Shantari Malaya. I’m editor at Economic Times. Welcoming you all to the panel. Right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI. very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI, industry, infrastructure, policy perspectives, it really becomes important to know what some premium leaders in the country are thinking about this. So this dialogue will examine some parts of responsible AI and how it’s really going to shape, reshape enterprise strategy.

So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanwati, Chief Technology Officer, National Payments Corporation of India, NPCI. We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe at India. This is a fantastic lineup and we shall get some very sharp insights from our panelists over the next half hour or so. So at the very outset, if you really see building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether it’s fairness, accountability, transparency, privacy and inclusivity, is really going to actually realistically be translated into enterprise strategy frameworks and how we are going to go about it.

Right. So Amol, let me call you into the discussion. Warm welcome. I’m keeping a tight watch on the timer right behind us. As we know, this is a summit of scale and we really need to help the organizers to clock good time. So Amol, very quickly, as I invite you into this discussion, you represent enterprises at, you know, while you are participating. As part of the RPG group, you also represent enterprises that are deploying AI at scale. and many more that are just finding their footing as well. So there’s a huge spectrum and diversity in the kind of organizations that you are representing. Two things for you very quickly. One is in large multiple, you know, businesses such as the RPG group, how are you really preventing responsible AI from really becoming something like a just mere centralized compliance exercise or something that’s on the other flip side becoming a fragmented business unit wise, you know, checklist.

So there are two risks that can happen, right? In a group, in a conglomerate, large scale or decentralized. So how are you really looking at the balance here? And how do you really see your role in an industry body as well? So all yours.

Amol Deshpande

Thank you, Shantiri. I’m very happy to be here. Thank you for having me with the, you know, esteemed panelists here. You ask a very pertinent question. You know, it’s to be or not to be, kind of a scenario when it comes to AI, but that not to be is not really a choice. and it did mention about the responsible AI and I would take a little stab at peeling and looking at where the responsible AI comes from when it comes to industries. It comes across all those five layers of the AI when we are looking at it. When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using for, the responsibility needs to be there at every layer.

It’s not one or the other. It has to be an orchestration of all the things. So far, AI in its very nascent forms had been a thing of center of excellences, trying use cases and seeing what it is happening, but now it has come to a scale. And when it comes to enterprises and manufacturing enterprises, which we have significantly higher share in terms of as a consumer of AI technologies, there is a very clear -cut view on how it is to be done. So you need to provide the playground for the enterprise. It has to operate function with agility. The other part is about people. The people is a very, very important stakeholder in the whole thing.

We are moving from generative AI to AI ML, more complex scenarios and agentic AI. So people is a very, very important aspect of it. The choice still remains with human sense. The awareness is important and enterprises like us spend a significant amount of time and effort in building that skill sets amongst the value chain of all the people who are doing it. Last but not least is the process and governance part which comes with it. It’s more of a guiding principles which need to be given so that it gives an opportunity. It’s more about, you know, if one can say it will be more of a bring your own AI kind of a scenario in every function.

You cannot provide one solution. One size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment with protected with guardrails is a key thing for us. Orchestration and getting a scale. That’s something which is there. But those templates are being exercised, practiced within the enterprises, practice it in a very diverse group like RPC at ourselves. And then that can be deployed at multiple levels.

Shantari Malaya

Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me quickly bring in Pratibha here. Welcome, Pratibha. So Adobe is consistently really positioned responsible as a product philosophy and a trust commitment. You may just have to switch that on. So Adobe has constantly spoken about responsible AI as a very fairly large commitment. So how do you really look at all these principles manifesting or panning out in terms of operationalizing it among all your product teams? And sending it out. as a strong positioning internally and at the same time as someone who’s led a lot of industry conversations, where do you really see enterprises struggling to get these things right?

So all yours here.

Prativa Mohapatra

Okay, thank you. So I think Andy set the context and since we are here not to learn about the principles but the practices, I think everybody should go back with certain practices. So the first practice of AI governance which we practice is art, which is accountability, responsibility and transparency. So if every person goes back to their organizations and talks about art, which is our philosophy, that’s practicing philosophy number one. And we actually have been doing this for our own products for a while now. And of course we are in the business of content for a very, very long time and now the same content is becoming available. It’s becoming the currency which everybody’s debating. So our principles have been there for a while, but how it is actualized.

And let me say how it’s translated into our products. And by the way, it’s in our products. It’s in our methodologies. Every new product that we have goes through a very strong, secure methodology with hundreds of steps inside it. So there’s principles embedded into how we create stuff. But a couple of examples. Firefly, which is our Gen AI tool, actually embeds what Andy said, those content traditions. Nutritions. It is anything that you have being generated out of this product will have that nutrition level. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law. You will not be getting into any liability issues. Because how you do it is by feeding.

Because AI is all about the input and output. So the input has to be something which will not land you in the trouble. You cannot take somebody else’s data. So here it is everything licensed. So it goes into the models, and what you create, the output which comes, then you have to test that output. With that output, will we be accountable? Will we be responsible in showing the transparency of how this was created? So I think that loop has to be created in using any AI. Firefly is an example. Let me talk about Acrobat, which everybody has. I’m sure 100 % of you have PDF files on your phones or on your machines. So Acrobat has this new feature called Acrobat Assistant.

It is agentic, but we have so many chatbots in the market. But when you come to an assistant like Acrobat Assistant, it is following the same principles that PDF was used to be created. So everybody is confident when you’re using PDF. So today, you would have read in the papers recently, the Supreme Court was very worried. that there were certain lawyers who had the petitions which had reference to cases which do not exist or it had certain laws stated which are fictitious. So imagine somebody’s created certain content using some sources which were not authentic. Now, if you use acrobat kind of products for that, you feed the data or you feed files from your own machine.

So you’re confident that what comes out of it, you can go back. So wherever there is this usage of high -stake output, enterprise -grade, you have to look at this input -output process and follow the philosophies within it. And I think for every enterprise who is doing that today, they really have to… Already Amal talked about the people process technology. I’m sure every organization today has a legal team, has a compliance team. But these teams have to re -opt to talk about AI compliance. Enterprises… they do business strategy, they have ethical strategies and then they have regulatory compliance. All the three anything that you do in AI ensure that you tick all the three. If you miss any one you might not be ready for the future.

So that’s how I see it.

Shantari Malaya

Absolutely. So I guess the thread of most of what AI now entails is about are we not moving fast enough or are we moving so fast that we’re not really able to own the operational consequences of what we’re trying to do as well, setting out to do as well. So great point on that Prithiva. I’ll circle back to you time permitting. Let’s see how best we can get back. Dr. Satya calling you in here. So aviation volumes, landscape, scale, I mean you name it, it’s all there. So how are we really looking at balancing AI driven innovation really? Where you’re looking at regulation, you’re looking at accountability, you’re looking at operational efficiency. At the same time, you cannot really compromise on user and customer experience.

How do these things really fall in place in terms of vision and metrics?

Dr. Satya Ramaswamy

Thank you, Shantiri. Since the audience is international, real quick introduction about Air India. Air India is India’s national flag carrier. We operate about 300 aircraft, and we carry about more than 100 ,000 customers a day. And we have a few hundred airplanes on order. So once they are delivered, we will be one of the biggest airlines in the world on the size of one of the large three American carriers. So we are building it up to an airline of scale, and it brings about very interesting challenges that we talked about. So let me illustrate the way we handle it with one of our own examples in generative AI. So in May of 2023, we launched the global airline industry’s very first generative AI virtual assistant out of India.

So it was a global first in the whole airline industry. Today it handles. It has handled about 13 .5 million queries so far from customers. about 40 ,000 queries a day and it operates at a cost which is 100 per query of a contact center and if you look at the customer preferences over a period of last two and a half years we have been operating this facing all the challenges you mentioned so from a customer preference perspective 50 % of the contact volume goes to the contact center they want to talk to a human agent the remaining 50 % of the contact volume comes to A .G out of which it handles 97 % of the queries autonomously only 3 % are escalated further to the agent so pretty high success rate and we faced this challenge from day one we started working on it in November 2022 when we were the whole Azure OpenA services are available we started working on it and the whole approach to responsibility safety has evolved over a period of time so if you dial the safety knob too much then it is an inconvenience to the customer we practically cannot answer any question because the customers are always changing the way they ask certain thing and we have to be very flexible and clearly Generative AI takes us a large step towards that.

At the same time we don’t want any jailbreak to happen we don’t want problem injection to happen we don’t want any inappropriate thing to happen so we are watching the whole performance of the Virtual Assistant A .G as we call it all the time. So we use in fact Generative AI to watch the performance of the Generative AI chatbot and also we have given the voice to the customer so at the end of the day when we send a response we also ask the customer did it answer your question and also allow them to give their reactions is it appropriate, inappropriate and thankfully over the last 20 years it has not answered one single question in an inappropriate fashion because we have embedded all the safety procedures all deep into the way that we handle it but now we are as the technologies are maturing for example we now have interesting technologies in terms of the prompt firewalls where we can centralize all these controls and obviously work with great partners like Adobe who do their diligence and the way that they have deployed some of these technologies, giving full indemnity to us in the event of a problem.

That gives a lot of confidence in the way that we manage the risk. So it’s about managing the risk of something that is not within the bounds happening versus the convenience of the customer and we handle it in a variety of ways like I talked about just now.

Shantari Malaya

Excellent. And given the kind of scale you’re operating at, I think every day is a new day.

Dr. Satya Ramaswamy

Yes, it is. We face challenges. There is new, brand new every day.

Shantari Malaya

Absolutely well stated. Thank you, Dr. Satya. Vishal, may I kind of call you into the discussion? We’re waiting to hear about you. Again, what do I say? NPCI, you know, largest, you know, digital payments, infrastructure platforms. you kind of call the shots in terms of taking some of those calls, you know, for want of a better coinage, you know, in terms of how the payments systems in this country move. So two quick questions here or rather sort of, you know, phrase it into one so that, you know, we can get a comprehensive view from you. How are you really looking at AI in terms of really, you know, being inclusive, you know, in trying to ensure fairness when it comes to two parts?

One is when you’re looking at how India can play an important part in creating a responsible AI by design for a digital, national digital infrastructure platform such as yours. And B, how are we really looking at, given the volume, scale, size, fraud also becomes an unfortunate part of the entire, you know, discussion. So how are we really looking at AI to be? Fair at the same time proactive and detective when it comes to looking at fraud. So how what are the aspects that you look at? keenly here

Vishal Anand Kanwati

I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high. I think that was the first principles on which we started. But over a period of time, once we had more data, once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So I think this was the fundamental principles when, you know, I mean, and then once we started with this success, we were able to understand the customers better, you know, their patterns better. And that gave us a lot of insights into, you know, fine tuning the models and taking it forward.

Absolutely. So coming to the first question, you know, that you asked, I think, you know, I think there are obviously the governance principles are core to it. But I think I would like to call out two of the things that is there. One is a transparency. If a customer has a transaction that is failed, I think you should know why it has been failed. and today we build a small language model where you can go and actually chat and say what happened to this transaction why is it declined and you know even if it is declined due to a fraudulent transactions uh you know due to a suspicious activity we can actually tell him today saying that you know this is where we feel you know you normally do you know don’t send this transaction or you don’t scan a qr ever this is the first time you’re doing so this is the reason why we have sort of declined so this level of transparency and ensuring there’s a person you know obviously we can’t you know have the army of people sitting and answering these questions but building systems and answering those questions is very very important and i think we have a beautiful framework rba has also given the framework from a responsible ai meaty document is fairly comprehensive so i think all the principles have to be adopted there is absolutely no choice for us um and and i think i i don’t see it as a challenge at all because in our experience it’s been very very helpful to obviously ensure the trust in the payment system is not compromised

Shantari Malaya

absolutely and also the fact that you know as you said given the scale that you’re operating in I’m also itching to ask you something but maybe I’ll pick your brains offline in terms of the human in the loop there but yeah that’s a discussion for another time so Pratibha curious to know here so responsible AI while you know in letter and spirit remains there do you think it kind of remains or rather getting relegated to the risk of becoming something of an enterprise large enterprise luxury is it able to cut across and come down the line to you know aspiring businesses growing businesses MSMEs is it able to cut through the noise what’s the responsibility of industry leaders and the larger enterprises in kind of harmonizing the framework that can define responsible AI How would you look at this?

Is it getting risked?

Prativa Mohapatra

Absolutely, yes. I think we stand in that time when on the creators of the AI technology, the big guys versus the small guys, that divide might just become very stark. Coming down to the users of the AI, which is the big enterprises and the MSMEs who are in a big rush to make profit, do something, so that divide can happen. Hence, it is responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It’s a tongue twister, but I think the responsibility is very, very big right now. Again, that Adobe’s example, while the entire AI big bang started happening after November 2022, so early 2023 -2024, but I think our models were there, or our content authentication initiative was from 2019.

So… I think that large enterprises who create technologies absolutely are responsible. And those frameworks now being taken up by many more is again an absolutely act of responsibility to back to the business. And so the creators of these technologies have to come together and keep on creating this method and methodology for others to adopt. Now the users of these enterprise AI -grade technologies. It’s very hard. I mean, I think 10 years back we had digital transformation. Now we are having AI transformation. So the big companies have to quickly create a new org structure, have to create the legal teams, which, by the way, had to just mull over the digital guidelines of various continents, countries, now have to go through the AI guidelines of countries.

So you have to infuse more people into that. Legal teams. So small organizations cannot do that. So the people, process, technology changes that is required to adopt this, big guys can maneuver, shift people, oh, we will take out people from here, put there. The MSMEs don’t have that luxury. So I guess it is creators have to create frameworks so the right technology is created. The users, the big guys, have to quickly tell the methodology. And then the other stakeholders, like the service providers, also quickly have to go from, like I come from this industry where we had custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?

Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since AI, as we have been hearing words like it’s a civilization change, it is a… similar to electricity and steam, it will change everything. So because of the impact… at society level to each one of us, the governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem.

Shantari Malaya

Absolutely. Very rightly stated about the fact that there is a larger collective responsibility from the bigger players in trying to define the standards. I think it’s very critical. Amol, if I may ask you, like Pratibha said that there is an accountability. MSMEs are growing and there is also policy that supports the growth at this point of time, right? So in their hurry to really scale and innovate, they are often forgetting what guardrails and consequences they have to face, you know, when it comes to their AI policies, strategies, implementations. So what’s the role of the ecosystem, the industry, industry bodies and the entire ecosystem at large in helping responsible AI moving in letter and spirit?

Amol Deshpande

Shantari, I think the first step towards being responsible towards anything is awareness, right? So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for that. That’s the first thing. Second is comes the action part of it, awareness, action, and then you demonstrate it through your product services or whatever you are trying to create and generate that kind of impact. How does that allocate? I echo the sentiment which Pravita has mentioned here is in terms of big players have to come up with those frameworks. Those frameworks need to get translated and industry partnership is a very key thing here through the industry bodies, right?

Where the learnings have to be disseminated into this. Second is it’s more of a demand and supply kind of a thing. So if the supply is there with the kind of right guardrails and responsible aspects which are there as a part of framework, then naturally the supplier starts aligning to it. For a business like us where we deal from, infrastructure to healthcare. and IT to agriculture, tires. See, it is a very diverse element and there is a different kind of templates which we need to do so. Organizations like us have the responsibilities of creating a framework which will be fair to us as well as to our customers and partners and those learnings can be shared across the value chain where MSMEs may not have access to that kind of an information and that to domain specific.

Mind you, this would change. It’s not like this one guardrails construct will work for everybody, but it would vary from industry to industry, function to function, and that kind of a cascading through the industry bodies like FICCI and others is, I think, very, very critical.

Shantari Malaya

Than you for that, Amol. Dr. Satya, if, you know, as we know now that there are enough global regulations or rules or recommendations that have come in. You have the EU AI Act, you have UNESCO’s recommendations, you have the OECD rules, I mean, the principles, so on and so forth. And India is also kind of inching towards developing its own strategies, policies, and approaches at this point of time. so the real leadership question at this point of time that remains is how are we looking at marrying global best practices with the diversity the scale, the fire in the belly that India has at this point of time we are really gearing to go but how are we really looking at it and besides of course we have a lot of domestic regulations industry wise as well, we have regulators we have even the DPDP Act, we have so many things that have come in how are we going to marry all of this and create harmony

Dr. Satya Ramaswamy

Absolutely, I think taking Air India, we are an international airline so we operate in many countries for example we go to North America US, where the federal aviation regulation is the key regulator, then we go to Europe all places in Europe, then obviously we operate in India where the DGCA is the regulator and they are doing a great job overseeing this industry and likewise in other parts of the world so by nature we are geared to looking at the regulation in all parts of the world in all parts of the world and we are looking at the regulation and we are looking at the regulation and we are looking at the regulation and being in compliance.

And our customers are international, and when we operate in this international geographies, we have to comply with the appropriate regulations. And again, you know, aviation is a very safety -critical industry, right? So what we do has a direct impact on the safety of the customers that we carry. And the notions are well embedded in the industry because of this, because it’s highly regulated. For example, even, you know, many of these planes can practically land themselves, right? I was in a simulator last week for an Airbus A320 and landing that plane in San Francisco airport. And as we were coming in, the plane was set at seven miles from touchdown, and, you know, my trainer pilot gave me the control.

And, you know, so I could run the plane practically on autopilot all the way. At the same time, you know, there is a red button in the joystick, so at any moment I feel that the airplane is not doing the right thing. the autopilot control is not in the right thing and quickly cancel and take our control, right? So that is, this concept is well embedded in the airline industry. So we know that we need to obey the regulations and do the right thing that is safe for the customer, but we also help the human in the loop take control if at any moment we feel the safety is at risk. So bottom line, you know, we comply with all the regulations, and it doesn’t in any way constrain Indian innovation.

For example, again, like I mentioned, we launched the global airline industry’s first -generation virtual agent out of India, and we have not had any challenge with any of the regulations around because we comply with all the regulations and we work with partners who approach it

Shantari Malaya

So, Vishal, taking a thread from what Dr. Satya said, I’ll close with you here. Is industry -led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in some other related industries where they said, you know, it’s very difficult to answer this. But, yeah, self -regulation may be a way forward given the scale that we are operating on. I’d like to know your thoughts here.

Vishal Anand Kanwati

Yeah, I think definitely the regulations are required, especially because AI can go berserk. And, you know, there are, I think, like I gave you the example on the transactions, you know. You know, today all the UPI transactions can get declined, you know. And that’s where we have a check where we say, you know, this is the only percentage that I can decline, even if I have to let go of the other transactions, right. So those safeguards are very much, very much required. And when this has to be across the ecosystem, I think, you know, the regulations are mandatory. And obviously it has to be consulted and we have to work with everyone. But it’s important.

While. All of us realize. it’s a great opportunity the innovation can really scale up I think regulation is one thing that I think we have to really take it as part of the initiatives, embed into our systems and then take it forward otherwise the chances of this having a challenge to us is really high

Shantari Malaya

Fair enough, I think in an economy that is maturing regulatory intervention is an inevitability that we must welcome at some level so great discussion, I think this was fantastic, itching to ask you more but I think we’ll have to call to a close this discussion, thank you so much let’s put our hands together to our esteemed panelists this was really nice may I now invite Ms. Sarika Guliani who is the Senior Director, Head AI Technology in Industry 4 .0, Communications, Mobile Manufacturing and Language Technologies at FICCI Thank you so much Sarika, please

Sarika Guliani

first of all what an insightful session I would say that if I start capturing the thoughts I think 2 minutes and 36 seconds would not justify that one but overall if I talk about it starting with Andy you mentioning about the initiatives what Eduvi was able to take in through and how you are responsibly developing the content. Pratibha talking about the art which is really interesting whether you talk about accountability responsibility and transparency and of course Amol mentioning about all the five layers and how the of course the responsible development of AI and the permutation needs to be done. Dr. Satya no second thoughts to it and that again goes to the NPCI also the kind of work which the national carrier of India is doing it or NPCI is handling it it has to be a balance of the responsible AI and the efficiency and actually the act which can be taken thank you so we left it on the note that what regulation is required while that’s a sentence would require another session on this because they would have a people who will talk about a light touch regulation versus a balanced regulation to something I’ll say that as part of Vicky and as part of the discussions what we were hearing it here we feel it that responsibility is not anymore a compliance check which is supposed to be there it’s a commitment of the technology that we should develop it which has a shared human values that is something what decisions we take now not in terms of the words what we are discussing here is not going to define our future it’s not something which is what we create we define what we choose to create is something what will get defined that is something is very important so the choice comes out from the whole thing you have heard the panelist talking about whether it was from the input side to the output side give a very good example by taking it through the whole process it needs to be taken so we simply feel it whether it’s any layer it has to be developed in a way keeping people in mind and the theme of the summit people planet and progress should be kept in mind while doing any technological innovation which is keeping the principles of responsible AI into the mind.

That is something which we highly feel it and support it and I think with that I like to thank our esteemed panelists. Thank you Andy of course for joining us today. Chantri for moderating it and well capturing in time. And of course our panelists Dr. Satya Ramaswamy, Chief Digital and Technology Officer Air India. Mr. Amol Deshpande, Chief Digital Officer Head of Innovation RPG Group. Mr. Vishal Kanwati, CTO, NPCI Ms. Pratibha Mohapatra, Managing Director Adobe and the of course my lovely audience through which we could sail through this and as part of FICCI we are thankful that we could have this joint session with Adobe, the team of Adobe and Nita and Nanya who had worked with my team and the people at the background who delivered it.

So thank you all for joining us. We don’t end the discussion here. We end the session here. FICCI is committed to get this dialogue further into action with the support of the players. And we look forward to your joining in that time. Thank you. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (41)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“2026 will be the year responsible AI becomes both a duty and an innovation opportunity.”

The knowledge base notes that by 2026 questions of AI responsibility and trust will move from after-thoughts to central concerns, and AI is expected to reshape management and organisational design that year, confirming the report’s view of 2026 as a pivotal moment [S104] and [S105].

Confirmedhigh

“India’s new IT rules on Self‑Generated‑Content (SGI) require transparency in AI‑generated content.”

India’s Synthetic and Generated Intelligence (SGI) regulations have been announced, mandating transparency so users can distinguish AI-generated content, matching the report’s description of the SGI rules [S108].

Confirmedmedium

“The regulatory backdrop includes the EU AI Act.”

The EU AI Act is identified in the knowledge base as a key piece of AI regulation, confirming its presence in the regulatory landscape referenced by the speaker [S109].

Additional Contextmedium

“Trust, transparency and accountability are foundational for responsible AI deployment in India.”

Other sources stress that trust infrastructure is as critical as technical infrastructure and that accountability, transparency, rule of law and explainability are essential for AI governance, providing additional context to the claim [S59] and [S102].

Additional Contextlow

“Responsible AI must move from a slide‑deck concept to an auditable discipline.”

Discussion of AI governance in 2026 highlights the need for clear accountability mechanisms and auditable practices, adding nuance to the report’s framing of responsible AI as an auditable discipline [S104].

External Sources (113)
S1
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S2
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S4
Responsible AI in India Leadership Ethics & Global Impact part1_2 — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S5
Responsible AI in India Leadership Ethics & Global Impact — -Vishal Anand Kanwati- Chief Technology Officer, National Payments Corporation of India (NPCI)
S6
Responsible AI in India Leadership Ethics & Global Impact part1_2 — The discussion concluded with Sarika Guliani from FICCI emphasising that “responsibility is not anymore a compliance che…
S7
Responsible AI in India Leadership Ethics & Global Impact — The session concluded with FICCI’s commitment to continue translating discussions into actionable frameworks. Sarika Gul…
S8
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact-part1_2 — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S9
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Dr. Satya Ramaswamy- Vishal Anand Kanvaty – Vishal Anand Kanvaty- Dr. Satya Ramaswamy Dr. Satya focuses on balancing…
S10
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S11
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact-part1_2 — So we’re fortunate to have leaders from Air India and PCI, RPCI, and the U.S. Department of Defense, and Adobe. each of …
S12
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — So we’re fortunate to have leaders from. Air India, PCI, RPG Group, and Adobe. each of whom is navigating and translatin…
S13
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me qu…
S14
Responsible AI in India Leadership Ethics & Global Impact — -Prativa Mohapatra- Vice President and Managing Director of Adobe India
S15
Driving U.S. Innovation in Artificial Intelligence — 2. Amy Cohen – Executive Director, National Association of State Election Directors 3. Andy Parsons – Senior Director of…
S16
Responsible AI in India Leadership Ethics & Global Impact — -Andy Parsons- Global Head for Content Authenticity at Adobe, runs the Content Authenticity Initiative at Adobe
S17
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — And last, enterprises. Like many of yours in this room, I’m sure you’ve all heard the phrase, that are willing and excit…
S18
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S19
Responsible AI in India Leadership Ethics & Global Impact — – Andy Parsons- Amol Deshpande – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S20
Opening of the session — This position was supported by multiple delegations (Switzerland, Australia, Canada) and created a clear divide with cou…
S21
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges – **Charlie Hal…
S22
Certifying humanity: Labeling content amid AI flood — These debates are no longer theoretical. Provenance-based initiatives such as theContent Authenticity Initiative (C2PA),…
S23
High-level AI Standards panel — Need for Enhanced Collaboration Among Standards Organizations The UK government advocates for an open, inclusive, multi…
S24
Closing the Governance Gaps: New Paradigms for a Safer DNS — Although regulation in the DNS industry is inevitable, it should aim to avoid fragmented jurisdictional approaches. If t…
S25
https://dig.watch/event/india-ai-impact-summit-2026/why-science-metters-in-global-ai-governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S26
Global Enterprises Show How to Scale Responsible AI — High level of consensus on core principles with nuanced differences in implementation approaches. This suggests a maturi…
S27
Closing remarks – Charting the path forward — ### From Principles to Practice A central theme was the need to move beyond abstract principles toward concrete impleme…
S28
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S29
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Clara Neppel:Thank you for having me here, it’s a pleasure. So as it comes to the polls, the question is of course what …
S30
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — These key comments shaped the discussion by highlighting the complexity of implementing ethical AI across different regu…
S31
AI That Empowers Safety Growth and Social Inclusion in Action — Despite sophisticated frameworks and governance structures, significant implementation challenges remain. The gap betwee…
S32
Laying the foundations for AI governance — Low to moderate disagreement level. The speakers largely agreed on problem identification but differed on solutions and …
S33
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Auke Pals: Thank you very much, Lisa. So I hope at this point, the EU AI Act could steer us in the right direction a…
S34
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — Boosting standardization process can establish a strong lay of requirements By focusing on education, industry collabor…
S35
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — – Moe Ba- Ke Wang- Li Tian- John OMO Bocar Ba emphasized the necessity of creating unified policy frameworks that work …
S36
Unlocking potential: Addressing inclusivity barriers in e-commerce trade to deliver sustainable impact in communities everywhere (United Kingdom) — To effectively support MSMEs, GSMA emphasizes the need for greater coordination between the private and public sectors. …
S37
Enabling trade inclusion for MSMEs, women and underrepresented communities through the postal network (UPU)- UPU TradePost Forum — However, women’s representation and empowerment in MSMEs are still limited. Currently, women are sole owners of only aro…
S38
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S39
Building Indias Digital and Industrial Future with AI — This comment shifted the discussion from abstract policy concepts to concrete technical and operational realities. It pr…
S40
WS #123 Responsible AI in Security Governance Risks and Innovation — These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, a…
S41
Responsible AI in India Leadership Ethics & Global Impact part1_2 — This set the foundational tone for the entire panel discussion, moving away from abstract principles to practical implem…
S42
Main Topic 3 –  Identification of AI generated content — A pervasive sentiment of distrust could potentially undermine democratic integrity by challenging its intrinsic structur…
S43
Certifying humanity: Labeling content amid AI flood — As a result, trust is no longer formed through close inspection. Few readers have the time, expertise, or tools to verif…
S44
Skilling and Education in AI — “Five second response, I think the one action that we need to take is improve the trust infrastructure and make sure tha…
S45
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — This comment shifted the discussion from high-level policy frameworks to concrete, relatable concerns. It established a …
S46
Responsible AI in India Leadership Ethics & Global Impact — Regulation should be viewed as a catalyst for good practices rather than just reactive compliance
S47
Keynote by Uday Shankar Vice Chairman_JioStar India — Policy frameworks should reflect India’s unique ambitions and avoid wholesale adoption of Western regulatory constructs,…
S48
Can we test for trust? The verification challenge in AI — Moderate to high disagreement with significant implications. The fundamental disagreement between Yampolskiy’s pessimist…
S49
Artificial intelligence (AI) – UN Security Council — Moreover, the lack of transparency can erode public trust. If people cannot see or understand how decisions affecting th…
S50
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Jibu Elias: Thank you very much, Yoichi-san. It’s an honor to be here to share my experience building a responsible AI e…
S51
Host Country Open Stage — Context-specific solutions are essential rather than one-size-fits-all approaches
S52
Building the Next Wave of AI_ Responsible Frameworks & Standards — Moderate disagreement level with significant implications for AI deployment strategies. While all speakers agreed on the…
S53
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S54
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — – Ioanna Ntinou- Mark Gachara Example of energy efficiency passes for houses in Germany and EU that are obligatory, mak…
S55
[Parliamentary session 1] Digital deceit: The societal impact of online mis- and disinformation — As users increasingly access information through AI, it’s essential to help them critically assess these tools and under…
S56
Laying the foundations for AI governance — – Industry attitudes toward regulation: whether companies genuinely want regulation or resist it due to power concentrat…
S57
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — The discussion revealed surprisingly few direct disagreements among speakers, with most conflicts being implicit rather …
S58
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Amandeep Singh Gill: Thank you so much, Jovan, and thank you to you, Diplo Foundation, and its partners for convening th…
S59
Driving Indias AI Future Growth Innovation and Impact — “Investment also includes energy infrastructure, because without energy, there is really no compute infrastructure you c…
S60
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S61
Closing remarks – Charting the path forward — ### From Principles to Practice A central theme was the need to move beyond abstract principles toward concrete impleme…
S62
High Level Session 1: Losing the Information Space? Ensuring Human Rights and Resilient Societies in the Age of Big Tech — This provides a concrete, real-world example of how radical transparency can work in practice, moving beyond theoretical…
S63
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S64
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — These key comments shaped the discussion by highlighting the complexity of implementing ethical AI across different regu…
S65
Responsible AI in India Leadership Ethics & Global Impact — “So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for …
S66
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges – **Charlie Hal…
S67
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Because how you do it is by feeding. Because AI is all about the input and output. So the input has to be something whic…
S68
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Nadja Blagojevic: Yes, very happy to. And thank you so much for having Google here. We’re very happy to be speaking with…
S69
Global Enterprises Show How to Scale Responsible AI — The implementation challenge extends beyond organisational commitment to practical tooling and automation. Gurnani empha…
S70
AI That Empowers Safety Growth and Social Inclusion in Action — Despite sophisticated frameworks and governance structures, significant implementation challenges remain. The gap betwee…
S71
Laying the foundations for AI governance — Low to moderate disagreement level. The speakers largely agreed on problem identification but differed on solutions and …
S72
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Galia :Three minutes? One minute each, three minutes together. But just to say, I think this has been a really, really r…
S73
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Auke Pals: Thank you very much, Lisa. So I hope at this point, the EU AI Act could steer us in the right direction a…
S74
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — Boosting standardization process can establish a strong lay of requirements By focusing on education, industry collabor…
S75
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — ### SME Criticality and Transformation Urgency Bocar Ba: I think I don’t have time to be controversial, but I don’t lik…
S76
Unlocking potential: Addressing inclusivity barriers in e-commerce trade to deliver sustainable impact in communities everywhere (United Kingdom) — Collaboration is emphasized as crucial for progress in Africa, specifically in facilitating cross-border payments, which…
S77
Enabling trade inclusion for MSMEs, women and underrepresented communities through the postal network (UPU)- UPU TradePost Forum — However, women’s representation and empowerment in MSMEs are still limited. Currently, women are sole owners of only aro…
S78
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S79
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S80
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S81
Building the Next Wave of AI_ Responsible Frameworks & Standards — The discussion maintained a consistently collaborative and solution-oriented tone throughout. It began with an authorita…
S82
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S83
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S84
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S85
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S86
WS #6 Bridging Digital Gaps in Agriculture & trade Transformation — The tone was largely optimistic and solution-oriented. Speakers were enthusiastic about the potential of the Internet Ba…
S87
Day 0 Event #257 Enhancing Data Governance in the Public Sector — The discussion maintained a pragmatic and collaborative tone throughout, with speakers acknowledging both opportunities …
S88
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — The discussion began with a technology-focused, optimistic tone about AI’s transformative potential but gradually shifte…
S89
Quantum Technologies: Navigating the Path from Promise to Practice — The discussion unfolded against a backdrop of significant global investment exceeding $40 billion in quantum technologie…
S90
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S91
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began with an optimistic, exploratory tone as panelists shared different models and success stories. The …
S92
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S93
WS #106 Promoting Responsible Internet Practices in Infrastructure — The discussion maintained a collaborative and constructive tone throughout, with participants showing mutual respect and…
S94
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — The discussion maintained a collaborative and optimistic tone throughout, with panelists demonstrating mutual respect an…
S95
Dynamic Coalition Collaborative Session — The discussion maintained a collaborative and constructive tone throughout, with participants showing mutual respect and…
S96
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S97
High-level SIDS Ministerial Dialogue: Key Challenges and Opportunities — Concluding the address, the speaker alluded to further information that remained unshared due to time constraints. They …
S98
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S99
Open Forum #58 Safety of journalists online — The tone of the discussion was initially somber when describing the serious threats journalists face, but became more co…
S100
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S101
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S102
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — The accountability mechanisms, transparency, rule of law, and explainability are crucial
S103
Open Forum #30 High Level Review of AI Governance Including the Discussion — Melinda Claybaugh: Thank you so much for the question, and thank you for the opportunity to be here. As you were giving …
S104
AI in 2026: Learning to live with powerful systems — Early deployments of AI were often marked by ambiguity. Who is responsible when an automated system produces an error? H…
S105
How AI in 2026 will transform management roles and organisational design — In 2026, AI will transform management structures and automate tasks as companies strive to demonstrate real value. By 20…
S106
Keynote-Alexandr Wang — “We publish model cards and evaluation benchmarks and data so you can see how they work, their intended use, and how we …
S107
Toward Collective Action_ Roundtable on Safe & Trusted AI — Professor Jonathan Shock warned against the “Silicon Valley approach of move fast and break things” when dealing with go…
S108
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — India’s regulatory approach has gained unexpected international acceptance, with the new Synthetic and Generated Intelli…
S109
Stricter rules and prohibited practices: Unveiling the EU AI Act’s regulatory framework — The AI Act, legislation aimed at regulating the use of AI and preventing its harmful effects,has received approval from …
S110
WS #203 Protecting Children From Online Sexual Exploitation Including Livestreaming Spaces Technology Policy and Prevention — ## Alarming Statistics on Self-Generated Content Key themes that emerged included the need for better age assurance mec…
S111
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — I mean, I think that it’s exacerbated according to the data. The only thing that I can tell you is that trust has been e…
S112
Generative AI presents the biggest data-risk challenge in history — Cybersecurity specialistswarnthat generative AI systems, such as large language models, are creating a data risk frontie…
S113
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Andy Parsons
5 arguments191 words per minute2021 words632 seconds
Argument 1
Shift to provable practice rather than abstract principles
EXPLANATION
Andy emphasizes that responsible AI must move from high‑level principles to demonstrable, operational practices that can be verified. He frames this shift as essential for enterprises to prove they are acting responsibly.
EVIDENCE
He notes that the discussion theme is “shift from principles to provable practice” and asks whether systems can actually prove responsible AI, highlighting the need for evidence rather than just policy statements [34-35][31]. He also stresses that “you need standards, not just principles” to move beyond theory [109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes that “you need standards, not just principles” and frames regulation as a catalyst for moving from high-level principles to demonstrable practice [S5]; this supports the shift to provable practice.
MAJOR DISCUSSION POINT
From principles to provable practice
AGREED WITH
Prativa Mohapatra, Sarika Guliani, Amol Deshpande
Argument 2
C2PA content credentials provide an open, interoperable standard for provenance and authenticity
EXPLANATION
Andy describes the C2PA content credentials as an open, cross‑industry standard that attaches provenance metadata to media, enabling anyone to verify authenticity across platforms.
EVIDENCE
He explains that five years of work resulted in an open standard called the C2PA content credentials, visible as a symbol on LinkedIn, which provides transparent context for videos, audio, or images and is built on a cross-industry coalition [62-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources describe C2PA as an open, free, cross-industry standard for attaching cryptographically signed provenance metadata to media [S4] and highlight its broader adoption in the ecosystem [S21][S22].
MAJOR DISCUSSION POINT
Open standard for content provenance
AGREED WITH
Prativa Mohapatra, Vishal Anand Kanwati, Amol Deshpande
DISAGREED WITH
Amol Deshpande
Argument 3
Regulation acts as a catalyst; standards are needed beyond mere statements of intent
EXPLANATION
Andy argues that regulation should stimulate good practices, but merely publishing a responsible AI commitment is insufficient; concrete standards are required to achieve real impact.
EVIDENCE
He states that regulation, such as that in India, serves as a catalyst for good practices [107] and that “you need standards, not just principles” to move beyond a website commitment [109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulation is portrayed as a catalyst for good practices rather than reactive compliance, emphasizing the need for concrete standards [S5]; collaboration among standards bodies is also urged [S23].
MAJOR DISCUSSION POINT
Regulation as catalyst for standards
AGREED WITH
Amol Deshpande, Vishal Anand Kanwati, Shantari Malaya, Dr. Satya Ramaswamy
Argument 4
Metadata stripping by platforms and low consumer awareness hinder provenance adoption
EXPLANATION
Andy points out that many social media platforms remove metadata, reducing transparency, and that consumer awareness of provenance symbols is still very low, limiting adoption.
EVIDENCE
He notes that many platforms strip metadata when content is uploaded, and that consumer awareness is early, with users unfamiliar with the provenance pin and UI elements still developing [92-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Implementation challenges are highlighted, including platforms stripping metadata and low consumer awareness of provenance symbols [S5][S4].
MAJOR DISCUSSION POINT
Barriers to provenance adoption
Argument 5
Adobe’s Content Authenticity Initiative demonstrates baked‑in provenance across creative tools
EXPLANATION
Andy highlights Adobe’s approach of integrating provenance capabilities directly into core products rather than as add‑ons, creating a foundation for trusted AI content.
EVIDENCE
He recounts that Adobe decided five years ago to embed responsible AI via content transparency into tools like Photoshop and Premiere at their core, leading to the open C2PA standard now baked into products [58-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Adobe’s integration of provenance capabilities directly into core products like Photoshop and Premiere is documented, with the C2PA standard baked into these tools [S4]; Andy’s role as Global Head for Content Authenticity at Adobe is also noted [S5].
MAJOR DISCUSSION POINT
Baked‑in provenance in Adobe tools
A
Amol Deshpande
5 arguments180 words per minute758 words251 seconds
Argument 1
Responsible AI must be orchestrated across all AI layers with people, process, and governance
EXPLANATION
Amol stresses that responsible AI cannot be isolated to a single component; it must be coordinated across the five AI layers and involve people, processes, and governance structures.
EVIDENCE
He explains that responsibility spans all five AI layers and requires orchestration of technology, people, and governance, noting the need for agility, skill-building, and guardrails across the enterprise [162-177].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for orchestration of technology, people, and governance across AI layers is emphasized in multiple sources discussing enterprise-wide AI orchestration [S8][S10] and Deshpande’s framework for scalable playgrounds, people development, and governance [S4].
MAJOR DISCUSSION POINT
Holistic orchestration of responsible AI
AGREED WITH
Shantari Malaya
Argument 2
Industry bodies must disseminate and adopt such standards to avoid fragmented compliance
EXPLANATION
Amol argues that industry consortia should share and promote open standards so that compliance does not become siloed or inconsistent across sectors.
EVIDENCE
He emphasizes that industry partnership is key for disseminating frameworks, sharing learnings through bodies like FICCI, and preventing fragmented compliance, noting that templates must be adapted per industry but shared widely [328-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for enhanced collaboration among standards organisations to prevent fragmented approaches are made in the standards-collaboration report [S23] and in discussions about avoiding fragmented jurisdictional regulation [S24].
MAJOR DISCUSSION POINT
Standard dissemination via industry bodies
AGREED WITH
Andy Parsons, Vishal Anand Kanwati, Shantari Malaya, Dr. Satya Ramaswamy
DISAGREED WITH
Vishal Anand Kanwati
Argument 3
Industry consortia and bodies (e.g., FICCI, C2PA) should share frameworks and best‑practice templates
EXPLANATION
Amol calls for collaborative ecosystems where industry groups provide reusable templates and best‑practice guides, enabling smaller players to adopt responsible AI.
EVIDENCE
He describes a demand-supply model where suppliers provide guard-rails and frameworks, which are then shared across the value chain through industry bodies, ensuring diverse sectors receive appropriate guidance [330-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of multi-stakeholder ecosystems for sharing best-practice templates and frameworks is highlighted in the standards collaboration panel [S23] and in the consensus-building analysis that notes nuanced implementation across sectors [S26].
MAJOR DISCUSSION POINT
Sharing best‑practice templates
Argument 4
Need for industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles
EXPLANATION
Amol notes that a single set of guardrails cannot suit every industry; tailored templates are required, but they should all rest on shared responsible‑AI foundations.
EVIDENCE
He repeats that “one size doesn’t fit all” and stresses the need for industry-specific templates that can be cascaded through bodies like FICCI while preserving core principles [180-182][328-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence of industry-specific templates built on shared responsible-AI foundations is provided in the analysis of sector-specific needs and core principle consensus [S26] and in the description of templates that vary by industry while preserving core guardrails [S4].
MAJOR DISCUSSION POINT
Industry‑specific responsible AI templates
DISAGREED WITH
Andy Parsons
Argument 5
RPG Group emphasizes a “bring‑your‑own‑AI” model with scalable, guarded orchestration
EXPLANATION
Amol describes RPG’s approach of allowing each function to adopt its own AI solutions within a common governance framework, ensuring scalability and safety.
EVIDENCE
He uses the phrase “bring your own AI” to illustrate that no single solution fits all, and highlights the need for scalable, safe environments with guardrails that can be practiced across the diverse RPG group [178-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RPG “bring-your-own-AI” approach, allowing business units to adopt AI within a common governance framework, is described in the session summary [S5] and aligns with Deshpande’s scalable playgrounds framework [S4].
MAJOR DISCUSSION POINT
Bring‑your‑own‑AI orchestration
P
Prativa Mohapatra
3 arguments155 words per minute1118 words432 seconds
Argument 1
Embedding “ART” (Accountability, Responsibility, Transparency) into product development and lifecycle
EXPLANATION
Prativa outlines Adobe’s internal AI governance model called ART, which embeds accountability, responsibility and transparency into every product’s development process.
EVIDENCE
She states that the first practice of AI governance at Adobe is “ART”-accountability, responsibility and transparency-and that every new product follows a secure methodology with hundreds of steps embedding these principles [196-207].
MAJOR DISCUSSION POINT
ART governance framework
AGREED WITH
Andy Parsons, Sarika Guliani, Amol Deshpande
Argument 2
Adobe’s Firefly embeds provenance “nutrition labels” to guarantee lawful, traceable outputs
EXPLANATION
Prativa explains that Adobe’s generative AI tool Firefly automatically attaches provenance “nutrition labels” to generated content, ensuring legal compliance and traceability.
EVIDENCE
She notes that Firefly embeds content traditions and nutrition labels, so any output carries provenance information that helps enterprises avoid legal liability and verify the source of data and models used [208-212].
MAJOR DISCUSSION POINT
Provenance nutrition labels in Firefly
AGREED WITH
Andy Parsons, Vishal Anand Kanwati, Amol Deshpande
Argument 3
Small and medium enterprises lack resources for dedicated AI compliance teams; large firms must create reusable frameworks
EXPLANATION
Prativa points out that MSMEs cannot afford dedicated AI compliance structures, so larger enterprises need to develop reusable frameworks that can be shared or adapted by smaller players.
EVIDENCE
She observes that small organizations cannot set up dedicated legal and compliance teams for AI, whereas large firms can shift resources and create frameworks that can be disseminated, highlighting the disparity between big and small players [304-307].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion highlights the disparity between large enterprises that can develop reusable compliance frameworks and smaller firms that lack dedicated AI compliance resources, emphasizing the role of industry bodies in bridging this gap [S5][S26].
MAJOR DISCUSSION POINT
SME resource constraints for AI compliance
D
Dr. Satya Ramaswamy
4 arguments187 words per minute1064 words340 seconds
Argument 1
Deploying generative AI with safety guardrails, continuous monitoring, and human‑in‑the‑loop feedback
EXPLANATION
Satya describes Air India’s generative AI virtual assistant, which operates with adjustable safety settings, real‑time monitoring, and post‑interaction human feedback to ensure safe, reliable service.
EVIDENCE
He details the virtual assistant’s launch in May 2023, handling 13.5 million queries with 97 % autonomous resolution, using safety knobs to balance convenience and risk, and employing AI-based monitoring plus customer feedback on appropriateness to prevent jailbreaks or inappropriate responses [258-262].
MAJOR DISCUSSION POINT
Safety‑guarded generative AI assistant
Argument 2
Compliance with multiple global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation
EXPLANATION
Satya explains that Air India operates across many jurisdictions, complying with each region’s AI‑related regulations while still innovating, showing that regulation need not stifle progress.
EVIDENCE
He notes that Air India flies to North America, Europe and India, complying with the FAA, DGCA and other regulators, and asserts that meeting these rules does not constrain Indian innovation, citing the successful launch of the industry-first generative AI assistant [341-354].
MAJOR DISCUSSION POINT
Global regulatory compliance and innovation
AGREED WITH
Andy Parsons, Amol Deshpande, Vishal Anand Kanwati, Shantari Malaya
Argument 3
Guardrails must be balanced with user convenience; over‑restrictive safety reduces service quality
EXPLANATION
Satya warns that tightening safety controls too much can degrade customer experience, so a balance is needed between protection and usability.
EVIDENCE
He explains that dialing the safety knob too high makes the service inconvenient, and that the system must stay flexible to evolving customer queries while still preventing jailbreaks and inappropriate content [260-262].
MAJOR DISCUSSION POINT
Balancing safety guardrails and user experience
Argument 4
Air India’s generative AI virtual assistant handles millions of queries with 97 % autonomous resolution, backed by safety monitoring
EXPLANATION
Satya provides concrete performance metrics of the virtual assistant, demonstrating its scale, effectiveness, and the safety mechanisms that underpin its operation.
EVIDENCE
He cites that since its launch the assistant has processed about 13.5 million queries, averaging 40 000 per day, with a 97 % autonomous handling rate and continuous safety monitoring to prevent misuse [258-262].
MAJOR DISCUSSION POINT
Scale and effectiveness of AI assistant
V
Vishal Anand Kanwati
3 arguments184 words per minute584 words189 seconds
Argument 1
Ensuring transparency, fairness, and explainability in AI‑driven transaction decisions
EXPLANATION
Vishal stresses that customers should receive clear explanations for declined or flagged transactions, promoting fairness and trust in AI‑based fraud detection.
EVIDENCE
He describes a small language model that can chat with users to explain why a transaction was declined, providing transparency and helping users understand the decision, while also aiming to keep false-positive rates low [287-291].
MAJOR DISCUSSION POINT
Transparent AI decisions in payments
AGREED WITH
Andy Parsons, Prativa Mohapatra, Amol Deshpande
Argument 2
Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments
EXPLANATION
Vishal argues that regulations are necessary to keep AI systems from causing harm, especially given the high stakes of financial transactions.
EVIDENCE
He states that regulations are required because AI can “go berserk,” citing the need for safeguards to limit false positives and protect transaction integrity, and emphasizes that such rules must be embedded across the ecosystem [360-366].
MAJOR DISCUSSION POINT
Regulatory safeguards for payment AI
AGREED WITH
Andy Parsons, Amol Deshpande, Shantari Malaya, Dr. Satya Ramaswamy
DISAGREED WITH
Amol Deshpande
Argument 3
NPCI’s AI‑driven fraud detection provides transparent explanations for declined transactions while minimizing false positives
EXPLANATION
Vishal outlines NPCI’s approach of combining accuracy with explainability, ensuring that customers understand declines and that false‑positive rates remain low.
EVIDENCE
He explains that the system aims for low false-positive rates, improves accuracy over time through data and industry collaboration, and now offers a chat-based interface that tells users why a transaction was declined, reinforcing trust [280-291].
MAJOR DISCUSSION POINT
Transparent fraud detection at NPCI
S
Sarika Guliani
1 argument141 words per minute586 words249 seconds
Argument 1
Responsible AI is a value‑driven commitment, not just a compliance checkbox
EXPLANATION
Sarika asserts that responsible AI should be rooted in shared human values and ethical commitments rather than being treated merely as a regulatory formality.
EVIDENCE
She remarks that responsibility is no longer a compliance check but a technology commitment built on shared human values, emphasizing that decisions now define what we create rather than just following rules [372-376].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion stresses that responsibility is no longer a mere compliance check but a technology commitment built on shared human values [S5].
MAJOR DISCUSSION POINT
AI responsibility as value‑driven commitment
AGREED WITH
Andy Parsons, Prativa Mohapatra, Amol Deshpande
A
Announcer
2 arguments129 words per minute129 words59 seconds
Argument 1
AI is a powerful engine for innovation and productivity in India’s digital journey
EXPLANATION
The Announcer frames AI as a key driver that can accelerate India’s digital transformation and economic growth, positioning it as a defining moment for the nation.
EVIDENCE
The opening remarks state that India stands at a defining moment in its digital journey as AI becomes a powerful engine for innovation and productivity, highlighting the strategic importance of AI for the country [2].
MAJOR DISCUSSION POINT
AI as catalyst for national development
Argument 2
Responsible deployment of AI, grounded in trust, transparency, and accountability, is essential and non‑optional
EXPLANATION
The Announcer emphasizes that the real differentiator is not the speed of AI adoption but the responsibility with which it is deployed, insisting that trust, transparency and accountability must be foundational pillars.
EVIDENCE
The speaker contrasts rapid adoption with responsible deployment, stating that trust, transparency and accountability are no longer optional but foundational for AI initiatives [3-5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session highlights that trust, transparency and accountability are foundational pillars for AI, moving beyond optional compliance [S5].
MAJOR DISCUSSION POINT
Foundational pillars of responsible AI
S
Shantari Malaya
5 arguments160 words per minute1621 words605 seconds
Argument 1
Responsible AI principles must be translated into concrete enterprise strategy frameworks
EXPLANATION
Shantari argues that the value of responsible AI lies in its operationalization within companies, requiring clear strategies that embed fairness, accountability, transparency, privacy and inclusivity into business models.
EVIDENCE
She notes that building trustworthy and inclusive AI will be about how responsible AI principles are realistically translated into enterprise strategy frameworks and how organizations will go about it [144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to operationalise responsible AI within enterprise strategy frameworks is echoed in the orchestration literature and consensus-building reports on translating principles into practice [S8][S26].
MAJOR DISCUSSION POINT
Operationalizing responsible AI in enterprises
Argument 2
Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem
EXPLANATION
Shantari states that in a maturing economy, regulation will inevitably play a role and that stakeholders should view it as a positive catalyst rather than a barrier.
EVIDENCE
She remarks that regulatory intervention is an inevitability that must be welcomed at some level, indicating acceptance of regulation as part of the AI governance landscape [369-371].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulation is described as inevitable and should be viewed as a catalyst rather than a barrier, with recommendations to avoid fragmented regulatory approaches [S24][S23].
MAJOR DISCUSSION POINT
Regulation as a necessary component of AI governance
AGREED WITH
Amol Deshpande
Argument 3
One‑size‑fits‑all solutions are unsuitable; industry‑specific approaches are required
EXPLANATION
Shantari highlights that different sectors have distinct needs, praising the “bring‑your‑own‑AI” concept and emphasizing the necessity for tailored frameworks rather than uniform standards.
EVIDENCE
She acknowledges that “one size doesn’t fit all,” appreciates the “bring your own AI” coinage, and stresses the need for sector-specific solutions [186-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses stress that while core responsible-AI principles are shared, implementation must be sector-specific, reflecting nuanced differences across industries [S26][S4].
MAJOR DISCUSSION POINT
Need for sector‑specific responsible AI models
AGREED WITH
Amol Deshpande, Andy Parsons
Argument 4
The pace of AI adoption must be balanced with the ability to manage operational consequences
EXPLANATION
Shantari points out the tension between moving quickly with AI and ensuring organizations can own the operational risks and consequences of rapid deployment.
EVIDENCE
She reflects on whether the industry is moving too fast to own operational consequences, questioning the balance between speed and responsibility [241-245].
MAJOR DISCUSSION POINT
Balancing speed of AI adoption with operational responsibility
Argument 5
Industry bodies and ecosystems must help MSMEs adopt responsible AI frameworks
EXPLANATION
Shantari stresses that larger enterprises and industry consortia have a duty to create reusable, accessible frameworks so that small and medium businesses can implement responsible AI without prohibitive costs.
EVIDENCE
She asks Amol about the role of the ecosystem in helping responsible AI move from letter to spirit, highlighting the need for industry-led support for smaller players [316-321].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session notes that industry bodies have a duty to create reusable, accessible frameworks for MSMEs, reinforcing ecosystem support for smaller players [S5][S26].
MAJOR DISCUSSION POINT
Ecosystem support for MSME responsible AI adoption
Agreements
Agreement Points
Regulation is essential and can act as a catalyst for good practices, but standards are needed beyond mere principles
Speakers: Andy Parsons, Amol Deshpande, Vishal Anand Kanwati, Shantari Malaya, Dr. Satya Ramaswamy
Regulation acts as a catalyst; standards are needed beyond mere statements of intent Industry bodies must disseminate and adopt such standards to avoid fragmented compliance Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem Compliance with multiple global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation
All speakers agree that regulation will inevitably shape responsible AI and should be viewed as a catalyst rather than a barrier; however, merely publishing commitments is insufficient – concrete, interoperable standards are required to translate principles into practice [107][109][328-336][360-366][369-371][341-354].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with the view that regulation should be a catalyst for good practices rather than mere compliance, as expressed in recent Indian policy discussions and OECD-style frameworks [S46][S47].
Transparency and provenance must be embedded in AI systems and clearly communicated to users
Speakers: Andy Parsons, Prativa Mohapatra, Vishal Anand Kanwati, Amol Deshpande
C2PA content credentials provide an open, interoperable standard for provenance and authenticity Adobe’s Firefly embeds provenance “nutrition labels” to guarantee lawful, traceable outputs Ensuring transparency, fairness, and explainability in AI‑driven transaction decisions Responsible AI must be orchestrated across all AI layers with people, process, and governance
The panel concurs that AI-generated content and decisions need verifiable provenance and explainability; open standards like C2PA and built-in “nutrition labels” are examples, and platforms should avoid stripping metadata while providing clear reasons for AI-driven outcomes [62-66][92-98][208-212][287-291][162-177].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects calls for labeling AI-generated content and ensuring provenance to maintain public trust, echoed in UN and OECD reports on AI transparency [S42][S43][S55][S49].
One‑size‑fits‑all solutions are unsuitable; responsible AI frameworks must be industry‑specific and flexible
Speakers: Amol Deshpande, Shantari Malaya, Andy Parsons
One size doesn’t fit all; industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles One‑size‑fits‑all solutions are unsuitable; industry‑specific approaches are required It should not be owned by any one company; it should be standards‑based
Speakers stress that responsible AI cannot be a single monolithic solution; instead, sector-tailored templates and a “bring-your-own-AI” mindset are needed, underpinned by open, non-proprietary standards [180-183][186-188][70-71].
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with advocacy for context-specific standards rather than universal ones, highlighted in OECD and industry-led governance debates [S51][S57][S56].
Responsible AI must move from abstract principles to provable, operational practice embedded in products and processes
Speakers: Andy Parsons, Prativa Mohapatra, Sarika Guliani, Amol Deshpande
Shift to provable practice rather than abstract principles Embedding “ART” (Accountability, Responsibility, Transparency) into product development and lifecycle Responsible AI is a value‑driven commitment, not just a compliance checkbox Responsible AI must be orchestrated across all AI layers with people, process, and governance
All agree that responsible AI should be concretised through baked-in product features, governance frameworks and measurable provenance, moving beyond policy statements to demonstrable practice [31-35][196-207][372-376][162-177].
POLICY CONTEXT (KNOWLEDGE BASE)
Matches the ‘principles-to-practice’ shift emphasized in multiple panels and policy roadmaps [S38][S41][S40].
Awareness and capacity building are prerequisite steps for responsible AI adoption, especially for MSMEs
Speakers: Amol Deshpande, Shantari Malaya
Responsible AI must be orchestrated across all AI layers with people, process, and governance Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem
Both highlight that the first step is raising awareness and building skills across the ecosystem; industry bodies must help smaller firms acquire the needed capacity to implement responsible AI [322-326][241-245].
POLICY CONTEXT (KNOWLEDGE BASE)
Supported by capacity-building recommendations in AI policy roadmaps and South-South cooperation initiatives [S60][S58][S50].
Similar Viewpoints
Both assert that regulatory compliance does not hinder innovation; instead, regulation can drive the adoption of robust standards and good practices [107][109][341-354].
Speakers: Andy Parsons, Dr. Satya Ramaswamy
Regulation acts as a catalyst; standards are needed beyond mere statements of intent Compliance with multiple global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation
Both emphasize the necessity of sector‑specific responsible AI frameworks rather than a universal solution [180-183][186-188].
Speakers: Amol Deshpande, Shantari Malaya
One size doesn’t fit all; industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles One‑size‑fits‑all solutions are unsuitable; industry‑specific approaches are required
Both view responsible AI as a core value‑driven commitment that must be integrated into product lifecycles, not merely a compliance exercise [196-207][372-376].
Speakers: Prativa Mohapatra, Sarika Guliani
Embedding “ART” (Accountability, Responsibility, Transparency) into product development and lifecycle Responsible AI is a value‑driven commitment, not just a compliance checkbox
Unexpected Consensus
Balancing safety guardrails with user convenience and transparency across very different domains (aviation and digital payments)
Speakers: Dr. Satya Ramaswamy, Vishal Anand Kanwati
Guardrails must be balanced with user convenience; over‑restrictive safety reduces service quality Ensuring transparency, fairness, and explainability in AI‑driven transaction decisions
Despite operating in distinct sectors, both speakers converge on the need to calibrate AI safety controls so that they protect users without degrading experience, and to provide clear explanations for AI-driven outcomes [260-262][287-291].
POLICY CONTEXT (KNOWLEDGE BASE)
Highlights the tension between safety and usability noted in cross-sector discussions on AI governance, such as the aviation-payments analogy used in trust-infrastructure talks [S45][S54].
Overall Assessment

The discussion shows strong convergence among speakers on four pillars: (1) regulation as a catalyst paired with concrete standards; (2) transparency/provenance embedded in AI products; (3) the necessity of industry‑specific, flexible frameworks; (4) operationalising responsible AI through baked‑in product features and capacity building. These alignments cut across AI, data governance, and the enabling environment for digital development, indicating a mature consensus that can drive coordinated policy, standard‑setting and industry collaboration.

High consensus – most speakers echo each other’s positions, suggesting a unified industry stance that can facilitate rapid development of interoperable standards, supportive regulatory frameworks, and ecosystem‑wide capacity initiatives.

Differences
Different Viewpoints
Universality of open standards versus need for industry‑specific templates
Speakers: Andy Parsons, Amol Deshpande
C2PA content credentials provide an open, interoperable standard for provenance and authenticity Need for industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles
Andy advocates a cross-industry, open and free standard (C2PA) that should be adopted universally and not owned by any single company [62-66][70-71][109]. Amol counters that a single set of guardrails cannot suit every sector, insisting that “one size doesn’t fit all” and that templates must be tailored to each industry while still resting on shared principles [180-182][328-336].
POLICY CONTEXT (KNOWLEDGE BASE)
Debate mirrors the split observed in OECD workshops where open standards were weighed against sector-tailored templates [S51][S52].
Feasibility of industry‑led governance versus necessity of mandatory regulation
Speakers: Amol Deshpande, Vishal Anand Kanwati
Industry bodies must disseminate and adopt such standards to avoid fragmented compliance Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments
Amol emphasizes that awareness, action and industry-body partnerships can drive responsible AI without heavy regulatory imposition, proposing a demand-supply model where standards are shared through consortia [322-327][328-336]. Vishal argues that regulations are required because AI can “go berserk”, insisting that safeguards must be embedded across the ecosystem and that regulation is unavoidable [360-366].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects ongoing disagreement between industry-led and government-mandated approaches documented in recent AI governance forums [S56][S57][S46].
Perception of the trust crisis in AI‑generated content
Speakers: Andy Parsons, Dr. Satya Ramaswamy
The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves Air India’s generative AI assistant has not produced any inappropriate response in years, showing that the trust problem can be effectively managed
Andy stresses that a trust crisis is already evident in everyday media and business contexts, citing the proliferation of synthetic content and misinformation as real operational risks [38-40][42-44]. Satya, referencing Air India’s virtual assistant, claims that in over two years the system has never given an inappropriate answer, suggesting that the crisis is not as severe as portrayed [262].
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes concerns about public distrust of AI-generated media raised in UN and OECD panels on misinformation and content labeling [S42][S43][S45].
Unexpected Differences
Severity of the AI trust crisis
Speakers: Andy Parsons, Dr. Satya Ramaswamy
The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves Air India’s generative AI assistant has not produced any inappropriate response in years, showing that the trust problem can be effectively managed
Andy portrays a pervasive trust problem affecting media and businesses, while Satya points to his airline’s AI system that has operated without any inappropriate outputs, suggesting a much less acute crisis than Andy describes [38-40][42-44][262]
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with assessments of a deepening trust crisis in AI, cited in UN Security Council remarks and academic surveys on trust erosion [S49][S45][S42].
Overall Assessment

The panel largely shares a common vision of responsible AI as essential and sees regulation, standards, and industry collaboration as necessary. However, clear points of contention arise around whether a single open standard can serve all sectors versus the need for industry‑specific templates, the extent to which regulation should drive governance versus industry‑led self‑regulation, and how serious the current trust crisis truly is.

Moderate – while there is broad consensus on goals, the differing views on implementation pathways (universal standards vs sector‑specific frameworks; industry‑led governance vs mandatory regulation; perception of trust risk) could affect the speed and coherence of policy and product roll‑outs. These divergences suggest that coordinated multi‑stakeholder dialogue will be needed to reconcile approaches before large‑scale adoption can proceed smoothly.

Partial Agreements
All three concur that regulation is required for responsible AI, but Andy frames it as a catalyst to spur standards, Vishal stresses it as a mandatory safeguard, and Shantari views it as an inevitable component that should be embraced [107][109][360-366][369-371]
Speakers: Andy Parsons, Vishal Anand Kanwati, Shantari Malaya
Regulation acts as a catalyst; standards are needed beyond mere statements of intent Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem
All agree that smaller enterprises need support, but Amol emphasizes industry‑body distribution of templates, Prativa stresses large firms creating reusable frameworks, and Shantari calls for ecosystem‑wide assistance through bodies like FICCI [328-336][304-307][316-321]
Speakers: Amol Deshpande, Prativa Mohapatra, Shantari Malaya
Industry bodies must disseminate and adopt such standards to avoid fragmented compliance Small and medium enterprises lack resources for dedicated AI compliance teams; large firms must create reusable frameworks Industry bodies and ecosystems must help MSMEs adopt responsible AI frameworks
Takeaways
Key takeaways
Responsible AI must move from abstract principles to provable, operational practice across all AI layers (people, process, technology, governance). Open, interoperable standards such as the C2PA content credentials are essential for building trust and provenance in AI‑generated content. Embedding accountability, responsibility, and transparency (the “ART” framework) directly into product development cycles is a practical way to operationalise responsible AI. Continuous safety monitoring, guardrails, and human‑in‑the‑loop feedback are critical for generative AI deployments, especially in high‑risk domains like aviation and payments. Regulation is viewed as a catalyst; compliance with global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation when supported by industry standards. Challenges include metadata stripping, low consumer awareness, and the resource gap for MSMEs to build dedicated AI compliance capabilities. Sector‑specific implementations (Adobe’s Firefly provenance labels, RPG’s “bring‑your‑own‑AI” orchestration, Air India’s virtual assistant, NPCI’s transparent fraud‑detection) illustrate practical pathways.
Resolutions and action items
FICCI will continue to facilitate dialogue and drive collaborative actions on responsible AI among industry participants. Panelists and their organisations committed to share frameworks, templates, and best‑practice guidance through industry bodies (e.g., C2PA, FICCI). Adobe will promote wider adoption of C2PA credentials and embed provenance metadata in its product suite. Air India will maintain and enhance its safety monitoring and feedback loops for the generative AI virtual assistant. NPCI will expand its transparent AI‑driven fraud‑detection explanations and refine false‑positive rates. RPG Group will disseminate its scalable “bring‑your‑own‑AI” governance model to other enterprises and partners.
Unresolved issues
How to effectively raise consumer awareness and UI visibility of provenance symbols at scale. Specific mechanisms for supporting MSMEs in implementing responsible AI without the resources of large enterprises. The precise balance between regulatory mandates and industry‑led self‑governance, especially regarding “light‑touch” versus stricter rules. Details of human‑in‑the‑loop processes for AI systems in domains like aviation and payments were mentioned but not fully defined. Standardisation of industry‑specific templates that satisfy diverse sector requirements while maintaining core responsible‑AI principles.
Suggested compromises
Adopt a regulatory approach that acts as a catalyst—mandatory baseline safeguards combined with flexibility for innovation (light‑touch regulation). Combine industry‑led standards (e.g., C2PA) with regulatory requirements to avoid fragmented compliance and ensure interoperability. Balance safety guardrails with user convenience by calibrating “safety knobs” and providing transparent fallback options (human escalation). Leverage large enterprises to create reusable compliance frameworks that can be shared with MSMEs through industry consortia.
Thought Provoking Comments
2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for innovation.
Sets a concrete near‑term horizon, turning responsible AI from a vague aspiration into an imminent business imperative and innovation driver.
Created urgency that framed the rest of the discussion; panelists referenced the 2026 timeline when talking about upcoming regulations (EU AI Act, US California law, Indian IT rules) and the need to move from principles to practice.
Speaker: Andy Parsons
Can your systems actually prove that you have been responsible with AI, and how do you go about doing that?
Shifts the conversation from abstract ethics to measurable, auditable evidence of responsibility, introducing the notion of ‘provable practice’.
Prompted multiple speakers to discuss provenance, standards, and concrete mechanisms (C2PA credentials, product‑level metadata) that can demonstrate compliance, steering the dialogue toward technical solutions.
Speaker: Andy Parsons
We decided five years ago that responsible AI via content transparency wasn’t a feature that could be grafted onto our products… it had to be baked into the tools at their very core.
Highlights a strategic product‑development choice—embedding responsibility at the architecture level rather than as an after‑thought—offering a model for other enterprises.
Set the stage for the later discussion of the C2PA standard and inspired other panelists (e.g., Prativa) to cite how their own products embed provenance, reinforcing the theme of deep integration.
Speaker: Andy Parsons
The C2PA content credentials provide transparent context about a piece of media… an open, cross‑industry standard that anyone can adopt for free.
Introduces a tangible, industry‑wide solution that addresses the earlier call for provable responsibility and emphasizes openness and interoperability.
Led to references about adoption challenges (metadata stripping, consumer awareness) and reinforced the argument that standards—not just principles—are essential for scalable trust.
Speaker: Andy Parsons
One size doesn’t fit all. It’s a ‘bring‑your‑own‑AI’ scenario in every function.
Challenges the notion of a single, monolithic governance framework, emphasizing the need for flexible, context‑specific approaches across diverse business units.
Shifted the tone from a uniform solution to a discussion about modularity and the importance of tailoring guardrails, prompting other speakers to talk about industry‑specific templates and the role of ecosystem bodies.
Speaker: Amol Deshpande
Firefly embeds a ‘nutrition‑label’ style provenance; every output carries that nutrition level, guaranteeing compliance and accountability.
Provides a concrete product example that translates abstract principles (accountability, transparency) into a user‑facing feature, making the concept tangible.
Deepened the practical dimension of the conversation, leading to further examples (Acrobat Assistant) and reinforcing the idea that responsibility can be built into the user experience.
Speaker: Prativa Mohapatra
We use generative AI to watch the performance of our generative AI virtual assistant, balancing the safety knob with customer convenience.
Introduces the meta‑use of AI for self‑monitoring, illustrating a sophisticated, real‑world guardrail mechanism that addresses both safety and user experience.
Added a layer of technical complexity, prompting the panel to consider AI‑in‑AI oversight as part of responsible AI strategies and influencing the later discussion on regulation as a catalyst rather than a constraint.
Speaker: Dr. Satya Ramaswamy
We built a small language model that can explain to a customer why a transaction was declined, giving transparent, real‑time reasons for fraud‑related decisions.
Shows a concrete, consumer‑centric implementation of transparency in a high‑stakes domain (payments), extending the provenance concept beyond media to financial services.
Expanded the conversation to the payments ecosystem, illustrating how the same principles can be operationalized across sectors and reinforcing the need for explainability in AI decisions.
Speaker: Vishal Anand Kanwati
Regulation is a catalyst for good practices; compliance does not constrain Indian innovation.
Reframes regulation from a restrictive force to an enabling one, addressing a common fear among enterprises and aligning with India’s rapid digital growth.
Shifted the tone of the regulatory debate, influencing later remarks (Vishal, Amol) that while regulation is inevitable, it can coexist with innovation and industry‑led standards.
Speaker: Dr. Satya Ramaswamy
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the dialogue from abstract ethics to concrete, actionable frameworks. Andy Parsons’ framing of a 2026 deadline and the demand for provable responsibility set a sense of urgency and introduced the need for measurable standards, which anchored the rest of the conversation. Subsequent comments—especially the introduction of the C2PA open standard, Amol’s ‘bring‑your‑own‑AI’ flexibility, Prativa’s product‑level provenance labels, Satya’s meta‑AI monitoring, and Vishal’s transaction‑explanation model—provided tangible examples that illustrated how principles can be embedded across industries. These insights prompted participants to explore implementation challenges, the role of regulation as an enabler, and the importance of industry collaboration. Collectively, the highlighted comments shaped the session into a forward‑looking, solution‑oriented exchange, emphasizing that responsible AI is not merely a compliance checkbox but a strategic, technically grounded capability that can be scaled across India’s diverse enterprise landscape.

Follow-up Questions
How can organizations demonstrably prove that they are using AI responsibly, and what metrics or evidence are needed to show compliance?
Establishing verifiable proof of responsible AI is essential for building trust, meeting regulatory requirements, and differentiating compliant enterprises.
Speaker: Andy Parsons
What are the implementation costs and ongoing operational expenses associated with deploying responsible AI practices?
Understanding financial implications helps businesses budget, justify investments, and assess ROI for responsible AI initiatives.
Speaker: Andy Parsons
How can consumer awareness of content provenance symbols be increased, and what UI/UX designs are most effective for displaying these indicators?
Widespread user recognition of provenance marks is critical for the success of transparency standards and for empowering end‑users to make informed choices.
Speaker: Andy Parsons
What strategies can address the uneven adoption of provenance standards, especially when platforms (e.g., social media) strip metadata?
Ensuring consistent preservation of provenance data across all distribution channels is necessary to maintain the integrity of the transparency ecosystem.
Speaker: Andy Parsons
How can a viable business case be built for provenance and transparency technologies when they do not directly generate revenue?
Demonstrating economic value or indirect benefits (e.g., risk reduction, brand trust) is needed to encourage enterprise investment in responsible AI tools.
Speaker: Andy Parsons
What best‑practice frameworks enable a “bring‑your‑own‑AI” approach that remains scalable, safe, and governed by effective guardrails across diverse functions?
Enterprises need reusable, adaptable models for integrating third‑party AI while maintaining compliance and risk controls.
Speaker: Amol Deshpande
What training and skill‑development programs are most effective for upskilling the entire value chain on responsible AI principles?
People are a critical stakeholder; systematic education ensures consistent application of responsible AI across an organization.
Speaker: Amol Deshpande
How can explainable AI be integrated into payment‑transaction systems so that users receive clear, understandable reasons for declines or fraud flags?
Transparency in financial decisions builds trust and reduces customer friction, especially in high‑volume digital payment ecosystems.
Speaker: Vishal Anand Kanwati
What methods can balance false‑positive fraud detection with fairness, ensuring legitimate transactions are not unduly blocked while still catching fraud?
Optimizing detection accuracy is vital for user experience, financial inclusion, and regulatory compliance in payment systems.
Speaker: Vishal Anand Kanwati
How can responsible‑AI frameworks and tooling be made affordable and accessible for MSMEs that lack large legal or compliance teams?
Ensuring smaller businesses can adopt responsible AI prevents a widening gap between large enterprises and the broader market.
Speaker: Prativa Mohapatra
What mechanisms can industry bodies use to disseminate responsible‑AI standards and templates effectively across varied sectors and company sizes?
Coordinated industry‑wide adoption accelerates standardization and reduces duplication of effort.
Speaker: Amol Deshpande
How can global AI regulatory frameworks (EU AI Act, UNESCO, OECD) be harmonized with India’s emerging policies to create a coherent compliance landscape?
Alignment reduces regulatory friction for multinational operations and ensures Indian innovations remain globally competitive.
Speaker: Dr. Satya Ramaswamy
What metrics and evaluation methods should be used to assess the effectiveness of AI governance frameworks and provenance standards?
Quantitative assessment is needed to track progress, demonstrate impact, and guide continuous improvement.
Speaker: General (multiple participants implied)
What is the optimal balance between industry‑led self‑regulation and formal regulatory intervention for AI, especially in high‑scale ecosystems like payments?
Clarifying the roles of self‑governance versus law helps shape policy that protects users while fostering innovation.
Speaker: Vishal Anand Kanwati
How can a light‑touch regulatory approach be designed that still ensures safety and fairness without stifling AI innovation?
Finding the right regulatory intensity is crucial for encouraging rapid AI adoption while safeguarding public interest.
Speaker: Sarika Guliani
What design principles ensure effective human‑in‑the‑loop mechanisms for AI systems operating at airline‑scale, balancing safety with customer convenience?
Human oversight remains essential in safety‑critical domains; research is needed on scalable, real‑time intervention models.
Speaker: Dr. Satya Ramaswamy
How does the presence of provenance symbols affect user trust and behavior across different cultural and linguistic contexts in India?
India’s diverse user base may respond differently; studying impact informs culturally appropriate rollout strategies.
Speaker: Andy Parsons (implied)
What are the technical challenges and solutions for ensuring interoperability of provenance standards across hardware manufacturers (cameras, smartphones) and software platforms?
Cross‑industry compatibility is key for a universal trust layer; research can identify standards gaps and integration pathways.
Speaker: Andy Parsons

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Responsible AI in India Leadership Ethics & Global Impact part1_2

Responsible AI in India Leadership Ethics & Global Impact part1_2

Session at a glanceSummary, keypoints, and speakers overview

Summary

The session opened with the moderator emphasizing that responsible AI-grounded in trust, transparency and accountability-is now a foundational requirement for Indian enterprises [1-6]. Andy Parsons of Adobe framed the discussion as a shift from abstract AI principles to “provable practice,” noting that 2026 will see responsible AI become both a regulatory duty and a business opportunity [33-34][20-21]. He described Adobe’s leadership in the Coalition for Content Provenance and Authenticity (C2PA), an open, cross-industry standard that embeds provenance metadata directly into media files, enabling anyone to verify how content was created [54-62]. The C2PA’s core principles-transparency, provenance, accountability and inclusivity-are presented as “nutrition labels” for digital content, allowing users to trace models, tools and data behind each asset [74-80][81-84]. Andy also warned of uneven adoption, metadata stripping by platforms, low consumer awareness and the difficulty of building a profitable business case for provenance, arguing that standards, not merely principles, are needed to move forward [90-99][108-110].


In the panel, Amol Deshpande highlighted that responsible AI must be orchestrated across all five AI layers, involve people, processes and technology, and cannot rely on a single “one-size-fits-all” solution, coining a “bring-your-own-AI” approach [162-166][177-180]. Prativa Mohapatra explained Adobe’s internal “ART” framework (accountability, responsibility, transparency) and gave concrete examples such as Firefly, which tags generated outputs with “nutrition” metadata, and Acrobat Assistant, which ensures traceable, lawful document creation [197-199][209-214][224-228]. She stressed that legal and compliance teams must redesign their workflows to embed AI governance throughout the input-output lifecycle, otherwise enterprises risk falling short of future regulatory expectations [235-238].


Satya Ramaswamy described Air India’s generative-AI virtual assistant that has handled 13.5 million queries with a 97 % autonomous success rate, while continuous safety monitoring and customer feedback loops prevent jailbreaks and inappropriate responses [257-263][264-268]. He noted that partnerships with firms like Adobe provide “prompt firewalls” and indemnities that boost confidence in managing AI risk at airline scale [269-271]. Vishal Anand Kanvaty of NPCI emphasized transparency for declined transactions, using a language model to explain reasons to users, and argued that regulatory safeguards are essential to prevent false-positive fraud decisions and maintain trust in the payments ecosystem [293-298][370-376].


Across the discussion, participants agreed that industry-led standards, cross-sector collaboration and regulatory frameworks are all necessary to translate responsible-AI principles into operational practice, especially for MSMEs that lack internal resources [332-340][379-383]. Sarika Guliani of FICCI reiterated that responsible AI is a commitment to shared human values and that the “people, planet, progress” agenda must guide future innovation, with FICCI pledging to advance the dialogue into concrete action [379-383][389-390]. Overall, the dialogue underscored that moving from principle to practice requires open standards, robust governance, and coordinated regulation to ensure trustworthy AI deployment across India’s diverse enterprise landscape [108-110].


Keypoints


Major discussion points


From principles to provable practice – The panel framed responsible AI as moving beyond abstract ethics to demonstrable compliance, driven by new regulations such as the EU AI Act, California law and India’s IT rules, and positioning it as both a leadership imperative and a regulatory requirement [30-33][105-110][108-113].


Open, cross-industry standards for transparency – Adobe highlighted the C2PA (Coalition for Content Provenance and Authenticity) as an open, free standard that embeds provenance metadata directly into media assets; this model is being baked into Adobe products (e.g., Firefly, Acrobat) to give enterprises verifiable “nutrition labels” for AI-generated content [54-66][61-70][209-219].


Implementation challenges and governance needs – Speakers noted uneven adoption, metadata stripping by platforms, low consumer awareness, and the difficulty of building a business case for provenance. They stressed the necessity of robust governance, guardrails, and a shift from “check-list compliance” to operational frameworks [90-99][105-110][158-166].


Sector-specific responsible-AI deployments – Real-world examples were shared: Air India’s generative-AI virtual assistant that balances safety knobs, continuous monitoring, and human-in-the-loop escalation [257-270]; NPCI’s transparent fraud-prevention model that explains transaction declines and leverages AI while insisting on regulatory safeguards [286-301][370-376]; and RPG’s “bring-your-own-AI” approach that stresses orchestration across data, people, process and technology layers [162-180][185-190].


Overall purpose / goal


The session aimed to translate high-level responsible-AI principles into concrete, enterprise-ready practices for Indian corporations. By showcasing standards, regulatory trends, and concrete industry pilots, the discussion sought to equip leaders with actionable frameworks and to foster a collaborative ecosystem that can scale responsible AI across sectors.


Overall tone


The conversation began with an optimistic, forward-looking tone, emphasizing opportunity and collaboration. As speakers moved into challenges-such as uneven adoption, regulatory pressure, and implementation costs-the tone became more cautionary yet remained constructive, focusing on solutions and shared responsibility. The closing remarks returned to a hopeful, commitment-driven tone, urging continued dialogue and collective action.


Speakers

Vishal Anand Kanvaty


– Role/Title: Chief Technology Officer, National Payments Corporation of India (NPCI)


– Area of Expertise: Digital payments, AI-driven fraud detection and responsible AI governance [S1]


Sarika Guliani


– Role/Title: Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI


– Area of Expertise: AI policy, industry standards, responsible AI implementation [S3]


Dr. Satya Ramaswamy


– Role/Title: Chief Digital and Technology Officer, Air India Limited


– Area of Expertise: Aviation technology, AI-enabled customer service, safety-critical AI systems [S5]


Shantheri Mallaya


– Role/Title: Editor, Economic Times (Panel Moderator)


– Area of Expertise: Journalism, technology policy, AI ethics and industry discourse [S8]


Prativa Mohapatra


– Role/Title: Vice President and Managing Director, Adobe India


– Area of Expertise: Product governance, responsible AI, content authenticity and AI-driven creative tools [S11]


Andy Parsons


– Role/Title: Global Head for Content Authenticity, Adobe (runs the Content Authenticity Initiative)


– Area of Expertise: Content provenance, AI transparency, standards development (C2PA) [S13]


Amol Deshpande


– Role/Title: Group Chief Digital Officer and Head of Innovation, RPG Group


– Area of Expertise: Digital transformation, enterprise AI strategy, responsible AI implementation [S15]


Moderator


– Role/Title: Session Moderator (unnamed)


– Area of Expertise: Event facilitation, AI discussion moderation [S19]


Additional speakers:


Nita – mentioned in closing remarks; no role or expertise specified in the transcript.


Nanya – mentioned in closing remarks; no role or expertise specified in the transcript.


Full session reportComprehensive analysis and detailed insights

The session, presented by Adobe in association with FICCI, opened with moderator Shantari Mallaya (Economic Times) welcoming participants to “Responsible AI from Principles to Practice in Corporate India.” She framed trust, transparency and accountability as “foundational, not optional” for India’s accelerating digital transformation [5-6].


Andy Parsons, Global Head for Content Authenticity at Adobe, set the tone by declaring 2026 the year responsible AI becomes both a regulatory duty and a strategic opportunity. He highlighted that the EU AI Act’s enforcement provisions take effect in August, that California’s first AI law is already in force, and that India’s new IT rules on SGI are being implemented, shifting the business question from “should we be responsible?” to “can you prove you are responsible?” [24-33]. Parsons introduced Adobe’s leadership in the Content Provenance and Authenticity (C2PA) credentials, an open, free, cross-industry standard that embeds provenance metadata directly into media files, enabling anyone to verify a piece of content’s origin, model and tools [55-62]. He described this “nutrition-label” approach as essential for India’s massive digital population, where synthetic content and AI-generated misinformation pose real operational risks. He also warned of challenges: social-media platforms often strip metadata [89-92], consumer awareness of provenance symbols remains low [95-99], and building a profitable business case for provenance remains challenging [108-110]. Consequently, he argued for standards-based infrastructure rather than mere principles, and likened regulation to a catalyst that pushes good practice without being punitive [105-108].


After the opening, Mallaya positioned the panel as a deep dive into translating responsible-AI principles-fairness, accountability, transparency, privacy and inclusivity-into concrete enterprise strategies [144-150].


Amol Deshpande, Chief Digital & Innovation Officer, RPG Group, responded that responsibility must be orchestrated across the five AI layers (data, model, inference, deployment, monitoring) and cannot rely on a single solution. He advocated a “bring-your-own-AI” approach, where each function selects appropriate guardrails while the organisation supplies a scalable, safe environment and governance templates adaptable to diverse business units [162-166][177-184]. He emphasized people as the critical stakeholder, calling for extensive up-skilling to embed human judgement into increasingly complex generative and agentic AI systems [169-176].


Prativa Mohapatra, Vice-President & Managing Director, Adobe India, outlined Adobe’s internal ART (Accountability, Responsibility, Transparency) philosophy and how it is baked into product development pipelines through hundreds of validation steps. Across Adobe’s portfolio-Firefly and the Acrobat Assistant-every AI-generated output carries a content-credential tag that confirms licensing, data compliance and model traceability, thereby shielding enterprises from legal liability and requiring legal and compliance teams to redesign workflows to embed AI governance throughout the input-output lifecycle [209-218][224-232][235-238].


Satya Ramaswamy, Chief Digital and Technology Officer, Air India, illustrated a sector-specific deployment: a generative-AI virtual assistant launched in May 2023 that has handled 13.5 million customer queries with a 97 % autonomous success rate. The system balances a “safety knob” that prevents jailbreaks and inappropriate responses with a seamless user experience, using generative AI both to serve customers and to monitor its own performance. He likened the design to an autopilot/red-button safety-critical analogy, emphasizing human-in-the-loop oversight and “prompt firewalls” provided through Adobe partnerships that bolster risk management without stifling innovation [257-274][332-336].


Vishal Anand Kanwati, CTO, National Payments Corporation of India (NPCI), described AI-driven fraud detection that maintains fairness. NPCI began with a low false-positive threshold and, through data-driven model refinement and industry collaboration, achieved higher accuracy. A small language model now explains to users why a transaction was declined, delivering transparency that builds trust in the payments ecosystem. He stressed that regulatory safeguards are indispensable to prevent AI from “going berserk” and referenced the RBI’s responsible-AI framework as a guiding standard [286-293][298-302][370-376].


Points of Agreement

* All speakers endorsed the need for transparent provenance of AI-generated content – via C2PA credentials (Andy) [55-62], Adobe’s ART-driven content-credential tags (Prativa) [209-218], and NPCI’s transaction-explanation model (Vishal) [286-293].


* They concurred that open, standards-based infrastructure and reusable frameworks are essential for scaling responsible AI, with industry bodies such as FICCI, C2PA and RBI playing pivotal dissemination roles [66-70][297-304][332-340][344-347].


* Regulation was uniformly seen as a catalyst that must coexist with innovation (Andy) [105-108].


* Both Satya and Amol highlighted the critical importance of human-in-the-loop oversight and adjustable guardrails for safety-critical applications [180-182][360-362].


Points of Disagreement

1. Regulation intensity – Vishal argued that mandatory safeguards are essential to prevent harmful AI behaviour [370-376]; Sarika Guliani cautioned that regulation should be balanced and proportionate [379-382]; Andy positioned regulation as a catalyst that encourages good practice without being punitive [105-108].


2. Scope of standards – Andy promoted a single, open C2PA standard as the foundation for provenance [55-62]; Amol counter-argued that “one size does not fit all”, advocating sector-specific templates and a “bring-your-own-AI” model [168-180]; Prativa warned that without free, universally accessible frameworks the divide between large enterprises and MSMEs would widen [297-304].


3. Primary driver of adoption – Amol emphasized an awareness → action → demonstration pathway, with industry bodies disseminating frameworks [332-340]; Vishal insisted that regulation is indispensable for ecosystem safety [370-376]; Sarika stressed that responsible AI is a commitment to shared human values, not merely a compliance checkbox, and should be guided by the “people, planet, progress” agenda [383-389].


Key Take-aways

– Responsible AI must move from high-level principles to provable, operational practice.


Transparent provenance, enabled by open standards such as C2PA, is a cornerstone for trust.


– Effective governance requires coordinated people, process, technology and industry-body layers, not a simple checklist.


– Emerging regulations (EU AI Act, India’s IT rules, state-level AI laws) act as catalysts that should coexist with innovation.


Sector-specific pilots-Air India’s AI assistant, NPCI’s fraud-explanation service, RPG’s flexible governance, Adobe’s ART-driven products-demonstrate practical pathways.


– Without open, free frameworks, responsible AI risks becoming a luxury for large firms, leaving MSMEs behind.


Closing Remarks

Sarika Guliani (FICCI) concluded that responsible AI is a commitment to shared human values rather than a mere compliance checkbox, and that the “people, planet, progress” agenda must guide all technological innovation. FICCI pledged to continue the dialogue and translate the insights into concrete actions for the Indian ecosystem [383-389][389-390].


The moderator thanked the panelists and the audience, signalling that the conversation will move from discussion to implementation.


Session transcriptComplete transcript of the session
Moderator

I’d like to welcome you all to this session titled Responsible AI from Principles to Practice in Corporate India presented by Adobe in association with FICCI. India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and productivity. But the real differentiator, is it about how quickly do we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite Andy Parsons, our Global Head for Content Authenticity at Adobe.

Andy, over to you.

Andy Parsons

Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.

I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity. I’m going to talk about that in a minute. I’m going to talk about that in a minute. I’m going to talk about that in a minute. of you in this room. This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.

And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week and certainly will continue to change this year, from should we be responsible with AI? I think that debate is well settled at this point. But can your systems actually prove that you have been responsible with AI, and how do you go about doing that? And for all of us, what does it cost in terms of implementation and day -to -day usage? So we want to position responsibility, responsible AI today as a leadership and operating must and discipline rather than a regulatory obligation, although in 2026, I think it will be both.

So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across a cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Pratibha in a moment.

But it’s made the trust problem absolutely impossible to address. So I think every enterprise in this room now produces or consumes AI. generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that. Hundreds of millions of people consuming digital content every day.

In this environment, synthetic content and AI -generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to provide some leadership. I’m very privileged to have leadership in the C2PA, which hopefully many of you have. have heard about this week, which is the Coalition for Content Provenance and Authenticity. And our sort of piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing totally free.

So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, digital experience products, but had to be baked into the tools kind of at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others. And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have the C2PA content credentials.

I’ve encountered a content credential, and it will provide transparent context about a piece of media, whether it’s a video, audio, or video. or image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross -industry coalition that includes all the companies I mentioned, meta camera manufacturers like Sony, Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others. And the goal is for an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company.

It should be standards -based. It should not be proprietary, but available to everyone. And this shared philosophy I think is especially important here in India. And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website. So what are the principles that the Content Authenticity Initiative and the CTPA reflect? Transparency. Provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used. Simple ideas like knowing that a photograph is actually a photograph and not generated.

These are simple ideas. I’d say we’re well overdue having this ability, but we need it now more than ever. Accountability. You can trace those AI models, understand what was used, how it was used, and thereby understand the prominence if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children. No one’s going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.

Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India. We can apply the same kind of provenance at the same zero cost. as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI transparency and responsible AI will present you, and I’m sure you’ll hear more about that with our esteemed panel. But adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here. We’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it.

Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are telephones, there are phones, cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I have often said that doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money. I’m not sure if that’s true. I’m not sure if that’s true. I’m not sure if that’s true. But it is critically important. And now we’re seeing that change as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.

What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. Responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference that we’re here to talk about today, the difference between principle and practice. I think you need cross -industry infrastructure. Techniques you use for responsible AI should be interoperable, open, and standardized. And I think you have a long track record here in India of mobilizing things like UPI payment infrastructure that not a single bank could do or a single government agency, but required massive scale cooperation for openness, standards, and most importantly, interoperability.

And last, enterprises. Like many of yours in this room, that are willing and excited to go first that really look at transparency and AI responsibility as an opportunity, not a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again. And in 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things. And that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries, because we have representatives from all of them on our excellent panel today.

So we’re fortunate to have leaders from Air India and PCI, RPCI, and the U .S. Department of Defense, and Adobe. each of whom is navigating and translating the sort of remarks that I’ve made in different ways for different outcomes and providing, I would say, exemplary ways to find their pathway through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantari Malaya, editor at the Economic Times, who covers these infrastructure and societal sort of breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me. And let’s get on to our panel. Shantari.

Shantheri Mallaya

Thank you so much, Andy. That was fantastic. It set the context very rightly for the next discussion coming up. So a warm welcome once again from my end. My name is Shantari Malaya. I’m editor at Economic Times. Welcoming you all to the panel. Right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI. very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI, industry, infrastructure, policy perspectives, it really becomes important to know what some premium leaders in the country are thinking about this. So this dialogue will examine some parts of responsible AI and how it’s really going to shape, reshape enterprise strategy.

So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanwati, Chief Technology Officer, National Payments Corporation of India, NPCI. We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe at India. This is a fantastic lineup and we shall get some very sharp insights from our panelists over the next half hour or so. So at the very outset, if you really see building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether it’s fairness, accountability, transparency, privacy and inclusivity, is really going to actually realistically be translated into enterprise strategy frameworks and how we are going to go about it.

Right. So, Amol, let me call you into the discussion. Warm welcome. I’m keeping a tight watch on the timer right behind us. As we know, this is a summit of scale and we really need to help the organizers to clock good time. So, Amol, very quickly, as I invite you into this discussion, you represent enterprises at, you know, while you are participating. As part of the RPG group, you also represent enterprises that are deploying AI at scale. and many more that are just finding their footing as well. So there’s a huge spectrum and diversity in the kind of organizations that you are representing. Two things for you very quickly. One is in large multiple, you know, businesses such as the RPG group, how are you really preventing responsible AI from really becoming something like a just mere centralized compliance exercise or something that’s on the other flip side becoming a fragmented business unit wise, you know, checklist.

So there are two risks that can happen, right? In a group, in a conglomerate, large scale or decentralized. So how are you really looking at the balance here? And how do you really see your role in an industry body as well? So all yours.

Amol Deshpande

Thank you, Shantiri. I’m very happy to be here. Thank you for having me with the, you know, esteemed panelists here. You ask a very pertinent question. You know, it’s to be or not to be, kind of a scenario when it comes to AI, but that not to be is not really a choice. and it did mention about the responsible AI and I would take a little stab at peeling and looking at where the responsible AI comes from when it comes to industries. It comes across all those five layers of the AI when we are looking at it. When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using for, the responsibility needs to be there at every layer.

It’s not one or the other. It has to be an orchestration of all the things. So far, AI in its very nascent forms had been a thing of center of excellences, trying use cases and seeing what it is happening, but now it has come to a scale. And when it comes to enterprises and manufacturing enterprises, which we have significantly higher share in terms of as a consumer of AI technologies, there is a very clear -cut view on how it is to be done. So you need to provide the playground for the enterprise, to operate function with agility. The other part is about people. The people is a very, very important stakeholder in the whole thing.

We are moving from generative AI to AI ML, more complex scenarios and agentic AI. So people is a very, very important aspect of it. The choice still remains with human sense. The awareness is important and enterprises like us spend a significant amount of time and effort in building that skill sets amongst the value chain of all the people who are doing it. Last but not least is the process and governance part which comes with it. It’s more of a guiding principles which need to be given so that it gives an opportunity. It’s more about, you know, if one can say it will be more of a bring your own AI kind of a scenario in every function.

You cannot provide one solution. One size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment with protected with guardrails is a key thing for us. Orchestration and getting a scale. That’s something which is there. And that. But those templates are being exercised, practiced within the enterprises, practice it in a very diverse group like RPC at ourselves. And then that can be deployed at multiple

Shantheri Mallaya

Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me quickly bring in Prativa here. Welcome, Prativa. So Adobe is consistently really positioned responsible as a product philosophy and a trust commitment. You may just have to switch that on. So Adobe is constantly spoken about responsible AI as a very fairly large commitment. So how do you really look at all these principles manifesting or panning out in terms of operationalizing it among all your product teams? And sending it out. as a strong positioning internally? And at the same time, as someone who’s led a lot of industry conversations, where do you really see enterprises struggling to get these things right?

So all yours here.

Prativa Mohapatra

Okay, thank you. So I think Andy said the context. And since we are here not to learn about the principles, but the practices, I think everybody should go back with certain practices. So the first practice of AI governance, which we practice, is art. Which is accountability, responsibility, and transparency. So if every person goes back to their organizations and talks about art, which is our philosophy, that’s practicing philosophy number one. And we actually have been doing this for our own products for a while now. And of course, we are in the business of content for a very, very long time. And now the same content is becoming the currency which everybody’s debating. So our principles have been there for a while.

But. How it is actualized? and let me say how it’s translated into our products. And by the way, it’s in our products, it’s in our methodologies. Every new product that we have goes through a very strong, secure methodology with hundreds of steps inside it. So there’s principles embedded into how we create stuff. But a couple of examples. Firefly, which is our Gen AI tool, actually embeds what Andy said, those content traditions. Nutritions. It is anything that you have being generated out of this product will have that nutrition level. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law, you will not be getting into any liability issues.

Because how you do it is by feeding. Because AI is all about the input and output. So the input has to be something which will not land you in the trouble. You cannot take somebody else’s data. So here it is everything licensed. So it goes into the models, and what you create, the output which comes, then you have to test that output. With that output, will we be accountable? Will we be responsible in showing the transparency of how this was created? So I think that loop has to be created in using any AI. Firefly is an example. Let me talk about Acrobat, which everybody has. I’m sure 100 % of you have PDF files on your phones or on your machines.

So Acrobat has this new feature called Acrobat Assistant. It is agentic, but we have so many chatbots in the market. But when you come to an assistant like Acrobat Assistant, it is following the same principles that PDF was used to be created. So everybody is confident when you’re using PDF. So today, you would have read in the papers recently, the Supreme Court was very worried. that there were certain lawyers who had the petitions which had reference to cases which do not exist or it had certain laws stated which are fictitious. So imagine somebody’s created certain content using some sources which were not authentic. Now, if you use acrobat kind of products for that, you feed the data or you feed files from your own machine.

So you’re confident that what comes out of it, you can go back. So wherever there is this usage of high -stake output, enterprise -grade, you have to look at this input -output process and follow the philosophies within it. And I think for every enterprise who is doing that today, they really have to already, Amal talked about the people process technology. I’m sure every organization today has a legal team, has a compliance team. But these teams have to re -opt and re -design and re -design to talk about AI compliance. Enterprises… they do business strategy, they have ethical strategies and then they have regulatory compliance. All the three anything that you do in AI, ensure that you tick all the three.

If you miss any one, you might not be ready for the future. So that’s how I see it.

Shantheri Mallaya

Absolutely. So I guess the thread of most of what AI now entails is about are we not moving fast enough or are we moving so fast that we’re not really able to own the operational consequences of what we’re trying to do as well, setting out to do as well. So great point on that Pratibha. I’ll circle back to you, time permitting. Let’s see how best we can get back. Dr. Satya, calling you in here. So aviation volumes, landscape, scale, I mean you name it, it’s all there. So how are we really looking at balancing AI driven innovation really? Where you’re looking at regulation, you’re looking at accountability, you’re looking at operational efficiency. At the same time, you cannot really compromise on user and customer experience.

How do these things really fall in place in terms of vision and metrics?

Dr. Satya Ramaswamy

Thank you, Shantiri. Since the audience is international, a real quick introduction about Air India. Air India is India’s national flag carrier. We operate about 300 aircraft, and we carry about more than 100 ,000 customers a day. And we have a few hundred airplanes on order. So once they are delivered, we will be one of the biggest airlines in the world on the size of one of the large three American carriers. So we are building it up to an airline of scale, and it brings about very interesting challenges that we talked about. So let me illustrate the way we handle it with one of our own examples in generative AI. So in May of 2023, we launched the global airline industry’s very first generative AI virtual assistant out of India.

So it was a global first in the whole airline industry. Today, it has handled about 13 .5 million queries so far from customers. about 40 ,000 queries a day and it operates at a cost which is 100 per query of a contact center and if you look at the customer preferences over a period of last two and a half years we have been operating this facing all the challenges you mentioned so from a customer preference perspective 50 % of the contact volume goes to the contact center they want to talk to human agent the remaining 50 % of the contact volume comes to A .G out of which it handles 97 % of the queries autonomously only 3 % are escalated further to the agent so pretty high success rate and we faced this challenge from day one we started working on it in November 2022 when the whole Azure OpenA services are available we started working on it and the whole approach to responsibility safety has evolved over a period of time so if you dial the safety knob too much then it is an inconvenience to the customer we practically cannot answer any question because the customers are always changing the way they ask certain thing and we have to be very flexible and clearly Generative AI takes us a large step towards that.

At the same time, we don’t want any jailbreak to happen. We don’t want problem injection to happen. We don’t want any inappropriate thing to happen. So we are watching the whole performance of the Gen A virtual assistant, A .G as we call it, all the time. So we use, in fact, Generative AI to watch the performance of the Generative AI chatbot and also we have given the voice to the customer, right? So at the end of the day, when we send a response, we also ask the customer, you know, did it answer your question? And also allow them to give their reactions, right? Is it appropriate, inappropriate? And thankfully, over the last 20 years, it has not answered one single question in an inappropriate fashion because we have embedded all the safety procedures all deep into the way that we handle it.

But now we are, as the technologies are maturing, for example, we now have… interesting technologies in terms of the prompt firewalls where we can centralize all these controls and obviously work with great partners like Adobe who do their diligence and the way that they have deployed some of these technologies, giving full indemnity to us in the event of a problem. That gives a lot of confidence in the way that we manage the risk. So it’s about managing the risk of something that is not within the bounds happening versus the convenience of the customer and we handle it in a variety of ways like I talked about just now.

Shantheri Mallaya

Excellent. And given the kind of scale you’re operating at, I think every day is a new day. Yes, it is. We face challenges. There is new, brand new every day. Absolutely well stated. Thank you, Dr. Satya. Vishal, may I kind of call you into the discussion? We’re waiting to hear about you. Again, what do I say? NPCI, you know, largest, you know, digital payments, infrastructure platforms. you kind of call the shots in terms of taking some of those calls, you know, for want of a better coinage, you know, in terms of how the payments systems in this country move. So two quick questions here, or rather I’ll sort of, you know, phrase it into one so that, you know, we can get a comprehensive view from you.

How are you really looking at AI in terms of really, you know, being inclusive, you know, in trying to ensure fairness when it comes to two parts? One is when you’re looking at how India can play an important part in creating a responsible AI by design for a digital, national digital infrastructure platform such as yours. And B, how are we really looking at, given the volume, scale, size, fraud also becomes an unfortunate part of the entire, you know, discussion. So how are we really looking at AI to be, fair at the same time proactive and detective when it comes to looking at fraud? So how, what are the aspects that you look at? keenly here

Vishal Anand Kanvaty

I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high. I think that was the first principles on which we started. But over a period of time, once we had more data, once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So I think this was the fundamental principles when, you know, I mean, and then once we started with the success, we were able to understand the customers better, you know, their patterns better. And that gave us a lot of insights into, you know, fine tuning the models and taking it forward.

Absolutely. So coming to the first question, you know, that you asked, I think, you know, I think there are obviously the governance principles are core to it. But I think I would like to call out two of the things that is there. One is a transparency. If a customer has a transaction that is failed, I think you should know why it has been failed. and today we build a small language model where you can go and actually chat and say what happened to this transaction why is it declined and you know even if it is declined due to a fraudulent transactions uh you know due to a suspicious activity we can actually tell him today saying that you know this is where we feel you know you normally do you know don’t send this transaction or you don’t scan a qr ever this is the first time you’re doing so this is the reason why we have sort of declined so this level of transparency and ensuring there’s a person you know obviously we can’t you know have the army of people sitting and answering these questions but building systems and answering those questions is very very important and i think we have a beautiful framework rba has also given the framework from a responsible ai meaty document is fairly comprehensive so i think all the principles have to be adopted there is absolutely no choice for us um and and i think i i don’t see it as a challenge at all because in our experience it’s been very very helpful to obviously ensure the trust in the payment system is not compromised

Shantheri Mallaya

absolutely and also the fact that you know as you said given the scale that you’re operating in I’m also itching to ask you some things but maybe I’ll pick your brains offline in terms of the human in the loop there but yeah that’s a discussion for another time so Pratibha curious to know here so responsible AI while you know in letter and spirit remains there do you think it kind of remains or rather getting relegated to the risk of becoming something of an enterprise large enterprise luxury is it able to cut across and come down the line to you know aspiring businesses growing businesses MSMEs is it able to cut through the noise what’s the responsibility of industry leaders and the larger enterprises in kind of harmonizing the framework that can define responsible AI How would you look at this?

Is it getting risked?

Prativa Mohapatra

Absolutely, yes. I think we stand in that time when on the creators of the AI technology, the big guys versus the small guys, that divide might just become very stark. Coming down to the users of the AI, which is the big enterprises and the MSMEs who are in a big rush to make profit, do something, so that divide can happen. Hence, it is responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It’s a tongue twister, but I think the responsibility is very, very big right now. Again, that Adobe’s example, while the entire AI big bang started happening after November 2022, so early 2023 -2024, but I think our models were there, or our content authentication initiative was from 2019.

So, I think that’s a big thing. I think that large enterprises who create technologies absolutely are responsible. And those frameworks now being taken up by many more is again an absolutely act of responsibility to back to the business. And so the creators of these technologies have to come together and keep on creating this method and methodology for others to adopt. Now the users of these enterprise AI -grade technologies. It’s very hard. I mean, I think 10 years back we had digital transformation. Now we are having AI transformation. So the big companies have to quickly create a new org structure, have to create the legal teams, which, by the way, had to just mull over the digital guidelines of various continents, countries, now have to go through the AI guidelines of countries.

So you have to infuse more people into that. Legal teams. So small organizations cannot do that. So the people, process, technology changes that is required to adopt this, big guys can maneuver, shift people, oh, we will take out people from here, put there. The MSMEs don’t have that luxury. So I guess it is creators have to create frameworks so the right technology is created. The users, the big guys, have to quickly tell the methodology. And then the other stakeholders, like the service providers, also quickly have to go from, like I come from this industry where we had custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?

Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since AI, as we have been hearing words like it’s a civilization change, it is similar to electricity and it will change everything. So because of the impact… at society level to each one of us, the governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem. Absolutely.

Shantheri Mallaya

Very rightly stated about the fact that there is a larger collective responsibility from the bigger players in trying to define the standards. I think it’s very critical. Amol, if I may ask you, like Pratibha said that there is an accountability. MSMEs are growing and there is also policy that supports the growth at this point of time, right? So in their hurry to really scale and innovate, they are often forgetting what guardrails and consequences they have to face when it comes to their AI policies, strategies, implementations. So what’s the role of the ecosystem? The industry, industry bodies and the entire ecosystem at large in helping responsible AI moving in letter and spirit. So

Amol Deshpande

Shantiri, I think the first step towards being responsible towards anything is awareness, right? So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for that. That’s the first thing. Second is comes the action part of it, awareness, action, and then you demonstrate it through your product services or whatever you are trying to create and generate that kind of impact. How does that allocate? I echo the sentiment which Pravita has mentioned here is in terms of big players have to come up with those frameworks. Those frameworks need to get translated and industry partnership is a very key thing here through the industry bodies, right?

Where the learnings have to be disseminated into this. Second is it’s more of a demand and supply kind of a thing. So if the supply is there with the kind of right guardrails and responsible aspects which are there as a part of framework, then naturally the supplier starts aligning to it. For a business like us where we deal from, infrastructure to healthcare. and IT to agriculture, tires. See, it is a very diverse element and there is a different kind of templates which we need to do so. Organizations like us have the responsibilities of creating a framework which will be fair to us as well as to our customers and partners and those learnings can be shared across the value chain where MSMEs may not have access to that kind of an information and that to domain specific.

Mind you, this would change. It’s not like this one guardrail construct will work for everybody, but it would vary from industry to industry, function to function, and that kind of a cascading through the industry bodies like FICCI and others is I think very, very critical. Absolutely. Thank

Shantheri Mallaya

you for that Amol. Dr. Satya, as we know now that there are enough global regulations or rules or recommendations that have come in. You have the EU AI Act, you have UNESCO’s recommendations, you have the OECD rules, I mean the principles, so on and so forth, and India is also kind of inching towards developing its own strategies, policies, and approaches at this point of time. so the real leadership question at this point of time that remains is how are we looking at marrying global best practices with the diversity the scale, the fire in the belly that India has at this point of time, we are really gearing to go but how are we really looking at it and besides of course we have a lot of domestic regulations industry wise as well, we have regulators we have even the DPDP Act, we have so many things that have come in how are we going to marry all of this and create harmony absolutely, I

Dr. Satya Ramaswamy

think taking Air India, we are an international airline so we operate in many countries, for example we go to North America, US, where the federal aviation regulation is the key regulator, then we go to Europe all places in Europe, then obviously we operate in India where the DGCA is the regulator and they are doing a great job overseeing this industry and likewise in other parts of the world, so by nature we are geared to looking at the regulation in all parts of the world and I think and being in compliance, and our customers are international, and when we operate in this international geographies, we have to comply with the appropriate regulations. And again, you know, aviation is a very safety -critical industry, right?

So what we do has a direct impact on the safety of the customers that we carry. And the notions are well embedded in the industry because of this, because it’s highly regulated. For example, even, you know, many of these planes can practically land themselves, right? I was in a simulator last week for an Airbus A320 and landing that plane in San Francisco Airport. And as we were coming in, the plane was set at seven miles from touchdown, and, you know, my trainer pilot gave me the control. And, you know, so I could run the plane practically on autopilot all the way. At the same time, you know, there is a red button in the joystick, so at any moment I feel that the airplane is not doing the right thing.

the autopilot control is not in the right thing and quickly cancel and take our control, right? So that is – this concept is well embedded in the airline industry. So we know that we need to obey the regulations and do the right thing that is safe for the customer, but we also help the human in the loop take control if at any moment we feel the safety is at risk. So bottom line, you know, we comply with all the regulations, and it doesn’t in any way constrain Indian innovation. For example, again, like I mentioned, we launched the global airline industry’s first -generation virtual agent out of India, and we have not had any challenge with any of the regulations around because we comply with all the regulations and we work with partners who approach it in the same spirit like Adobe.

Absolutely.

Shantheri Mallaya

So, Vishal, taking a thread from what Dr. Satya said, I’ll close with you here. Is industry -led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in some other related industries where they said, you know, it’s very difficult to answer this. But, yeah, self -regulation may be a way forward given the scale that we are operating on. I’d like to know your thoughts

Vishal Anand Kanvaty

Yeah, I think definitely the regulations are required, especially because AI can go berserk. And, you know, there are, I think, like I gave you the example on the transactions. You know, today all the UPI transactions can get declined, you know. And that’s where we have a check where we say, you know, this is the only percentage that I can decline, even if I have to let go of the other transactions, right. So those safeguards are very much, very much required. And when this has to be across the ecosystem, I think, you know, the regulations are mandatory. And obviously it has to be consulted and we have to work with everyone. But it’s important. While all of us realize.

it’s a great opportunity the innovation can really scale up I think regulation is one thing that I think we have to really take it as part of the initiatives, embed into our systems and then take it forward otherwise the chances of this having a challenge to us is really high

Shantheri Mallaya

Fair enough, I think in an economy that is maturing regulatory intervention is an inevitability that we must welcome at some level so great discussion, I think this was fantastic, itching to ask you more but I think we’ll have to call to a close this discussion, thank you so much let’s put our hands together to our esteemed panelists this was really nice may I now invite Ms. Sarika Guliani who is the Senior Director, Head AI Technology in Industry 4 .0, Communications, Mobile Manufacturing and Language Technologies at FICCI Thank you so much Sarika, please

Sarika Guliani

first of all what an insightful session I would say that if I start capturing the thoughts I think 2 minutes and 36 seconds would not justify that one but overall if I talk about it starting with Andy you mentioning about the initiatives what Eduvi was able to take in through and how you are responsibly developing the content Pratibha talking about the art which is really interesting whether you talk about accountability responsibility and transparency and of course Amol mentioning about all the five layers and how the of course the responsible development of AI and the permutation needs to be done Dr. Satya no second thoughts to it and that again goes to the NPCI also the kind of work which the national carrier of India is doing it or NPCI is handling it it has to be a balance of the responsible AI and the efficiency and actually the act which can be taken and the agency itself and the so we left it on the note that what regulation is required while that’s a sentence would require another session on this because they would have a people who will talk about a light touch regulation versus a balanced regulation to something I’ll say that as part of FICCI and as part of the discussions what we were hearing it here we feel it that responsibility is not anymore a compliance check which is supposed to be there it’s a commitment of the technology that we should develop it which has a shared human values that is something what decisions we take now not in terms of the words what we are discussing here is not going to define our future it’s not something which is what we create we define what we choose to create is something what will get defined that is something is very important so the choice comes out from the whole thing you have heard the panelist talking about whether it was from the input side to the output side give a very good example by taking it through the whole process it needs to be taken so we simply feel it whether it’s any layer it has to be developed in a way keeping people in mind and the theme of the summit people planet and progress should be kept in mind while doing any technological innovation which is keeping the principles of responsible AI into the mind.

That is something which we highly feel it and support it and I think with that I like to thank our esteemed panelists. Thank you Andy of course for joining us today. Shantri for moderating it and well capturing in time. And of course our panelists Dr. Satya Ramaswamy, Chief Digital and Technology Officer Air India. Mr. Amol Deshpande, Chief Digital Officer Head of Innovation RPG Group. Mr. Vishal Kanwati, CTO, NPCI Ms. Pratibha Mohapatra, Managing Director Adobe and the of course my lovely audience through which we could sail through this and as part of FICCI we are thankful that we could have this joint session with Adobe, the team of Adobe and Nita and Nanya who had worked with my team and the people at the background who delivered it.

So thank you all for joining us. We don’t end the discussion here. We end the session here. FICCI is committed to get this dialogue further into action with the support of the players. And we look forward to your joining in that time. Thank you. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (7)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“The session was presented by Adobe in association with FICCI and titled “Responsible AI from Principles to Practice in Corporate India.””

The knowledge base explicitly states that the discussion titled “Responsible AI from Principles to Practice in Corporate India” was presented by Adobe in association with FICCI, confirming the partnership and session title [S2].

Confirmedhigh

“EU AI Act’s enforcement provisions take effect in August.”

EU AI Act enforcement begins in August, with oversight authorities appointed and penalties enforceable from 2 August (and the Act itself entered into force on 1 August 2024) [S72] and [S73].

Confirmedhigh

“Adobe leads the Content Provenance and Authenticity (C2PA) credentials, an open, free, cross‑industry standard that embeds provenance metadata directly into media files.”

C2PA is described as a technical standard that enables creators to attach cryptographically signed provenance metadata to media, and is supported by Adobe among other companies, confirming its open, cross-industry nature [S37] and [S76].

Confirmedmedium

“Amol Deshpande advocated a “bring‑your‑own‑AI” approach for organisational governance.”

The discussion notes that the phrase “bring your own AI” was highlighted and praised during the session, confirming its use by speakers such as Amol Deshpande [S1].

Additional Contextmedium

“India’s new IT rules on SGI are being implemented, requiring platforms to label synthetic content and act on it.”

India has introduced rules that obligate social-media platforms to label AI-generated/deep-fake content and remove flagged material within three hours, providing concrete detail on the regulatory environment referenced in the report [S79].

External Sources (82)
S1
Responsible AI in India Leadership Ethics & Global Impact part1_2 — -Vishal Anand Kanvaty- Chief Technology Officer, National Payments Corporation of India (NPCI)
S2
Responsible AI in India Leadership Ethics & Global Impact — -Vishal Anand Kanwati- Chief Technology Officer, National Payments Corporation of India (NPCI)
S3
Responsible AI in India Leadership Ethics & Global Impact part1_2 — The discussion concluded with Sarika Guliani from FICCI emphasising that “responsibility is not anymore a compliance che…
S4
Responsible AI in India Leadership Ethics & Global Impact — The session concluded with FICCI’s commitment to continue translating discussions into actionable frameworks. Sarika Gul…
S5
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact-part1_2 — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S6
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Dr. Satya Ramaswamy- Vishal Anand Kanvaty – Vishal Anand Kanvaty- Dr. Satya Ramaswamy Dr. Satya focuses on balancing…
S7
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S8
Responsible AI in India Leadership Ethics & Global Impact part1_2 — -Shantheri Mallaya- Editor at Economic Times, panel moderator
S9
Responsible AI in India Leadership Ethics & Global Impact — -Shantari Malaya- Editor at Economic Times, panel moderator
S10
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me qu…
S11
Responsible AI in India Leadership Ethics & Global Impact — -Prativa Mohapatra- Vice President and Managing Director of Adobe India
S12
Driving U.S. Innovation in Artificial Intelligence — 2. Amy Cohen – Executive Director, National Association of State Election Directors 3. Andy Parsons – Senior Director of…
S13
Responsible AI in India Leadership Ethics & Global Impact — -Andy Parsons- Global Head for Content Authenticity at Adobe, runs the Content Authenticity Initiative at Adobe
S14
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — And last, enterprises. Like many of yours in this room, I’m sure you’ve all heard the phrase, that are willing and excit…
S15
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S16
Responsible AI in India Leadership Ethics & Global Impact — – Andy Parsons- Amol Deshpande – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S17
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S18
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S19
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S20
Closing remarks – Charting the path forward — Importance of moving from principles to practical implementation
S21
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Safe SAIF, secure AI framework is something we have shared outside. And it is important to understand supply chain risk….
S22
Ethics and AI | Part 6 — A significant focus of the Act is placed on transparency. It mandates that users be informed when they are interacting w…
S23
Building the Next Wave of AI_ Responsible Frameworks & Standards — This comment fundamentally shifted the discussion from viewing responsibility as a constraint on innovation to seeing it…
S24
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion concluded that India’s opportunity in AI and semiconductors is real but time-bound, requiring decisive ex…
S25
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — This comment shifted the discussion from high-level policy frameworks to concrete, relatable concerns. It established a …
S26
Toward Collective Action_ Roundtable on Safe & Trusted AI — And AI is allowing this to happen at a scale that at the moment we already see disruptions, but I think there’s real ris…
S27
AI as critical infrastructure for continuity in public services — This comment provides a concrete, measurable example of how AI exclusion occurs, moving beyond abstract discussions of i…
S28
The rise and risks of synthetic media — The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in he…
S29
AI slop’s meteoric rise and the impact of synthetic content in 2026 — In December 2025, the Macquarie Dictionary, Merriam-Webster, and the American Dialect Society named ‘slop’ as the Word o…
S30
Meta India VP highlights AI’s role in ensuring user safety against misinformation — Meta India Vice President Sandhya Devanathan said the companyuses AI to combat misinformationwhile stressing that it wil…
S31
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste. Honorable Minister Vaishnav, Your Excellency’s colleagues, let me begin by thanking our host, Prime Minister Mo…
S32
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S33
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S34
Conversational AI in low income & resource settings | IGF 2023 — Finding the right balance between regulation and innovation is crucial. By addressing these issues, AI can play a signif…
S35
Open Forum #17 AI Regulation Insights From Parliaments — Balancing Innovation and Regulation There’s a critical balance needed between regulation and innovation incentives. Cou…
S36
What is it about AI that we need to regulate? — Global AI Governance Initiatives: Directions and TrajectoriesGlobal AI governance initiatives are heading toward multipl…
S37
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges Charlie Halford…
S38
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Nadja Blagojevic: Yes, very happy to. And thank you so much for having Google here. We’re very happy to be speaking with…
S39
WSIS Action Line C7 E-environment — – **Governance and Implementation Challenges**: Common barriers across all initiatives including unclear policy framewor…
S40
Advancing Scientific AI with Safety Ethics and Responsibility — -Global South Perspectives and Adaptation: A significant focus was placed on how emerging scientific powers can shape AI…
S41
Laying the foundations for AI governance — – Industry attitudes toward regulation: whether companies genuinely want regulation or resist it due to power concentrat…
S42
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Andy Parsons positioned regulation as helping enterprises move from reactive to proactive responsible AI adoption. The u…
S43
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Agent Standards Initiative focuses on industry-driven, consensus-based voluntary standards rather than regulation There…
S44
Industry leaders partner to promote responsible AI development — Anthropic, Google, Microsoft, and OpenAI, four of the most influential AI companies,have joined to establishtheFrontier …
S45
Building the Next Wave of AI_ Responsible Frameworks & Standards — I think there is a significant role the governments, innovation hubs, academia, and startups have to play in developing …
S46
Safe and Responsible AI at Scale Practical Pathways — A sustainable data economy requires clear incentive models with guaranteed trust, value creation, and exchangeability me…
S47
AI for agriculture Scaling Intelegence for food and climate resiliance — The minister emphasizes that artificial intelligence in agriculture should rest on reliable data sources, be governed by…
S48
Opening address of the co-chairs of the AI Governance Dialogue — Infrastructure | Legal and regulatory International technical standards and their role to make sure that policy and reg…
S49
Responsible AI in India Leadership Ethics & Global Impact — Aviation industry’s safety-critical nature provides embedded concepts of human-in-the-loop control and regulatory compli…
S50
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S51
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Matilda Road:Mathilda, over to you. Thank you, Florian. Good morning everyone. It’s great to see so many of you here and…
S52
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — in the world in terms of policy and regulation. When Vision 2030 was launched by His Royal Highness the Crown Prince, we…
S53
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S54
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S55
WS #283 AI Agents: Ensuring Responsible Deployment — User control and human oversight are essential safeguards, particularly for high-impact decisions that are difficult to …
S56
Agentic AI in Focus Opportunities Risks and Governance — All panelists emphasized the critical importance of enterprise guardrails and human oversight. They stressed that while …
S57
Policy Guidelines — – ◾ Section 1: The Development of Open Access to Scientific Information and Research , gives an overview of the definiti…
S58
Is the AI bubble about to burst? Five causes and five scenarios — Historically,open systems often win in the long run– think of the internet, HTML, and Linux. They become standards, attr…
S59
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The level of consensus among the speakers was relatively high, particularly on the benefits and potential applications o…
S60
Comprehensive Report: European Approaches to AI Regulation and Governance — And how would the downstream provider offering then this final system to the border control or to the, for instance, to …
S61
Google to require disclosure of AI-generated content in political ads — Googleis implementing new rules requiring political ads on its platforms to disclose when images and audio are generated…
S62
Day 0 Event #261 Navigating Ethical Dilemmas in AI-Generated Content — Human rights | Legal and regulatory | Sociocultural Information Integrity and Human Rights Framework There must be dis…
S63
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — – Ioanna Ntinou- Mark Gachara Example of energy efficiency passes for houses in Germany and EU that are obligatory, mak…
S64
Responsible AI in India Leadership Ethics & Global Impact — And our customers are international, and when we operate in this international geographies, we have to comply with the a…
S65
Responsible AI in India Leadership Ethics & Global Impact part1_2 — This comment reframes the entire discussion from theoretical principles to practical implementation. It shifts the focus…
S66
What is it about AI that we need to regulate? — Global AI Governance Initiatives: Directions and TrajectoriesGlobal AI governance initiatives are heading toward multipl…
S67
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges Charlie Halford…
S68
From principles to practice: Governing advanced AI in action — – **Implementation Challenges Across Jurisdictions**: Participants highlighted the tension between rapid technological a…
S69
WSIS Action Line C7 E-environment — – **Governance and Implementation Challenges**: Common barriers across all initiatives including unclear policy framewor…
S70
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Examples of sectoral self-regulations are in the case of Mauritius in the perspective of increasing the capacity of exis…
S71
How AI in 2026 will transform management roles and organisational design — In 2026, AI will transform management structures and automate tasks as companies strive to demonstrate real value. By 20…
S72
EU AI Act oversight and fines begin this August — A new phase of the EU AI Acttakes effect on 2 August, requiring member states to appoint oversight authorities and enfor…
S73
EU AI Act officially comes into force — The world’s first comprehensive AI law, known as the EU AI Act, officially came intoforceon 1 August 2024, marking a sig…
S74
Keynotes — Legal and regulatory | Human rights O’Flaherty calls for the EU to maintain its commitment to enforcing the Digital Ser…
S75
EU AI Act published in Official Journal, initiating countdown to legal deadlines — The European Union has finalised its AI Act, a significant regulatory framework aimed at governing the use of AI within …
S76
Certifying humanity: Labeling content amid AI flood — These debates are no longer theoretical. Provenance-based initiatives such as theContent Authenticity Initiative (C2PA),…
S77
Day 0 Event #12 Tackling Misinformation with Information Literacy — Zoe Darma: to start with a quiz and that there are no wrong answers, but there actually are. There actually are right …
S78
Day 0 Event #265 Using Digital Platforms to Promote Info Integrity — Gisella Lomax connected online misinformation to devastating real-world consequences: “Information risks such as hate sp…
S79
India enforces a three-hour removal rule for AI-generated deepfake content — Strict new ruleshave been introducedin India for social media platforms in an effort to curb the spread of AI-generated …
S80
Harnessing Collective AI for India’s Social and Economic Development — <strong>Moderator:</strong> sci -fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S81
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S82
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Andy Parsons
7 arguments190 words per minute2010 words632 seconds
Argument 1
Principles‑to‑practice imperative (Andy Parsons)
EXPLANATION
Andy stresses that responsible AI must move beyond abstract principles and become a demonstrable part of corporate compliance and strategy. He frames this shift as essential for 2026, when responsibility will be both a regulatory requirement and a business opportunity.
EVIDENCE
He notes that responsible AI will stop being a slide in a deck and become part of a compliance strategy and an important opportunity, and that the panel’s theme is “the shift from principles to provable practice” [33-34]. He also points out that responsibility will become a discipline rather than a mere policy statement [32-33].
MAJOR DISCUSSION POINT
Principles‑to‑practice imperative (Andy Parsons)
Argument 2
C2PA content credentials as an open, interoperable standard (Andy Parsons)
EXPLANATION
Andy describes the C2PA content credentials as an open, cross‑industry standard that attaches provenance information to any media asset. The standard is designed to be freely adoptable and interoperable across tools and platforms.
EVIDENCE
He explains that five years of work produced the open C2PA standard, that a C2PA symbol appears on LinkedIn, and that the credentials provide transparent context for videos, audio, or images [61-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Andy’s description of C2PA matches the external mention of an open, free C2PA content credentials standard developed five years ago [S1][S2].
MAJOR DISCUSSION POINT
C2PA content credentials as an open, interoperable standard (Andy Parsons)
AGREED WITH
Prativa Mohapatra, Vishal Anand Kanvaty, Moderator, Sarika Guliani
DISAGREED WITH
Amol Deshpande, Prativa Mohapatra
Argument 3
Need for industry‑wide, standards‑based infrastructure (Andy Parsons)
EXPLANATION
Andy argues that a shared, standards‑based infrastructure for content trust is essential and must not be owned by any single company. He calls for an open, interoperable layer that any organization can adopt to embed transparency into AI‑generated content.
EVIDENCE
He highlights a cross-industry coalition that includes Adobe, Microsoft, BBC, OpenAI, Sony, Qualcomm and others, creating an infrastructure layer for content trust that is standards-based, non-proprietary, and available to everyone [66-70] and stresses that this philosophy is especially important for India [71-73].
MAJOR DISCUSSION POINT
Need for industry‑wide, standards‑based infrastructure (Andy Parsons)
AGREED WITH
Prativa Mohapatra, Amol Deshpande, Vishal Anand Kanvaty
DISAGREED WITH
Amol Deshpande, Prativa Mohapatra
Argument 4
EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons)
EXPLANATION
Andy points out that emerging regulatory regimes—such as the EU AI Act, California’s AI law, and India’s new IT rules—are compelling organizations to embed responsible AI practices now. He frames regulation as a catalyst for good practices rather than a purely punitive force.
EVIDENCE
He cites the EU AI Act’s enforcement provisions taking effect in August, the first U.S. state law in California, and India’s new IT rules on SGI, noting that India is actively shaping its own path [25-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The focus on the EU AI Act’s transparency requirements and balanced regulation is reflected in the EU AI Act transparency provisions [S22] and discussions on balancing regulation and innovation [S34][S35].
MAJOR DISCUSSION POINT
EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons)
AGREED WITH
Vishal Anand Kanvaty, Dr. Satya Ramaswamy, Sarika Guliani
DISAGREED WITH
Vishal Anand Kanvaty, Sarika Guliani
Argument 5
Embedding responsible AI at the core of products is essential rather than treating it as a bolt‑on feature
EXPLANATION
Andy argues that responsible AI must be baked into the core architecture of tools, not added later as an afterthought, to ensure genuine trust and provenance.
EVIDENCE
He explains that five years ago Adobe decided that responsible AI via content transparency had to be baked into the core of products like Photoshop and Premiere, not grafted on as a feature [57-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on baking transparency into tools rather than grafting it on is echoed in external notes about core integration of content transparency [S1][S5].
MAJOR DISCUSSION POINT
Core integration of responsible AI into products (Andy Parsons)
Argument 6
The AI trust crisis is real, concrete and impacts everyday users and businesses
EXPLANATION
Andy points out that the trust crisis caused by AI‑generated content is a tangible, daily problem affecting consumers, children, and enterprises across India’s diverse linguistic landscape.
EVIDENCE
He describes the trust crisis with AI as real, concrete, happening every day to children, businesses and individuals, especially across India’s cultural and linguistic diversity [37-39].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The real-world trust erosion and synthetic media risks are discussed in roundtable remarks on trust breakdown [S26] and the rise of synthetic media [S28][S29].
MAJOR DISCUSSION POINT
Real‑world AI trust crisis (Andy Parsons)
Argument 7
India’s massive digital population makes synthetic content and AI‑generated misinformation operational risks for businesses
EXPLANATION
Andy highlights that with hundreds of millions of daily digital consumers, AI‑generated misinformation is not abstract but an operational risk that enterprises must manage.
EVIDENCE
He notes that India has the world’s largest digital population, and that synthetic content and AI-generated misinformation are real operational risks for businesses [46-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s large digital user base and misinformation challenges are highlighted in the Meta India VP remarks on AI combating misinformation [S30] and the broader risks of synthetic media [S28].
MAJOR DISCUSSION POINT
Operational risks of AI‑generated misinformation in India (Andy Parsons)
S
Shantheri Mallaya
3 arguments159 words per minute1631 words611 seconds
Argument 1
Translating principles into enterprise strategy (Shantheri Mallaya)
EXPLANATION
Shantheri frames the central challenge as moving responsible‑AI principles—fairness, accountability, transparency, privacy, inclusivity—into concrete enterprise strategy frameworks. She asks panelists to explain how these values can be operationalised in real business contexts.
EVIDENCE
In her opening she asks how responsible-AI principles will be realistically translated into enterprise strategy frameworks and how organisations will go about it [144-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from principles to practice is also noted in the closing remarks [S20] and the responsible AI as an enabler discussion [S23].
MAJOR DISCUSSION POINT
Translating principles into enterprise strategy (Shantheri Mallaya)
Argument 2
India is positioning itself as a global leader in trustworthy and inclusive AI
EXPLANATION
Shantheri highlights that India is charting the course for the world in building trustworthy and inclusive AI, indicating a leadership role on the international stage.
EVIDENCE
She remarks that India is really charting the course for the world and that building trustworthy and inclusive AI is a momentous time for the country [135-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s leadership in trustworthy AI is highlighted in summit remarks on inclusive AI development [S31] and the global vision plenary noting India’s role [S32].
MAJOR DISCUSSION POINT
India as a global leader in trustworthy and inclusive AI (Shantheri Mallaya)
Argument 3
Balancing AI‑driven innovation with regulation and user experience is essential
EXPLANATION
She stresses the need to balance rapid AI innovation with regulatory compliance and maintaining a high-quality user and customer experience.
EVIDENCE
She asks how to balance AI-driven innovation, regulation, accountability, operational efficiency, and user experience within large-scale aviation operations [245-249].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for balance between regulation and innovation is discussed in the IGF session on conversational AI [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Balancing innovation, regulation and user experience (Shantheri Mallaya)
S
Sarika Guliani
2 arguments142 words per minute590 words249 seconds
Argument 1
Commitment beyond compliance, embedding human values (Sarika Guliani)
EXPLANATION
Sarika argues that responsible AI should be seen as a commitment to shared human values rather than a mere compliance checkbox. She stresses that technology choices now shape the future, and that ethical considerations must be embedded from the outset.
EVIDENCE
She states that responsibility is no longer a compliance check but a commitment of technology with shared human values, and that the choice of what to create defines our future, not just words on a slide [379-382].
MAJOR DISCUSSION POINT
Commitment beyond compliance, embedding human values (Sarika Guliani)
AGREED WITH
Andy Parsons, Prativa Mohapatra, Vishal Anand Kanvaty, Moderator
Argument 2
Regulation should be balanced, avoiding overly heavy‑handed approaches
EXPLANATION
Sarika argues that while regulation is necessary, it should be proportionate and not stifle innovation, advocating for a light‑touch regulatory approach where appropriate.
EVIDENCE
She notes that the discussion would need another session to compare light-touch versus balanced regulation, indicating a preference for proportionate regulatory frameworks [379-382].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balanced regulatory approaches are advocated in the IGF discussion on regulation vs innovation [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Need for balanced, proportionate regulation (Sarika Guliani)
P
Prativa Mohapatra
6 arguments156 words per minute1126 words432 seconds
Argument 1
Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra)
EXPLANATION
Prativa explains Adobe’s internal “ART” philosophy—Accountability, Responsibility, Transparency—and shows how it is baked into its generative AI tool Firefly and the Acrobat Assistant. This ensures that outputs are traceable, lawful, and trustworthy.
EVIDENCE
She describes how Firefly embeds a “nutrition label” that guarantees lawful, non-infringing output, and how Acrobat Assistant follows the same provenance principles, allowing users to trace the origin of content and ensure compliance [197-210] and [222-228].
MAJOR DISCUSSION POINT
Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra)
Argument 2
Product‑level governance methodology with hundreds of checks (Prativa Mohapatra)
EXPLANATION
Prativa notes that every new Adobe product undergoes a rigorous, secure development methodology that includes hundreds of validation steps, embedding responsible‑AI principles directly into the product lifecycle.
EVIDENCE
She states that each new product goes through a very strong, secure methodology with hundreds of steps, ensuring principles are embedded into creation processes [206-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The mention of a strong, secure methodology with hundreds of validation steps aligns with external commentary on product governance processes [S5].
MAJOR DISCUSSION POINT
Product‑level governance methodology with hundreds of checks (Prativa Mohapatra)
Argument 3
Risk of a divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra)
EXPLANATION
Prativa warns that a gap could emerge between large AI developers and smaller firms that lack resources, emphasizing the need for free, open frameworks that all can adopt. She cites Adobe’s early C2PA work as an example of making standards freely available.
EVIDENCE
She highlights the stark divide between big and small enterprises, the importance of free, accessible frameworks, and references Adobe’s 2019 content authentication initiative as a pioneering open effort [297-304] and notes that creators must continue providing such frameworks [305-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for open, free frameworks for all enterprises echo the discussion of open standards and inclusive AI leadership [S23][S31].
MAJOR DISCUSSION POINT
Risk of a divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra)
Argument 4
Large players must create reusable, open frameworks that MSMEs can adopt (Prativa Mohapatra)
EXPLANATION
Prativa argues that large enterprises should develop reusable, open‑source frameworks that smaller businesses can leverage, ensuring responsible AI does not become a luxury only for the well‑resourced. She calls for ongoing collaboration among technology creators to extend methodologies to the broader ecosystem.
EVIDENCE
She states that large enterprises must create frameworks that MSMEs can adopt, and that creators need to keep building methods for others to use, emphasizing the need for open, reusable solutions [315-317].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation for large firms to provide reusable, open frameworks matches the emphasis on open standards and inclusive AI development [S23][S31].
MAJOR DISCUSSION POINT
Large players must create reusable, open frameworks that MSMEs can adopt (Prativa Mohapatra)
Argument 5
Legal, compliance and ethics teams must redesign processes to embed AI governance
EXPLANATION
Prativa emphasizes that enterprises need to re‑opt and redesign their legal, compliance and ethical processes to incorporate AI governance throughout the organization.
EVIDENCE
She states that every organization has legal and compliance teams that must be re-opted and re-designed to address AI compliance, ensuring all three pillars-legal, compliance and ethics-are covered [234-237].
MAJOR DISCUSSION POINT
Re‑designing legal and compliance processes for AI governance (Prativa Mohapatra)
Argument 6
AI governance requires integration of people, process and technology, reflecting the ART philosophy
EXPLANATION
She outlines that responsible AI must combine accountability, responsibility, and transparency across people, processes, and technology, mirroring the ART framework used at Adobe.
EVIDENCE
She notes that enterprises need legal, compliance, and ethical strategies together, and that AI governance must tick all three-people, process, technology-to be ready for the future [233-236].
MAJOR DISCUSSION POINT
Holistic integration of people, process and technology in AI governance (Prativa Mohapatra)
A
Amol Deshpande
5 arguments181 words per minute759 words251 seconds
Argument 1
Orchestration across all AI layers; people‑centric governance (Amol Deshpande)
EXPLANATION
Amol stresses that responsible AI must be orchestrated across every layer of the AI stack and that people are a critical stakeholder. He advocates a “bring‑your‑own‑AI” approach with guardrails, rather than a one‑size‑fits‑all solution.
EVIDENCE
He explains that responsibility must exist at every AI layer, that people are a very important stakeholder, and that a scalable, safe environment with guardrails is essential, describing a “bring your own AI” scenario and the need for templates [162-166] and [169-176] and [177-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guardrails across the AI stack and people-centric governance are highlighted in the generative AI guardrails discussion [S33].
MAJOR DISCUSSION POINT
Orchestration across all AI layers; people‑centric governance (Amol Deshpande)
Argument 2
Awareness → action → demonstration cycle; role of industry bodies in disseminating frameworks (Amol Deshpande)
EXPLANATION
Amol outlines a three‑step process—awareness, action, demonstration—to embed responsible AI, and highlights the pivotal role of industry bodies in spreading best‑practice frameworks across sectors.
EVIDENCE
He states that the first step is awareness, followed by action, then demonstration, and that industry bodies (e.g., FICCI) are crucial for disseminating learnings and templates across the value chain [332-340] and [341-347].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The three-step cycle and role of industry bodies are reflected in the IGF roundtable on safe AI and the open forum on regulation insights [S34][S35].
MAJOR DISCUSSION POINT
Awareness → action → demonstration cycle; role of industry bodies in disseminating frameworks (Amol Deshpande)
DISAGREED WITH
Vishal Anand Kanvaty, Sarika Guliani
Argument 3
RPG Group’s need for flexible, scalable AI governance across diverse business units (Amol Deshpande)
EXPLANATION
Amol describes the RPG Group’s challenge of governing AI across a heterogeneous conglomerate, emphasizing that a single solution cannot fit all units and that flexible, scalable guardrails are required.
EVIDENCE
He notes the need for a scalable, safe environment with guardrails, that one size doesn’t fit all, and that templates are being exercised within the enterprise across diverse business units [168-180] and [181-184].
MAJOR DISCUSSION POINT
RPG Group’s need for flexible, scalable AI governance across diverse business units (Amol Deshpande)
Argument 4
Industry bodies help cascade standards and templates to MSMEs lacking resources (Amol Deshpande)
EXPLANATION
Amol argues that industry associations can bridge the resource gap for MSMEs by sharing standards, templates, and best practices, enabling smaller firms to adopt responsible AI without building frameworks from scratch.
EVIDENCE
He mentions that organizations like FICCI can help cascade frameworks, that MSMEs lack access to such information, and that industry bodies are critical for sharing learnings across sectors [344-347].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry associations bridging resource gaps for MSMEs are discussed in the IGF session on regulation and innovation [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Industry bodies help cascade standards and templates to MSMEs lacking resources (Amol Deshpande)
Argument 5
Enterprises need a scalable, safe AI environment with built‑in guardrails
EXPLANATION
Amol stresses that large organisations must provide a scalable environment where AI operates safely, with guardrails that protect against misuse while allowing flexibility.
EVIDENCE
He describes the need for a scalable, safe environment protected with guardrails as a key requirement for the enterprise [180-182].
MAJOR DISCUSSION POINT
Scalable safe AI environment with guardrails (Amol Deshpande)
D
Dr. Satya Ramaswamy
4 arguments183 words per minute1035 words338 seconds
Argument 1
Global regulatory compliance coexists with innovation; safety‑critical aviation example (Dr. Satya Ramaswamy)
EXPLANATION
Satya explains that Air India must comply with a patchwork of international regulations (US, EU, India) while still innovating with AI. He stresses that safety‑critical aviation standards drive rigorous compliance without stifling innovation.
EVIDENCE
He notes Air India’s operations across multiple jurisdictions, the need to obey DGCA, FAA, and EU regulators, and that compliance does not constrain Indian innovation, citing the partnership with Adobe and the launch of a global AI virtual assistant [351-364].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing global regulatory compliance with innovation is discussed in the IGF session on regulation and innovation [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Global regulatory compliance coexists with innovation; safety‑critical aviation example (Dr. Satya Ramaswamy)
Argument 2
Air India’s generative‑AI virtual assistant with safety guardrails and continuous monitoring (Dr. Satya Ramaswamy)
EXPLANATION
Satya details Air India’s AI‑driven virtual assistant that handles millions of customer queries, operates with a 97 % autonomous success rate, and incorporates multiple safety guardrails, continuous monitoring, and user feedback loops to prevent misuse.
EVIDENCE
He describes the launch in May 2023, handling 13.5 million queries, 97 % autonomous handling, safety knobs, jailbreak prevention, real-time monitoring, and the use of generative AI to watch its own performance, with Adobe providing indemnity [257-270] and [261-268].
MAJOR DISCUSSION POINT
Air India’s generative‑AI virtual assistant with safety guardrails and continuous monitoring (Dr. Satya Ramaswamy)
Argument 3
Safety‑critical aviation demands continuous human‑in‑the‑loop oversight of AI systems
EXPLANATION
Satya explains that because aviation is safety‑critical, AI systems must always allow a human operator to intervene instantly, ensuring safety overrides automated decisions.
EVIDENCE
He describes the red button on the joystick that lets a pilot take control at any moment if the autopilot behaves incorrectly, illustrating the human-in-the-loop safety mechanism [360-362].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop oversight for safety‑critical AI (Dr. Satya Ramaswamy)
Argument 4
Partnerships with technology providers like Adobe provide indemnity and confidence in AI deployments
EXPLANATION
Satya highlights that collaborations with firms such as Adobe, which offer indemnity, give Air India confidence to adopt AI while managing risk.
EVIDENCE
He notes that Adobe provides full indemnity in case of problems, which gives a lot of confidence in managing AI risk [269-270].
MAJOR DISCUSSION POINT
Strategic tech partnerships to mitigate AI risk (Dr. Satya Ramaswamy)
V
Vishal Anand Kanvaty
2 arguments184 words per minute582 words189 seconds
Argument 1
Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty)
EXPLANATION
Vishal argues that regulatory frameworks are necessary because unchecked AI can produce harmful outcomes; safeguards embedded in law protect the ecosystem and maintain trust.
EVIDENCE
He states that regulations are required because AI can go berserk, that safeguards are mandatory to prevent such behavior, and that regulations must be embedded into systems and consulted with stakeholders [370-376].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of regulation to prevent uncontrolled AI behavior is highlighted in the IGF discussion on regulation balance [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty)
DISAGREED WITH
Amol Deshpande, Sarika Guliani
Argument 2
NPCI’s AI for fraud detection prioritising low false‑positives and transparent explanations (Vishal Anand Kanvaty)
EXPLANATION
Vishal explains NPCI’s AI‑driven fraud detection system, which aims to keep false‑positive rates low while providing transparent, user‑facing explanations for declined transactions, thereby building trust in the payment ecosystem.
EVIDENCE
He notes the priority of minimizing false positives, the development of a language model that can explain why a transaction was declined, and that this transparency aligns with RBI’s responsible-AI framework, helping maintain trust in the payment system [286-294] and [295-301].
MAJOR DISCUSSION POINT
NPCI’s AI for fraud detection prioritising low false‑positives and transparent explanations (Vishal Anand Kanvaty)
M
Moderator
3 arguments132 words per minute132 words59 seconds
Argument 1
Responsible deployment outweighs speed of AI adoption
EXPLANATION
The moderator stresses that while AI can accelerate innovation, the priority must be on deploying it responsibly rather than merely adopting it quickly. Speed without responsibility could undermine trust and safety.
EVIDENCE
He notes that the real differentiator is not how quickly AI is adopted but how responsibly it is deployed, emphasizing the need for responsible AI over rapid adoption [3-4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on responsible deployment over speed mirrors the closing remarks on moving from principles to practice [S20].
MAJOR DISCUSSION POINT
Responsible deployment outweighs speed of AI adoption (Moderator)
Argument 2
Trust, transparency and accountability are foundational for AI in corporate India
EXPLANATION
The moderator frames trust, transparency, and accountability as non‑optional, foundational elements that must underpin AI initiatives in Indian enterprises.
EVIDENCE
He declares that trust, transparency and accountability are no longer optional and are foundational for the discussion on responsible AI [5-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Foundational importance of trust, transparency and accountability is reflected in the EU AI Act transparency focus [S22] and roundtable concerns about trust breakdown [S26].
MAJOR DISCUSSION POINT
Foundational role of trust, transparency and accountability (Moderator)
Argument 3
The session aims to advance safe and trusted AI in the corporate landscape
EXPLANATION
The moderator sets the purpose of the session as focusing on advancing safe, trusted AI practices within corporations.
EVIDENCE
He states that the conversation will center on advancing safe and trusted AI in the corporate landscape [7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session’s goal aligns with the overall theme of advancing safe, trusted AI in the responsible AI discussions [S20][S23].
MAJOR DISCUSSION POINT
Advancing safe and trusted AI in corporate sector (Moderator)
Agreements
Agreement Points
Transparency and provenance of AI‑generated content must be embedded in products and made openly verifiable.
Speakers: Andy Parsons, Prativa Mohapatra, Vishal Anand Kanvaty, Moderator, Sarika Guliani
C2PA content credentials as an open, interoperable standard (Andy Parsons) Adobe’s “ART” philosophy with nutrition‑label style provenance in Firefly (Prativa Mohapatra) NPCI’s transparent explanations for declined transactions (Vishal Anand Kanvaty) Trust, transparency and accountability are foundational (Moderator) Commitment beyond compliance, embedding human values (Sarika Guliani)
All speakers stress that responsible AI requires concrete, transparent provenance mechanisms-whether via open standards like C2PA, Adobe’s built-in nutrition labels, or transaction-level explanations-so that users can see how content or decisions are generated and trust the system [5-6][61-66][74-76][209-210][293-294][379-382].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy trends emphasize mandatory disclosure of AI-generated media, as seen in Google’s upcoming political-ad rules requiring clear labeling of synthetic content [S61] and broader calls for algorithmic transparency in public-interest frameworks [S62]; NPCI’s own transparency-by-design approach for its language models reinforces this direction [S49].
Open, standards‑based infrastructure and reusable frameworks are essential for scaling responsible AI across industries.
Speakers: Andy Parsons, Prativa Mohapatra, Amol Deshpande, Vishal Anand Kanvaty
Need for industry‑wide, standards‑based infrastructure (Andy Parsons) Risk of divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra) Industry bodies help cascade standards and templates to MSMEs (Amol Deshpande) RBI framework and transparent AI models as a reusable foundation (Vishal Anand Kanvaty)
The panel concurs that responsible AI cannot rely on proprietary solutions; it must be built on open, cross-industry standards and reusable frameworks that can be adopted by both large firms and MSMEs, with industry bodies playing a key dissemination role [66-70][297-304][332-340][344-347][292-301].
POLICY CONTEXT (KNOWLEDGE BASE)
International bodies promote voluntary, consensus-driven standards (e.g., the Agent Standards Initiative) to foster interoperable, responsible AI ecosystems [S43]; the AI Standards Hub and multistakeholder dialogues stress the need for open technical standards that remain adaptable to regulatory needs [S48][S51].
Regulatory frameworks are a catalyst and necessary safeguard for responsible AI, but should be balanced and proportionate.
Speakers: Andy Parsons, Vishal Anand Kanvaty, Dr. Satya Ramaswamy, Sarika Guliani
EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons) Regulation is essential to prevent AI “berserk” behaviour (Vishal Anand Kanvaty) Global regulatory compliance coexists with innovation (Dr. Satya Ramaswamy) Regulation should be balanced, avoiding overly heavy‑handed approaches (Sarika Guliani)
All agree that regulation is indispensable-acting as a catalyst, ensuring safety, and providing a level playing field-while emphasizing the need for proportionate rules that do not stifle innovation [25-27][370-376][351-364][379-382].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry perspectives acknowledge that well-designed regulation can shift firms from reactive to proactive AI governance, providing clarity and urgency for responsible practices [S42]; however, scholars warn against over-regulation and advocate proportionate, context-sensitive rules that complement existing laws [S53][S45].
Human‑in‑the‑loop oversight and guardrails are critical, especially for safety‑critical applications.
Speakers: Dr. Satya Ramaswamy, Amol Deshpande
Human‑in‑the‑loop oversight for safety‑critical aviation AI (Dr. Satya Ramaswamy) Enterprises need scalable, safe AI environments with built‑in guardrails (Amol Deshpande)
Both speakers highlight that AI systems must include real-time human oversight and robust guardrails to ensure safety, whether in aviation or broader enterprise contexts [360-362][180-182].
POLICY CONTEXT (KNOWLEDGE BASE)
Aviation safety standards embed human-in-the-loop controls and regulatory compliance, illustrating the necessity of oversight for high-risk AI systems [S49]; similar principles are echoed in broader AI governance discussions emphasizing user control and human accountability [S55][S56][S54].
Balancing rapid AI innovation with regulatory compliance and user experience is essential.
Speakers: Shantheri Mallaya, Dr. Satya Ramaswamy, Vishal Anand Kanvaty
Balancing AI‑driven innovation with regulation and user experience (Shantheri Mallaya) Global regulatory compliance coexists with innovation; safety does not constrain Indian innovation (Dr. Satya Ramaswamy) Low false‑positive rates and transparent explanations balance fraud detection with user trust (Vishal Anand Kanvaty)
The moderator and panelists agree that AI deployment must simultaneously pursue speed, compliance, and a high-quality user experience, using mechanisms such as transparent explanations and safety guardrails [245-249][351-364][286-294].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses stress the need to align fast-moving AI development with compliance mechanisms that do not hinder user experience, advocating incentive models that build trust while preserving innovation speed [S45][S46][S53].
Similar Viewpoints
Both stress that the priority is responsible AI deployment rather than merely rapid adoption, framing responsibility as a strategic imperative [3-4][33-34][5-6].
Speakers: Andy Parsons, Moderator
Responsible deployment outweighs speed of AI adoption (Moderator) Principles‑to‑practice imperative (Andy Parsons)
Both highlight the danger that responsible AI becomes a luxury for large firms and argue that industry bodies must provide open frameworks to enable MSMEs [297-304][332-340][344-347].
Speakers: Prativa Mohapatra, Amol Deshpande
Risk of divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra) Industry bodies help cascade standards and templates to MSMEs (Amol Deshpande)
Both see regulation as indispensable for safety and trust, even in highly regulated sectors like aviation and payments [351-364][370-376].
Speakers: Vishal Anand Kanvaty, Dr. Satya Ramaswamy
Regulation is essential to prevent AI “berserk” behaviour (Vishal Anand Kanvaty) Global regulatory compliance coexists with innovation; safety‑critical aviation demands compliance (Dr. Satya Ramaswamy)
Unexpected Consensus
Both a payment‑system leader (NPCI) and an airline (Air India) emphasize that AI safety must be achieved without compromising user experience, using transparent explanations and human oversight.
Speakers: Vishal Anand Kanvaty, Dr. Satya Ramaswamy
NPCI’s AI for fraud detection prioritising low false‑positives and transparent explanations (Vishal Anand Kanvaty) Air India’s generative‑AI virtual assistant with safety guardrails, continuous monitoring and human feedback (Dr. Satya Ramaswamy)
Despite operating in very different domains, both speakers converge on a model where AI safety, transparency, and user-centric design are jointly pursued, an alignment not explicitly anticipated at the start of the session [257-270][286-294].
POLICY CONTEXT (KNOWLEDGE BASE)
NPCI’s implementation of transparent small language models and Air India’s adherence to safety-critical, human-in-the-loop standards exemplify sector-specific applications of responsible AI that prioritize user experience alongside safety [S49].
An engineer (Andy Parsons) and a senior policy‑focused moderator both frame responsible AI as a strategic business opportunity rather than a compliance burden.
Speakers: Andy Parsons, Moderator
Embedding responsible AI as a leadership and operating discipline and opportunity (Andy Parsons) Trust, transparency and accountability are foundational for corporate AI (Moderator)
It is notable that a technical leader and the session moderator share a business-oriented view of responsible AI, treating it as a growth driver rather than a mere regulatory checkbox [32-33][5-6].
POLICY CONTEXT (KNOWLEDGE BASE)
Andy Parsons highlighted how emerging regulations can act as catalysts for proactive AI adoption, turning compliance into a competitive advantage, a view echoed by industry leaders who see responsible AI as a market differentiator [S42][S41].
Overall Assessment

The panel exhibits strong consensus on four core pillars: (1) embedding transparent provenance through open standards; (2) building open, reusable frameworks with industry‑body support; (3) viewing regulation as a necessary, balanced catalyst; and (4) ensuring human‑in‑the‑loop safety guardrails while balancing innovation and user experience.

High consensus across technical, business, and policy perspectives, indicating a unified direction for responsible AI implementation in India’s corporate sector. This alignment suggests that forthcoming initiatives are likely to prioritize open standards, collaborative governance, and proportionate regulation, facilitating scalable and trustworthy AI adoption.

Differences
Different Viewpoints
Extent and nature of regulation for AI
Speakers: Vishal Anand Kanvaty, Sarika Guliani, Andy Parsons
Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty) Regulation should be balanced, avoiding overly heavy‑handed approaches (Sarika Guliani) EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons)
Vishal argues that mandatory regulation is required to embed safeguards and prevent harmful AI outcomes [370-376]. Sarika counters that regulation must be proportionate and avoid stifling innovation, advocating a light-touch or balanced approach [379-382]. Andy frames regulation as a catalyst that pushes good practices rather than a punitive burden, citing the EU AI Act, California law and India’s IT rules as drivers for responsible AI [25-27][106-108]. These positions reveal a clear disagreement on how strong and prescriptive AI regulation should be.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI regulation range from calls for comprehensive safeguards to arguments for limited, sector-specific rules, reflecting divergent industry attitudes toward regulatory scope and the need for balanced policy design [S41][S42][S53].
Universal open standards versus industry‑specific, flexible frameworks
Speakers: Andy Parsons, Amol Deshpande, Prativa Mohapatra
C2PA content credentials as an open, interoperable standard (Andy Parsons) Need for industry‑wide, standards‑based infrastructure (Andy Parsons) One size doesn’t fit all… need templates per industry (Amol Deshpande) Risk of a divide… need free, accessible frameworks (Prativa Mohapatra)
Andy promotes a single, open, cross-industry standard (C2PA) that any organization can adopt, emphasizing non-proprietary, interoperable infrastructure [61-66][66-70]. Amol stresses that a “one size fits all” model is unrealistic and that each sector requires its own templates and guardrails, advocating a “bring-your-own-AI” approach [168-180]. Prativa warns that without free, open frameworks large enterprises will outpace MSMEs, underscoring the need for accessible standards to avoid a divide [297-304]. The speakers therefore disagree on whether a universal open standard can serve all sectors or whether tailored, industry-specific solutions are necessary.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between universal, open standards and adaptable, industry-specific frameworks is a recurring theme in AI governance, with initiatives like the Agent Standards Initiative advocating open, consensus-based standards while acknowledging the need for flexibility in implementation [S43][S48][S58].
Primary driver for responsible AI adoption – industry bodies versus regulatory mandates
Speakers: Amol Deshpande, Vishal Anand Kanvaty, Sarika Guliani
Awareness → action → demonstration cycle; role of industry bodies in disseminating frameworks (Amol Deshpande) Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty) Regulation should be balanced, avoiding overly heavy‑handed approaches (Sarika Guliani)
Amol emphasizes that the ecosystem should first become aware, then act, and finally demonstrate responsible AI, with industry associations (e.g., FICCI) playing a key role in cascading standards and templates to the broader market [332-340]. Vishal argues that regulation is indispensable to keep AI from behaving dangerously and must be embedded in systems [370-376]. Sarika, while acknowledging the need for regulation, calls for a proportionate, balanced approach that does not over-regulate, suggesting that industry bodies can complement but not replace regulation [379-382]. The tension lies in whether industry-led self-governance or statutory regulation should be the main engine for responsible AI.
Unexpected Differences
Open‑standard advocacy versus internal proprietary governance approaches
Speakers: Andy Parsons, Prativa Mohapatra
C2PA content credentials as an open, interoperable standard (Andy Parsons) Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra)
Andy strongly advocates for an industry-wide, free, open standard (C2PA) that any organization can adopt, emphasizing cross-industry interoperability [61-66][66-70]. Prativa, while supporting responsible AI, focuses on Adobe’s internal ART framework embedded within its own products, without explicitly championing an external open standard. This subtle divergence-external open standards versus internal proprietary governance-was not anticipated given the overall consensus on the need for transparency.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate pits open-standard advocates, who promote interoperable, community-driven specifications, against firms favoring proprietary governance models; this mirrors broader discussions on open ecosystems versus closed incumbents in technology history [S43][S44][S58].
Overall Assessment

The panelists uniformly agree that responsible AI, transparency, and accountability are essential for India’s digital future. However, they diverge on three main fronts: (1) how prescriptive regulation should be, ranging from mandatory safeguards to balanced, light‑touch frameworks; (2) whether a single open standard can satisfy all sectors or whether industry‑specific, flexible solutions are required; (3) the relative weight of industry bodies versus statutory regulation in driving adoption. These disagreements are moderate rather than polarising, reflecting differing strategic preferences rather than fundamental opposition.

Moderate disagreement – the differing views on regulatory intensity, standardisation strategy, and governance mechanisms could lead to fragmented implementation unless a coordinated consensus is reached. The implications are that policy makers and industry leaders must negotiate a hybrid model that blends baseline regulatory requirements with adaptable standards and strong industry‑body participation to avoid silos and ensure inclusive, trustworthy AI deployment.

Partial Agreements
All four speakers share the goal of achieving transparency and accountability in AI systems. Andy pushes for a global open standard (C2PA) that tags content with provenance [61-66]. Prativa describes internal product‑level governance (the ART philosophy) that embeds traceability directly into Adobe tools [197-210]. Satya highlights the necessity of a human‑in‑the‑loop safety mechanism in aviation AI [360-362]. Vishal focuses on transaction‑level transparency, providing users with explanations for AI‑driven decisions [292-294]. While the end‑goal of trustworthy AI is common, the speakers diverge on the mechanisms—global standards, internal product design, operational human oversight, or user‑facing explanations.
Speakers: Andy Parsons, Prativa Mohapatra, Satya Ramaswamy, Vishal Anand Kanvaty
C2PA content credentials as an open, interoperable standard (Andy Parsons) Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra) Human‑in‑the‑loop oversight for safety‑critical AI (Satya Ramaswamy) Transparent explanations for declined transactions (Vishal Anand Kanvaty)
Takeaways
Key takeaways
Responsible AI must move from high‑level principles to provable, operational practice within enterprises. Transparency and provenance of AI‑generated content are essential; open, interoperable standards such as C2PA enable this at scale. Effective AI governance requires coordinated people, process, technology, and industry‑body layers – not a single checklist. Regulatory developments (EU AI Act, India IT rules, state‑level AI laws) are viewed as catalysts that should coexist with innovation. Sector‑specific implementations illustrate practical approaches: Air India’s guarded generative‑AI assistant, NPCI’s fraud‑detection model with transparent explanations, RPG Group’s flexible, scalable governance across diverse units. There is a risk of a divide between large enterprises and MSMEs; open, free frameworks and industry‑wide dissemination are needed to ensure inclusive adoption.
Resolutions and action items
FICCI pledged to continue the dialogue and translate insights into concrete actions for the Indian ecosystem. Adobe highlighted its ART (Accountability, Responsibility, Transparency) methodology and will continue embedding it in product pipelines such as Firefly and Acrobat. Air India committed to maintain continuous monitoring and safety guardrails for its generative‑AI virtual assistant, leveraging partner technologies for risk mitigation. NPCI will expand its transparent AI‑driven fraud‑explanation service and align it with emerging regulatory frameworks. Industry bodies (e.g., C2PA, FICCI, sector associations) agreed to promote open standards and share governance templates to help MSMEs adopt responsible AI.
Unresolved issues
How to harmonise global AI regulations (EU AI Act, OECD, UNESCO) with India’s emerging policies and the diverse needs of different sectors. The precise balance between industry‑led self‑regulation and mandatory regulatory intervention remains unsettled. Effective mechanisms for consumer awareness of provenance symbols and UI design for transparency are still under development. Specific approaches for integrating human‑in‑the‑loop oversight in high‑volume payment fraud detection were mentioned but not detailed. Scalable, low‑cost governance frameworks that MSMEs can realistically implement without extensive legal teams were not fully resolved.
Suggested compromises
Adopt a hybrid model where open, industry‑driven standards provide the baseline, complemented by proportionate regulatory requirements to ensure safety without stifling innovation. Implement safety guardrails that are adjustable – tighter for high‑risk contexts (aviation) and lighter for consumer‑facing services, balancing risk and user convenience. Encourage large enterprises to create reusable, open‑source governance templates that can be cascaded to smaller firms via industry bodies. Regulators to act as catalysts, offering guidance and frameworks while allowing flexibility for companies to innovate within those boundaries.
Thought Provoking Comments
2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity.
Frames the timeline as a decisive turning point, moving responsible AI from a nice‑to‑have to a business imperative, which sets a forward‑looking urgency for the whole panel.
Established the central theme of the session and prompted other speakers to discuss concrete ways to meet that 2026 deadline, leading to deeper talks on standards, compliance and operationalisation.
Speaker: Andy Parsons
The question is no longer ‘should we be responsible with AI?’ but ‘can your systems actually prove that you have been responsible with AI?’
Shifts the debate from philosophical agreement to measurable proof, introducing the concept of ‘provable practice’ that challenges participants to think about auditability and evidence.
Triggered a focus on provenance, metadata and standards (C2PA) and caused panelists like Prativa and Amol to reference how their organisations embed traceability into products.
Speaker: Andy Parsons
We built an open, cross‑industry standard – the C2PA content credentials – that embeds provenance directly into media files, so anyone can verify who made it, with what model, and when.
Introduces a concrete, industry‑wide solution that is non‑proprietary, highlighting collaboration over competition and providing a tangible tool for accountability.
Guided the discussion toward the importance of open standards, with later speakers (e.g., Amol and Prativa) echoing the need for interoperable frameworks and citing the C2PA as a model for other sectors.
Speaker: Andy Parsons
One size doesn’t fit all – we need a ‘bring your own AI’ approach, with orchestration across all AI layers and people as the most critical stakeholder.
Challenges the notion of a single, monolithic AI governance model, emphasizing flexibility, modularity, and the human factor in responsible AI deployment.
Shifted the conversation from generic principles to practical implementation strategies, prompting Prativa to discuss product‑specific safeguards and Satya to illustrate how Air India balances flexibility with safety.
Speaker: Amol Deshpande
Our AI governance philosophy is ART – Accountability, Responsibility, Transparency – and we embed it into every product through hundreds of validation steps.
Provides a memorable framework (ART) that simplifies complex governance concepts and demonstrates how Adobe operationalises them, making the abstract tangible.
Reinforced Andy’s provable practice theme, gave the panel a concrete example (Firefly’s nutrition labels), and encouraged other speakers to share analogous mechanisms in their domains.
Speaker: Prativa Mohapatra
Our generative AI virtual assistant has handled 13.5 million queries with a 97 % autonomous success rate, and we even use generative AI to monitor its own performance for safety.
Offers a real‑world, high‑scale case study that illustrates both the benefits and the safety challenges of AI, and introduces the novel idea of AI‑in‑the‑loop monitoring.
Moved the discussion from theory to operational reality, prompting follow‑up questions about risk management, prompting Vishal to discuss transparency in payments, and reinforcing the need for robust guardrails.
Speaker: Dr. Satya Ramaswamy
We built a small language model that can explain why a transaction was declined, giving customers transparent reasons while keeping false‑positive rates low.
Shows how transparency can be delivered at massive scale in a critical financial context, linking technical design (explainability) with consumer trust.
Introduced the payments perspective, expanding the conversation beyond media to financial services, and highlighted the practical trade‑offs between accuracy and user experience.
Speaker: Vishal Anand Kanwat
Responsibility is no longer a compliance checklist; it is a commitment to shared human values – we choose what we create, not just what we can create.
Elevates the discussion to a philosophical level, reminding participants that ethical intent underpins technical measures, and framing responsible AI as a value‑driven choice.
Served as a concluding synthesis, reinforcing earlier points about standards, governance, and human‑centric design, and set the tone for future collaborative actions beyond the session.
Speaker: Sarika Guliani
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that moved the dialogue from abstract principles to concrete, measurable practices. Andy Parsons’ framing of 2026 as the deadline for provable responsible AI and his introduction of the C2PA standard set the agenda, prompting panelists to showcase how their organisations translate those ideas into product‑level safeguards (Prativa’s ART framework, Satya’s airline AI assistant, Vishal’s transparent payment explanations). Amol’s ‘bring your own AI’ and emphasis on people added nuance, steering the conversation toward flexible, human‑centric governance. Each of these insights sparked new sub‑topics—standards, auditability, scalability, and the balance between regulation and innovation—thereby deepening the analysis and shaping a cohesive narrative that blended technical solutions with ethical imperatives.

Follow-up Questions
What are the implementation costs and day‑to‑day operational expenses of adopting responsible AI practices?
Understanding financial implications is crucial for enterprises to plan and justify responsible AI investments.
Speaker: Andy Parsons
How can organizations demonstrably prove that their AI systems are responsible and compliant?
A measurable, auditable proof of responsibility is needed to move from principles to provable practice.
Speaker: Andy Parsons
How can consumer awareness of content‑provenance symbols (e.g., C2PA badge) be increased, and what UI designs are most effective?
Early consumer awareness is limited; effective UI can drive trust and adoption of provenance standards.
Speaker: Andy Parsons
What business case can be built for content provenance to make it financially compelling for enterprises?
Enterprises need clear ROI or value‑proposition arguments to invest in provenance infrastructure.
Speaker: Andy Parsons
How can standards adoption be improved given that many social‑media platforms strip metadata and provenance information?
Metadata stripping undermines transparency; research is needed on platform policies and technical solutions.
Speaker: Andy Parsons
What approaches allow embedding safety controls (the “safety knob”) in generative AI without degrading user experience?
Balancing safety with convenience is critical for customer‑facing AI services like virtual assistants.
Speaker: Dr. Satya Ramaswamy
How can prompt‑firewall and centralized control mechanisms be standardized across industries?
Standardized prompt controls could help prevent jailbreaks and misuse, but industry‑wide norms are lacking.
Speaker: Dr. Satya Ramaswamy
How can responsible‑AI frameworks be made accessible and affordable for MSMEs?
SMEs lack resources for extensive governance; scalable, low‑cost frameworks are needed to avoid a divide.
Speaker: Prativa Mohapatra
What role should industry bodies play in disseminating responsible‑AI templates and best practices to diverse sectors?
Industry bodies can cascade standards, but mechanisms for effective knowledge transfer require study.
Speaker: Amol Deshpande
How can global best practices (EU AI Act, UNESCO, OECD, etc.) be harmonized with India’s emerging regulatory landscape (DPDP Act, IT rules, etc.)?
Alignment is needed to avoid conflicting obligations and to create a coherent national AI governance model.
Speaker: Shantheri Mallaya
Is industry‑led governance realistically possible for AI at scale, or is regulatory intervention inevitable?
Determining the balance between self‑regulation and mandatory rules is essential for sustainable AI ecosystems.
Speaker: Vishal Anand Kanvaty
What metrics and governance models ensure fairness, accountability, and transparency in AI‑driven fraud detection for payment systems?
Payments require precise, unbiased AI; research is needed on appropriate performance and fairness metrics.
Speaker: Vishal Anand Kanvaty
How can AI transparency be integrated into legacy systems across sectors such as aviation, payments, and creative tools?
Legacy environments pose technical challenges for embedding provenance and auditability.
Speaker: Multiple (Andy Parsons, Dr. Satya Ramaswamy, Prativa Mohapatra)
What impact does the lack of consumer‑facing provenance symbols have on trust, and how can this impact be measured?
Empirical evidence is needed to justify investments in visible provenance cues.
Speaker: Andy Parsons
What barriers exist to global adoption of open standards like C2PA, and how can they be overcome?
Understanding technical, legal, and market obstacles is key to widespread standard uptake.
Speaker: Andy Parsons
How can AI governance frameworks be tailored for sector‑specific needs while maintaining interoperability?
Sector diversity requires flexible yet compatible governance models.
Speaker: Amol Deshpande
What are the implications of AI‑generated misinformation in a multilingual, culturally diverse market like India?
Misinformation risk is amplified by language and cultural variety; targeted research is needed.
Speaker: Andy Parsons
How can legal and compliance teams be upskilled efficiently to handle AI governance responsibilities?
Rapid skill development is essential for enterprises to meet emerging AI regulations.
Speaker: Prativa Mohapatra
What is the optimal balance between AI automation and human‑in‑the‑loop oversight for safety‑critical domains?
Ensuring safety while leveraging AI efficiency requires clear guidelines for human intervention.
Speaker: Dr. Satya Ramaswamy
How can the effectiveness of AI transparency measures be evaluated empirically across different industries?
Metrics and studies are needed to assess whether transparency initiatives actually build trust and reduce risk.
Speaker: General (multiple participants)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Who Watches the Watchers Building Trust in AI Governance

Who Watches the Watchers Building Trust in AI Governance

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel, introduced by Gregory C. Allen, featured Stephen Clare, co-lead author of the International AI Safety Report, Hiroki Hibuka, a Japanese AI policy expert, and Shana Mansbach of the think-tank Fathom, which convenes AI governance discussions [1-3][4-8][9-10]. Clare explained that the report, originating from the 2023 Bletchley Safety Summit, is meant to be an IPCC-style evidence base for AI governance and is backed by more than 30 countries and intergovernmental bodies [19-22].


He noted that many risks have moved from theoretical to observable, with billions of users and incidents such as deepfakes and AI-enabled cyber attacks prompting a surge in risk-management techniques [25-31]. Clare highlighted that model jailbreaks have become substantially harder, citing the UK Security Institute’s shift from minutes to several hours to find universal jailbreaks for the latest models [42-45]. Nevertheless, he warned that safeguards remain vulnerable to skilled actors, that implementation is uneven across companies, and that ensuring broad compliance is now a pressing governance challenge [51-57][58].


Hiroki contrasted hard-law and soft-law strategies, arguing that most jurisdictions already have sector-specific regulations (privacy, copyright, finance, etc.) and the key question is how to adapt them rather than create entirely new AI statutes [82-86]. He described the EU’s AI Act versus Japan’s and the US’s more sector-specific, “exempt” versus “exposed” approaches, noting Japan’s preference for pre-emptive rules and the need for more agile, multi-stakeholder soft-law mechanisms [87-95][96-100]. He emphasized the difficulty of evaluating values such as privacy or fairness and the lack of benchmark standards worldwide [98-100].


Mansbach argued that the rapid rise in AI capabilities has created a systemic trust deficit for the public, deployers, regulators and developers, which traditional command-and-control governance cannot address because of speed and technical-capacity gaps [105-113][114-118]. Fathom proposes a government-authorized marketplace of independent verification organizations (IVOs) that would assess outcomes such as child safety, data privacy, controllability and interpretability, providing a rebuttable presumption of a heightened standard of care [116-122][124-128][173-179]. She identified liability clarity, insurance eligibility and market advantage as three incentives for entities to seek verification, likening the model to UL or Underwriters Lab certifications [221-230][231-239].


Gregory highlighted that without insurance or liability frameworks AI adoption could be stifled, and that analogies such as AS9100 in aerospace or the NHTSA’s star-rating system illustrate how third-party standards can drive safety [207-214][330-334]. The panel agreed that current evaluation tools are narrow and quickly become outdated, underscoring the urgency of developing flexible, outcome-based standards and independent audits to keep pace with evolving AI systems [258-266][270-276][292-298]. Overall, they concluded that a layered, outcomes-focused verification ecosystem, supported by legal, insurance and market incentives, is essential to bridge the trust gap and enable effective AI governance [171-179][221-230][292-298].


Keypoints


Major discussion points


The International AI Safety Report as the new baseline for AI governance – The panel repeatedly cites the report as the “foundation” for current conversations, noting that AI risks have moved from theoretical to observable real-world impacts (e.g., deep-fakes, cyber-attacks) and that technical safeguards are becoming harder to bypass, yet still have vulnerabilities that raise urgent governance questions. [2-4][24-31][33-41][50-58]


Divergent global regulatory approaches – Participants compare the EU’s hard-law AI Act with Japan’s sector-specific, pre-emptive soft-law model and the United States’ high-level, principle-based regime, emphasizing that the real issue is how existing laws (privacy, copyright, sector regulations) are updated or supplemented rather than whether new AI-specific statutes are needed. [80-88][89-96]


The “trust problem” and the proposal of independent verification organizations (IVOs) – A central theme is the lack of trust for the public, deployers, regulators, and developers. The panel proposes a government-authorized marketplace of IVOs that issue outcomes-based certifications, which can clarify standards of care, unlock insurance, and create market incentives (e.g., “seal of approval” similar to UL). [106-112][117-124][125-130][171-178][221-230][231-239]


Practical challenges of auditing and evaluation – Audits are costly, lack clear economic incentives, and suffer from an “evaluation gap” because existing benchmarks are narrow and quickly become outdated. The discussion highlights the need for adaptable, incentive-aligned testing frameworks and more transparent, third-party evaluation capacity. [187-192][197-199][255-268][270-284]


Layered responsibility across the AI ecosystem – Rather than assigning safety to a single actor, the speakers argue for a “defense-in-depth” model that distributes duties among developers, downstream deployers, ecosystem monitors, and end-users-mirroring analogies to automotive and aerospace safety standards. [155-162][158-166][161-168]


Overall purpose / goal of the discussion


The panel’s aim was to take stock of where AI governance stands in 2026, using the International AI Safety Report as a common reference point, to compare how different jurisdictions are handling regulation, and to explore innovative governance mechanisms-particularly independent, outcomes-based verification-that can bridge the trust gap, align incentives, and support effective, scalable oversight of rapidly advancing AI systems.


Overall tone


The conversation began with a celebratory, appreciative tone toward the report and the progress made since the Bletchley Summit. As the dialogue progressed, the tone shifted to a more urgent and problem-focused stance, highlighting gaps in technical safeguards, regulatory inconsistencies, and incentive misalignments. By the end, the tone became constructive and forward-looking, emphasizing collaborative solutions (IVOs, market incentives, analogies to other safety regimes) while maintaining a realistic acknowledgment of the challenges ahead.


Speakers

Gregory C. Allen


Area of expertise: AI governance, policy discussion moderation


Role/Title: Moderator/Host of the panel discussion [S4]


Stephen Clare


Area of expertise: AI safety, technical risk management, AI governance


Role/Title: Co-lead author of the International AI Safety Report; co-lead writer of the report [S3]


Hiroki Hibuka


Area of expertise: AI policy, law, and governance, especially in Japan


Role/Title: Research Professor, Kyoto University Graduate School of Law; former Japanese government policymaker; non-resident senior associate at CSIS [S1]


Shana Mansbach


Area of expertise: AI governance, independent verification, policy innovation


Role/Title: Vice President of Strategy and Communications, Fathom [S5]


Additional speakers:


Karina Prunkle – Co-lead writer of the International AI Safety Report (mentioned in the discussion).


Full session reportComprehensive analysis and detailed insights

Gregory C. Allen opened the session by introducing the four panelists and noting Stephen Clare’s contribution to the International AI Safety Report as the “foundation” for AI-governance discussions in the coming year [1-4]. He also highlighted Hiroki Hibuka’s expertise on Japanese AI policy [5-8] and mentioned Shana Mansbach’s role at the young think-tank Fathom, a leading convenor of the ASHFE conference series [9-10].


Stephen Clare then outlined the origins and purpose of the International AI Safety Report. Drafted as the shared evidence base for the 2023 Bletchley Safety Summit and modelled on IPCC reports, the document is backed by more than thirty countries and intergovernmental organisations [18-22]. Its 2026 message is that “the rubber is really hitting the road”: risks once theoretical are now observable at scale, with a billion users worldwide and concrete harms such as deep-fake proliferation and AI-enabled cyber-attacks [24-31]. Clare reported that technical safeguards have improved markedly-modern models now require seven to ten hours for a universal jailbreak, compared with minutes for earlier systems [42-45]-and that twelve leading AI developers publish frontier safety frameworks, indicating greater transparency [48-49]. He cautioned, however, that safeguards remain vulnerable to skilled actors, implementation is uneven, and the key governance challenge is ensuring broad compliance and addressing non-adoption [51-58].


Hiroki Hibuka provided a comparative overview of global regulatory approaches. He emphasized that all jurisdictions already contain a mix of hard-law and soft-law instruments (privacy, copyright, sector-specific rules) [80-86] and argued that the policy task is to update these existing rules rather than create brand-new AI statutes. He contrasted the EU’s AI Act (hard-law, high-risk-focused) with Japan’s pre-emptive, sector-specific soft-law approach and the United States’ “exposed” principle-based regime that relies on high-level guidelines and post-hoc litigation [87-96]. Hibuka noted the difficulty of evaluating abstract values such as privacy, transparency and fairness, pointing to the current lack of benchmark standards worldwide [98-100]. He further observed that democratic debate is needed to decide acceptable safety levels (e.g., how many deaths are tolerable for autonomous vehicles) and that test-measure design-such as comparing accident rates on a straight highway versus a complex city-is itself a policy question [300-310]. Hibuka also highlighted public procurement as a powerful market pull: governments could require verified AI in contracts, creating a strong incentive for firms to seek certification [300-310].


Gregory then asked Shana Mansbach to explain Fathom’s perspective on the emerging “trust problem”. She described how the surge in model capabilities has generated uncertainty for the public, deployers, regulators and developers, producing a systemic lack of confidence that AI systems work safely, securely and as advertised [105-108]. She argued that traditional command-and-control governance cannot keep pace with AI’s speed or the scarcity of technical expertise outside frontier labs [111-114].


Mansbach proposed an outcomes-based marketplace of government-authorised independent verification organisations (IVOs). Regulators would define desired outcomes-such as child safety, data-privacy, controllability and interpretability-and IVOs would conduct up-to-date testing to certify that AI systems meet those outcomes [117-122]. She discussed the concept of a “standard of care” that verification could establish, providing a rebuttable presumption of heightened care and clarifying liability before any harm occurs [173-179]. Mansbach identified three primary incentives for organisations to seek verification: (i) liability clarity, (ii) eligibility for insurance coverage (insurers are currently refusing to underwrite AI-enabled products), and (iii) a market advantage akin to UL or Underwriters Lab seals, which could become decisive for buyers such as school superintendents [221-230][231-239]. She qualified these analogues as partial rather than perfect matches to existing safety-certification models [231-239].


Gregory linked these ideas to existing safety-standard mechanisms, noting that in aerospace the AS9100 certification is required for insurance and that insurers’ refusal to cover AI-driven activities could act as a de-facto regulatory lever [207-214][240-250]. He also drew an analogy to the U.S. National Highway Traffic Safety Administration’s star-rating system for vehicles, suggesting a similar rating could guide AI-system adoption [330-334].


Stephen elaborated on a “layered, defence-in-depth” responsibility model. He argued that no single actor can bear full responsibility: developers should embed training techniques to reduce dangerous outputs, downstream deployers should implement monitoring and classification systems, and ecosystem-wide monitors should track AI-generated content across borders. He stressed the need for societal-level resilience-hardening digital infrastructure against AI-enhanced cyber-attacks-rather than attempting to prevent every harmful use [155-168].


The panel then examined incentives for independent audits. Hibuka reiterated that without clear economic benefits corporate executives are unlikely to pursue verification, citing autonomous-vehicle certification as a strong market driver [187-192]. He reiterated that public procurement could provide a powerful pull if governments required verified AI for contracts [300-310], and noted that insurance could serve as another carrot, though current lack of AI-specific coverage limits this lever [197-199][318-328]. Stephen highlighted a significant “evaluation gap”: existing benchmarks are narrow, quickly become outdated, and fail to capture the breadth of real-world use cases, as many evaluations consist of static question sets that do not reflect the stochastic, multi-turn nature of modern models [255-267]. Shana agreed, adding that testing is intrinsically hard because model outputs vary across runs and downstream impacts can differ dramatically between users (e.g., a harmful suggestion that may be benign for most but catastrophic for a vulnerable individual) [270-277]. She argued that a competitive IVO marketplace would incentivise continual improvement of testing tools, creating a “race to the top” similar to how UL certification drives product safety in other sectors [285-290].


Gregory asked how consensus on risks could be turned into formal standards. Stephen responded that while the report provides a state-of-the-science baseline, there is still a lack of agreed-upon best practices, and any standards would need to evolve rapidly to keep pace with model capabilities [292-298][255-267].


Across the panel, the participants repeatedly referred to the International AI Safety Report as a foundational baseline for current AI-governance discussions [2-3][19-23]. They agreed that technical safeguards have improved yet remain vulnerable and unevenly applied [35-40][51-57]; organisational safety frameworks are inconsistent, creating a need for outcomes-based verification [48-57][111-130]; and insurance can serve as a powerful lever to drive adoption of verification standards [221-231][244-250]. Disagreements centred on the primary economic incentive for audits (public procurement versus insurance versus market pressure) [187-195][318-328][221-238] and on whether existing hard- and soft-law regimes are sufficient or new governance mechanisms are required [80-86][48-57][62-65].


Key take-aways


1. The International AI Safety Report is a foundational baseline confirming that AI risks are now material.


2. Technical safeguards are stronger but remain vulnerable and unevenly applied.


3. Global regulatory approaches differ, yet all must adapt existing hard- and soft-law rules to cover AI.


4. A trust deficit exists across stakeholders; an outcomes-based IVO marketplace could mitigate it by providing liability clarity, insurance eligibility, and market advantage.


5. Safety responsibility must be layered across developers, deployers and societal monitors.


6. Incentives such as insurance underwriting, public procurement and consumer-facing seals are essential to motivate audits.


7. Current evaluation benchmarks are narrow and outdated, necessitating dynamic, multi-turn testing tools.


8. Lessons from aerospace (AS9100), automotive safety ratings and UL certification can inform AI-safety standards.


Proposed actions


a. Establish a government-authorised IVO marketplace.


b. Encourage regulators and insurers to tie compliance with IVO verification to liability standards, insurance premiums and procurement contracts.


c. Develop sector-specific safety standards that combine hard law, soft law and voluntary frameworks.


d. Increase transparency from AI labs to reduce information asymmetry.


Unresolved issues include designing economically viable incentives, defining a universal standard of care, creating up-to-date evaluation methodologies that capture stochastic, multi-turn risks, and ensuring third-party auditors retain expertise as technology evolves. The panel suggested a hybrid approach that blends layered responsibility, flexible outcomes-based standards and market-driven incentives to achieve scalable, trustworthy AI governance.


Session transcriptComplete transcript of the session
Gregory C. Allen

Again, to my immediate right, we have Stephen Clare, who wrote the International AI Safety Report as the co -lead author, if I’m not mistaken. And he earned that applause, because that report is a remarkable document that I do think is the foundation upon which all conversations about AI governance now must rest for the next year. It’s the sort of minimum amount of knowledge that you must have to participate in the conversation, which I think is really a tribute to him. Then we have Hiroki Hibuka, who is currently a research professor at the Kyoto University Graduate School of Law, and was also deeply involved in drafting Japan’s first set of soft law regulations, and is an expert on all things AI, but also especially astute at what’s going on in Japan.

We also have a privilege of collaborating with him at CSIS, where he’s a non -resident senior associate. And I must say, he is probably the best person writing about Japanese AI policy in Japanese, but he is definitely the best person writing about it in English. And so I often tell Hiroki that, like, if he doesn’t write about it, nobody in Washington, D .C. knows about it. So it’s important, his work. And then finally, we have Shana Mansbach, who’s the vice president of strategy and communications at Fathom, which is a young think tank, started only two years ago, but has already succeeded as one of the best conveners of the ASHFE conference series on AI, and also now leading a policy initiative, which I think she’s going to tell us all about.

So without further ado, I’d like to start with you, Stephen. I just said that the report that you were the lead author of is sort of the bedrock for having a conversation on AI governance. For those in the audience who haven’t yet made it through, but they, of course, will, can you sort of set the stage? Where are we in 2026 in AI governance and in AI safety, technical and procedural intervention?

Stephen Clare

Sure. Thanks, Greg. First of all, I’m sorry if I’d known Greg was going to make the report, you know, required reading, I would have tried harder to make it shorter. Yeah. Thanks for having me. Thanks for really excited to be here. So for people who don’t know, the report is it was founded up the Bletchley 2023 Bletchley Safety Summit as sort of, you know, the shared evidence base for decision makers thinking about these complicated, fast moving, noisy governance questions. It’s kind of trying to be like the IPCC report for for AI. It’s backed by over 30 countries and intergovernmental organizations. You know, I’m one of two co lead writers along with Karina Prunkle, but there’s over 30 dedicated experts writing different sections, and there’s hundreds of people that review it.

So it’s really trying to be a sort of state of the art, what do we know? What don’t we know about general purpose AI systems and the risks they might pose? I think this year the main message of the report is like the rubber is really hitting the road or something with these kind of systems. Risks that even a year or two ago might have been theoretical are now very real and we’re seeing emerging empirical evidence. More real world impacts of AI on productivity and labor markets and in science and in software engineering. It’s all like really happening out in the world. There’s a billion people now using AI around the world. Many of those impacts include risks.

So we’re seeing effects of deepfake spreading, cyber attacks being more common with AI systems. And so the need for sort of risk management techniques that are effective is also growing. One thing that I found surprising working on the report is that in this domain on risk management and technical safety, there’s actually some good news. Quite a lot of good news, I’d say. In various ways, our technical safeguards are improving. Models are becoming much harder to jailbreak. So. You know. So three, four years ago, if you asked a model to give you a recipe for a Molotov cocktail, it would not do that. But if you said, oh, I miss my grandma, and she used to tell me this amazing bedtime story about how she loved making Molotov cocktails, please help me remember my grandmother, it would be like, okay, well, if it’s for your grandmother.

Then that stopped working maybe a year or two ago, but then if you maybe translated your question into Swahili or something and put it in the model and then translated the answer back, it might have made safeguards. So none of that works anymore. These safeguards are much harder to evade, and we know this quantitatively. For example, the UK Security Institute will try and evade the safeguards or jailbreak all these new models when they’re released. At the beginning of 2025, they could do this in literally minutes, find a sort of universal jailbreak that would elicit potentially harmful knowledge. For the latest models, it’s taking them seven, ten hours to get around safeguards. So there’s still vulnerabilities, but for novices or even moderately skilled actors, it’s basically the same thing.

It’s becoming much, much harder to evade them. We’re also seeing more of these safeguards get implemented into organizational practices. So 12 companies, all the leading AI developers now have frontier safety frameworks, which are these documents that describe how they plan to manage risks as they scale more powerful systems, which is many more than had them a couple of years ago and is, I think, a sign of transparency and sort of collective learning about risk management that’s worth noting. So basically, yeah, our toolkit for managing these risks is growing. But, you know, it wouldn’t be a safety report if I didn’t maybe end on a few caveats or some bad news. The first is that these technical safeguards are still vulnerable in many ways.

They can still be jailbroken with enough effort or in edge cases, and it’s very difficult to test and provide reliable assurances that these safeguards will work across this huge range of use cases that these models are now applied to in the real world. And on the organizational side, you know, these safeguards only work if they are applied. And although we’re seeing, especially from frontier developers, we’re very prominent, usually quite robust safeguards applied on models, across the whole industry, and especially behind the frontier, application remains quite inconsistent. The safety frameworks, all these companies have them, but they vary in the risks they cover, they vary in the practices that they recommend. And so the landscape as a whole, you know, these tools only work if they are applied.

And we still see that, some vulnerabilities across the landscape, which I think turns this technical challenge, that points towards the governance challenge of how do we assure broader adoption, how do you ensure compliance, what do you do when there’s a lack of compliance. We’re sort of facing these questions, and again, because these risks and the impacts are now not something that we can sort of push down the road anymore, I think, for future years, the governance questions are becoming a lot more urgent.

Gregory C. Allen

Terrific. And if I could contrast what you said with what we might have said if we were having this conversation back at the Bletchley Park AI Summit. But it’s almost like the only good news on AI safety, AI security, and AI governance at Bletchley was, well, at least we’re all here talking about it. And now, three years later, the good news is we’ve done a lot about it. We have techniques that can provide demonstrable increases in safety. We don’t know everything that we need to work, but we know a lot of stuff that does work. And really, a lot of the challenges, I think, as the report says, it’s now in the hands of policymakers to make sure that these safeguards get implemented robustly and diversely.

So with that, I now want to turn to Hiroki, who I hope can give us a state of where we are in the story of AI governance around the world. If the next steps are really in the hands of policymakers, where are we globally?

Hiroki Hibuka

Thank you, Greg. And again, congratulations. Stephen was the publisher of the great report. And I think, first of all, I feel very glad that now the discussion on AI governance is such advanced compared to three years ago. I’m a lawyer and I’m a former policymaker. I worked for the Japanese government for four years, designing the Japanese AI policies, mainly in terms of regulation and governance. And as a lawyer and policymaker, the question after reading the report is, where is the end? And to what extent stakeholders have to manage the risks? Because in the end, you can’t remove all the risks. AI is black box and the technology advances so fast. And even though there is advance and progress of Godwins, the next day you may find another risk.

So there is no end to the story of how regulators should design the regulations. That is the main question. All countries. Countries are facing and different nations, regions take different approaches. Maybe the most famous regulation is the EU AI Act. And in that context, a lot of people say, hey, EU takes a hard law regulatory approach on AIs while Japan or UK or United States takes a software approach. But I think it’s a completely wrong understanding of the regulatory framework because, as you know, there are already lots of regulations that can be applied to AI systems. Privacy protection laws, copyright laws, or sector -specific laws such as finance, automotive or healthcare. We already have a lot of regulations out there.

So the real question is not whether or not to regulate AIs, but the real question is how to update our existing regulations and whether or not we need additional regulations targeting AI systems. In addition to the existing regulatory framework, so in that sense all countries take the hard law approach and also all countries have soft laws because European Union there are a lot of technical standards to implement the EU AI Act that are now under discussion but anyways all countries have both hard laws and soft laws that is the start of the discussion and then when we compare EU approach and Japan approach the clear difference is whether to regulate AI holistically or not sector -specific and when I compare the Japanese policy and the US policy we are on the same position as to taking a sector -specific regulation the main difference I understand is whether you prioritize the exempt approach or exposed approach the US takes more exposed approach you can do whatever you want to do and the regulation is usually very high level the principle is very high But once you have a problem, if you damage others’ properties or lives, then you go to the court and you fight in the court.

The Japanese society is not like that. In Japan, actually the number of losses is very low. People prefer to set the rules in advance. Japanese companies are very, very good at complying with the given rules. But they are not very good at creating their own governance mechanisms or explaining to stakeholders why you are doing that. And now Japanese stakeholders are starting to realize that it doesn’t work. So we need to have more agile and multi -stakeholder approach. So we are trying to leverage the power of soft laws, negotiating among different stakeholders, and give the standards, guidances. But in the end, again, if you violate the existing hard laws, of course you will be sanctioned. So that’s the main differences in American approach and Japan approaches.

And in the end, all countries are facing difficult questions of how to deal with this cutting -edge technologies that are black box and there are unlimited risk scenarios. And sometimes we don’t know how to evaluate the values such as privacy or transparency or fairness. There has been no clear benchmark standards so far in the society. So how to design those benchmarks and regulation methods are the challenges all countries are facing.

Gregory C. Allen

Terrific, Hiroki. And Shaina, I know you have a unique perspective on this because your organization is now proposing sort of additional models of AI governance that are not really reflected in existing law, whether in the United States or Europe or Japan or India. So walk us through what you see as the important work we’re doing now.

Shana Mansbach

Sure. My panelists have set me up very well to say this. So I think as the International AI Safety Report shows, the capabilities around these models are surging. And as the capabilities surge, so too does the uncertainty around the risks, by which I mean, do these systems work safely, securely, and as advertised? That uncertainty creates a trust problem, a trust problem for the public, which doesn’t have a way of figuring out what is actually safe, a trust problem for deployers, by which I mean hospital systems, retail, banks, who want to and indeed need to use these systems, but have no idea what they can actually trust. So there’s a trust problem for the regulators, too.

They don’t know, how do you confer not just trust, but how do you confer earned trust? And I would say there’s a trust problem for the developers also, because if and as trust starts to grow, there’s a trust problem for So if the trust starts to decline, you’re going to see adoption decline as well, so this is something that developers should be focused on too. The current approach is just not the current approach to tech governance is not equipped to handle this trust problem very well. Traditional command and control governance says here are the rules, here are all the things you have to do, here are the procedures, here is what compliance actually here’s what compliance actually looks like.

There are a bunch of problems with this approach in the context of AI, but I’ll focus on two, which is the speed problem. AI moves really, really quickly, and even well -intentioned regulations are going to become outdated very, very quickly, and then there’s the technical capacity problem. Even with the rise of the AI safety institutes, which are doing amazing work, the talents, the expertise for understanding these systems and understanding their risks is largely concentrated in the frontier labs, which of course leads some people to say, well, let’s just go to the frontier labs. They can regulate themselves. I don’t think I have to spend too much time explaining why there are problems with that approach but it’s simple incentives I think all of us know people in the labs who are doing amazing, amazing work they are the people who make sure that I can because of them I sleep better at night but the incentives are just not there there are always going to be trade -offs between investing in safety testing and tooling and investing in development so we’re going to have problems with self -regulation in terms of addressing that trust gap so where does that lead us?

at Fathom, my organization, we’re very focused on coming up with new models that can solve this trust gap so we’re very focused on independent verification specifically the marketplace of independent verification organizations by which I mean a government -authorized and overseen marketplace of independent verifiers which are trying to be charged with creating testing and tooling to determine whether these AI systems are actually safe The difference here is that this is an outcomes -based approach. Instead of, as I said, having procedures, here are the rules, here are all the things you need to do, here are all the boxes you must check to be certified as being good, you have an outcomes -based approach where you have a government saying, here are the things that we care about.

We care about children’s safety. We care about data privacy and protection. We care about controllability and interpretability. And then you have independent verifiers that can actually go out, do the testing, have updated testing constantly to make sure that those outcomes are being met. We think that independent verification solves for a couple of these deficits in the trust context. First, they are independent. The labs are not grading their own homework. Second, democratic accountability. You have governments that are creating outcomes instead of the industry doing it itself. Third, flexibility. Under this system, the IVOs, independent verification organizations, are constantly updating their testing and criteria to make sure that they’re keeping up with the pace of technology and the pace of risks as well.

And I think the fourth thing, which is pretty interesting, is it creates a race to the top here. Right now, the only people working on safety testing and tooling are in the labs. What we’re envisioning is a marketplace that incentivizes ever better testing and tooling here. I could talk about IVOs for days and days, but let me just end on one point. I was talking to Greg about this earlier, and Greg asked, are there analogous systems or industries or sectors that we could talk about? And I said, yeah, sort of. I mean, in America, we have Underwriters Lab. There’s LEED certification. There are some analogies. But the honest answer is there’s not a perfect analogy.

We have had the same regulatory system for the last century. And I think that with the rise of AI, we’re seeing that system is no longer built for purpose. And when we try to use old systems, hard law, soft law, any of these things, we’re really struggling to make it work. So what I’m trying to do, what I’d encourage all of us to do is to say, you know, we do need to think a little bit differently. Because this is what this technology in this time calls for.

Gregory C. Allen

Well, that’s great. So there’s a few points I want to pull together there. The first is, you know, as Hiroki pointed out, in the U .S. system, liability law looms extremely large, right? The lawsuits at the end of this story when things go wrong. And when you have, as, for example, ChatGPT does, 800 million weekly average users, something’s going to go wrong every week, right? And the question is… How is that going to intersect with our existing body of regulation? How is that going to intersect with liability law? The second thing is this is going to, because we’re talking about these general purpose technologies, this is going to be adopted in so many different sectors of the economy.

And right now, as Shana pointed out, the number of people who have, you know, Steven’s expertise on what it takes to really make AI systems safe and well -governed and perform reliably as intended across the whole range of potential applications, that’s not a lot of humans on planet Earth who are good at that stuff. And because these AI models are going to be deployed in just about every sector of the economy, we need some level of those capabilities in every sector of the economy. And so the question is, you know, if I am a financier, if I am a finance company, if I am a health care company, you know, how am I going to know and how are my consumers going to know?

that when they use AI -related capabilities, it’s going to work reliably as intended over the full range of acceptable use cases. And so, Stephen, I want to come to you and ask, when it comes to governance, when it comes to oversight and verification, how do you see the balance of responsibilities in terms of what responsibilities need to fall upon the model developers, what responsibilities need to fall upon the users, what responsibilities need to fall on independent third parties, whether that’s the government, whether that’s auditors, whether that’s this marketplace of verification that Shana is talking about. So what do you see as the balance of responsibilities, and how might this go wrong, how might this go right?

In 30 seconds or less.

Stephen Clare

I mean, I’m sure it’s kind of the boring but true answer. It’s the boring part of it. depends and it’ll vary a lot across use cases and sectors. I think probably it’s not the case that it’s fair or helpful or true to allocate to one actor or another, but instead we need this layered approach of just many different policies and practices at different parts of the stack. Because none of our approaches are foolproof, they all have vulnerabilities, and so we have, instead of safety by design, we have this safety by degree situation where we want defense in depth. So for developers, there will be training techniques that they can implement to make models less likely to elicit dangerous knowledge in the first place.

If there are people building on top of those models and then deploying them, there will be monitoring systems they can put in place and classifiers that identify dangerous queries and stop models from answering them. and then probably for ecosystem monitoring bodies which could be deployers but could also be other institutions in the world there can be tracking how AI content is spreading across borders and around the world and then I think there’s this other aspect of we’re focusing a lot on sort of model or developer safety but as we are moving into this world where many people around the world are having access to powerful, helpful intelligent technologies and we also just need to adapt for that reality and think about resilience at the societal level too of how do we adapt to the beneficial use cases and the various use cases that these models will be used for so thinking about hardening digital systems against increased cyber attacks just sort of admitting the reality of the situation in many ways and adapting to it rather than trying to prevent all harmful uses in the first place I think we need a variety of approaches across all these different actors

Gregory C. Allen

Yeah. And just to use an analogy for how broad the group of stakeholders is, if you think about a ride hailing service, a taxi service like Uber, you have the automobile manufacturers who have to make sure that this is a solid car design that was manufactured safely and appropriately to specification. Then you have Uber, where in some countries Uber owns the car, and so they’re responsible for ensuring that it gets maintenance appropriately. And then you have the driver who’s responsible for ensuring that they are actually following the law and driving the car safely. And if you apply that analogy to AI, you have the model developer, then you might have the sort of business use case deployer, which could be a bank, a medical device company.

Who? A financial institution, whoever. And then you finally have the end customer who’s receiving those services and making sure that they’re using them appropriately. And so. If you think about that sort of different body of use cases, as I said before, the capabilities are not symmetric across all of those. But there are sort of obligations. And so, Shana, I want to come back to you and ask this model that you’re proposing, what exactly does it mean for the different stakeholders in the ecosystem? How does their life change if we adopt the system that you’re in favor of?

Shana Mansbach

Yeah, I mean, the overarching answer is we create trust throughout the system, which is the missing piece here. I think there are a couple of pieces that I would pull out. You had mentioned liability earlier, and let me talk about that a little bit. What this system does, it does not assign liability. It doesn’t say, you know, deployers, developer, it’s you, it’s you, it’s you. We’re seeing, at least in America, courts move their way through this. Sister. court cases move their ways through the court system and we’ll see where that is but where that ends up being but what is really missing is a standard of care and this is I think one of the real advantages that this system has so right now at least how it works in our current tort system is that if you’re Waymo kill someone someone can sue and then a judge and a jury has to figure out so again we’re not answering who should be sued but let’s say that the family of someone who got hurt or killed is suing Waymo what happens is that the jury has to decide whether whether the person who was sued did the right thing and if you are not technical that is the hardest thing even if you are technical and maybe even Waymo doesn’t know So what this system would do is confer, if you are verified, it would confer, the verification would confer a rebuttal presumption of having met a heightened standard of care.

So what we’re doing is clarifying and defining up front before an actual harm happens what a deployer or whoever is sued is actually supposed to do instead of having this very, very messy system where someone after the fact has to figure out what went wrong and who’s responsible for that. I can talk about other layers of this back here, but I think the liability piece is really key. I mean, we just see this. I think it’s a reflection of the trust problem here where when you’re a deployer, I mean, God, I think everyone that I talk to, you know, again, hospital systems, retail, banks, anyone who needs to be consumer facing is really worried about this problem.

I mean, when I get sued, what do I do? And maybe there’ll be. a populist backlash and everyone will hate everyone who’s using AI systems. And it’s much better to, ahead of something like that, ahead of that happening, have that standard of care defined up front and have that seal of approval conferred.

Gregory C. Allen

And Hiroki, as you think about the different stakeholders in the system and especially the idea of auditors, which now there are a number of organizations being founded, it seems like almost every day, who are proposing to provide external evaluation services that can help companies understand, as Shane has said, this product or this service or this company meets the seal of approval and we vouch for it as an independent entity. What kind of momentum do you see for this independent assessment part of the story across regulatory frameworks?

Hiroki Hibuka

Independent evaluation. Independent evaluation is essential given that we are all using AI systems for all different situations, starting from language models to healthcare systems to car driving. But it would be not easy to persuade corporate executives to use the independent audit without clear economic incentives. For example, if you get the certification for autonomous driving, then you can sell the car to the big market. Then, of course, you pay for the audit. But if you take this audit for this language model, then you can prove that this language model is relatively safer than the other models. But it doesn’t necessarily make enough incentive for model developers to conduct the audit or evaluation systems, independent evaluation, because there is no clear financial incentives.

Gregory C. Allen

Actually, could I? I can ask you to elaborate on that. So where might these financial incentives come from? You mentioned one, which is the regulators force you to do it. That’s one. Maybe insurance is one. another like where where might these incentives come from

Hiroki Hibuka

I think it should start from the regulated areas such as cars health care systems finance systems or infrastructures because everybody needs a strong requires strong trust on those systems if it doesn’t work well then somebody might have a baby kills that’s a big problem and maybe you could say hey but in the end if you are killed you can be compensated but it’s not the end of the story while if the damage could be compensated by money by the company and stakeholders are okay with that maybe companies like to just run the system go and and compensate to the victims for example if the language model says something discriminated the company can just say hey we’re very sorry we introduce better guardrails and we pay for that if you want compensation

Gregory C. Allen

in terms of what is possible, what interventions work, what the risks are. But I want to ask about how we go from that degree of consensus to something that might be more of like a standard around procedural implementation. You know, Shana’s term of art is standard of care, which matters a lot in the American legal system. I’m sure it matters a lot in other legal systems. I’m just ignorant about, you know, how and where. And so I’m curious, you know, what do you see as the gap? If these independent evaluators, these independent auditing organizations are emerging, how do they go from we think we’re good at this to, no, this is the accepted best practice?

You know, we have accepted consensus on the risks and the interventions, but, like, how do you turn that into a procedure? Just to give an example to the folks in the audience, I used to work at a rocket company, and the safety standard in the American aerospace industry is AS9100. And in the history, of our company, there’s kind of like a before AS9100 moment, and then there’s an after AS9100 moment. And everything changed for our company, you know, after we got that third -party audit evaluation. A lot of our customers, you know, just said, we do not sign checks for companies that are not AS9100 certified. So, you know, you are deeply steeped in where we are today on the consensus, but how far are we from converting that into standards and procedures for third -party evaluation?

Yeah. I’ll also say one follow -up to Hiroki’s point, too, about auditing. Not only is there sort of a lack of incentives to conduct audits voluntarily now, but there might even be disincentives where one is it’s costly, and it slows you down, and there’s very intense competitive pressures to release faster. And there’s also potentially… like, information or security risks to sharing. You spent hundreds of millions, maybe billions of dollars developing a model, and then you have to share it with an external party before deployment. Like, serious risks to, or perceived risks, at least, to having that information leak or… So I think, yeah, there’s some serious challenges there. I guess there’s one other potential part of the story, which is sometimes you see companies want to be willfully blind, right?

If they have a report that says my product is not safe, well, now they know they’re going to lose the lawsuit. Whereas if they never commission the report, maybe they’ll win the lawsuit. So, Shana, what do you see as meaningful interventions that can help address this problem, both the cost side that Stephen mentioned and the other parts of the incentive structure?

Shana Mansbach

Yeah, let me make a couple of points. I mean, I think we’re talking about the cost of audits, and I think this… this is a big issue that we think about a lot. This system will not work if everyone, if there’s a flat fee, everyone is paying a ton. I mean, we are really, we think that an unsuccessful, there are many ways that a system looks unsuccessful, and one of those ways is if it is just protecting incumbents. And we’re thinking, we envision the system as something that works for, you could verify a general purpose LLM, you could also have narrow AI, you could have a tiny little tool, a little chatbot that is used in schools.

Those three different products should not be audited, not only at the same cost, but in the same way. I mean, compliance isn’t just the check that you’re writing, it is how much of a pain in the butt is it? How many lawyers do you need? How long will this take? So the great thing about this being a marketplace is that the system is right -sized to risk type, to size of these products. and again instead of having just a one size fits all this is what you have to do to comply because I think that that is a real issue it really quickly I just want to go back to you know the question that you asked Hiroki about incentives I mean you can imagine a system where this is mandatory and maybe in some areas you can imagine that but I think that there are three real real carrots for wanting to get verified we talked a little bit about liability so obviously the liability clarity that this is a big carrot I think the insurance piece the insurance piece is real right now we are seeing the big insurers saying we’re not going to touch this we’re not going to insure any AI products because we have no idea what’s inside of them at least in America the way that life insurance works is if you want insurance you have to have a lot of money and a lot of money and a lot of money and a lot of money you have to jump on a scale and tell someone how healthy you are and what are the things that you do and the insurer decides okay are you worthy of being insured and at what premium I think that’s actually a pretty direct analog for what we’re trying to do here where the books are opened and an insurer can look at whether they don’t have to do the testing themselves, but they can look at whether the system has been verified and say, okay, we will actually insure you or we will insure you at a more affordable premium.

I think the third thing is just straight -up market competitive advantage. If I’m a school superintendent and I am choosing between two learning chatbots to put in my schools, I’m not going to choose the one that has not been verified. I want the one that has been verified, that is safest. Yes, because I’m worried about getting sued, but because I want my kids to be safe. And you can imagine a situation much like Underwriters Lab in the United States where basically all consumer products like light bulbs, toothbrushes, basic things that you buy in a store like Walmart, all have the UL seal of approval, and those are the ones that get sold in stores. They have a huge market advantage.

They pay a little bit, but not very much. And in exchange for doing that, they go to market in a way that, or they compete in a market in a way that the ones that don’t go through verification. do. I’m so sorry, Gary, you asked me an actual question and I just answered everyone else’s question and probably not my own.

Gregory C. Allen

It’s okay. You get out of jail free card because you mentioned insurance, which is something I’m deeply interested in right now. I mean, in that space orbital launch vehicle example that I just mentioned, you can’t get insurance for space launches of satellites until you’re AS9100 certified. And that is 10 % of the cost of getting a satellite into space is just the insurance on the rocket. And so basically companies that can’t get insurance can’t compete in the market. And as Shana mentioned, and I think this is a super undercovered story, there are now many of the major insurers in the United States at least are saying, for your enterprise risk policy, AI is not included. So if you are a major bank and you are doing big, important financial transactions, as soon as you start using AI, you’ve lost all your insurance.

And I think the Trump administration in the United States has a very light -touch regulatory approach. And my concern there is that, well, just because the government is not doing anything big and bold on regulation doesn’t mean there will be no regulation. The insurers will step in. And if the insurers exit the market, maybe not in legal terms, but in economic outcome terms, that could be very similar to draconian regulation. So, Shana, you’re mentioning the Underwriters Lab, which is an organization that writes standards that are relied upon by underwriters, the people who are issuing insurance. This is a huge part of the regulatory and governance ecosystem that I think is really important. And so now I’m hoping, Stephen, that you’re going to tell me, that you’ve been reached out to by a bunch of insurance companies, and they’re all reading your report eagerly and thinking about this.

But maybe, maybe not. What’s the case?

Stephen Clare

Not yet, but it’s a really long report. 312 pages, but it goes like that. Maybe I can come back to the best practices point a little bit, because I think we’re talking about auditing here, and at least I know there’s a lot of steps involved, I’m sure, but at least at the technical level, the main tool we have right now to audit the capabilities of the RISC -MD AI model are evaluations. And although in my opening I sort of talked about, oh, it’s great we have this toolkit that’s emerging and it’s strengthening, and that is true, I think on evaluations in particular, as far as like, okay, let’s say we have auditors that are looking at these companies, looking at models, what are they actually looking at to audit or evaluate the models?

I think we actually have a big gap here, a big evaluation gap in terms of, well, how are we actually assessing? So if we’re moving towards best practices, not only do I think we don’t have a sense of the best practices right now, but if we did, they’d be different in a year, because the capabilities are moving too quickly for these technical tools to be in date, for very long. So for example, you’ll have, you know, these evaluations often look like a set of questions related to a certain topic, and you ask the model, so you have a bunch of questions about biosecurity or a bunch of questions about cybersecurity. And if it’s above, if it scores high enough on the test, you say, whoa, this is a dangerous capability, and we need to implement more safeguards or something.

And as far as what’s best practice or safe risk management for a company, we evaluate in terms of, well, does it seem like the safeguards apply proportionately to the risk that you’ve assessed? But I think in many cases, these evaluations we’re using are already not super informative about real -world risk because they’re too narrow. Because you have to build a set of questions that gives you some information about the vast range of use cases in the real world. And as models have become more capable and general and adopted more widely, this has become much more difficult. And I don’t think there’s very many actors out there that are constantly thinking about new ways to evaluate the capabilities.

And so I think this… This is like an important gap in terms of our toolkit. that is, again, quite urgent because these models have been released and we’re using our current evaluations, which are already, in many cases, out of date and not super informative about real -world risk. Shannon, do you want to jump in here?

Shana Mansbach

Yeah. Stephen, I agree with you so much. I mean, all of us are obsessed with benchmarks because that’s kind of all we have, and they’re just so narrow. I spend a lot of time with organizations that we think will become these IVOs, and testing is so, so hard. I mean, think about this. We have a fundamentally stochastic system, so I can ask something 10 times, system 10 times, and I’m going to get 10 different answers. So what does that mean in a safety context? Another problem that we have, what a model outputs is not the same thing as what someone does with it. So think about in the context of mental health. Maybe the model says to 10 different people different versions of, I think you should kill yourself.

Nine times, maybe for nine of those users, that’s fine, they will laugh it out. But for one of those users, there’s going to be a real problem here. and also the multi -turn nature of AI. I mean, you build relationships with these systems and you ask long queries and the stuff just gets really complicated really quickly as technical minds could explain far better than I could. So what we’re trying to do here is incentivize better testing because right now the only people creating evals or eval organizations or doing God’s work, doing awesome stuff, but what does it mean? You’re the best meter out there. I mean, there’s not an incentive to go from good to the best.

And the other actor working, of course, are the labs. And I think many of the labs are actually attempting to be responsible actors here, but again, there’s an incentive gap. I think the only way you’re going to solve this is to have an ecosystem where all of the actors are competing to have the best services, to have the best evaluations, to have the best feedback, to have the best feedback, to have the best feedback, And we hope one day one of these IVOs says, I’ve developed a new type of testing that figures out this kid safety thing that no one has ever thought about. And then the next day someone says, well, we have to be better because then everyone will want to be verified from that organization.

So you are incentivizing ever better testing. And as Stephen says, I think that just given how quickly and dramatically the capabilities and the risks of these systems are increasing, we need really good testing and tooling that can keep up with that. And the only way to do that is to incentivize

Gregory C. Allen

So, Stephen, if I could come to you about what Shana just said. You pointed out how the state of the art in evaluations and assessment is constantly shifting as the capabilities are shifting. I sometimes hear the frontier labs say, yes, and that’s why we’re the only ones who can do the testing, because we’re the ones out there on the frontier. But Shana is making this point about misaligned incentives, which I think we saw. In a conversation you and I had a couple weeks ago in the XAI Grok undressing children kind of example, there’s perverse incentives sometimes at work here in terms of the companies evaluating themselves. So how do you reconcile that gap between the frontier AI labs often do have a unique perspective and a unique understanding, but also it’s really hard to see how we could ever be comfortable with them being the only ones assessing themselves?

Stephen Clare

Well, I can talk about a bit in the context of the report where we try to work with everybody to get the state of the science across the whole landscape. And there I think it is true that there’s this big information asymmetry between the people in the labs who both have the most technical capacity and also the most access to leading models and all of the information about testing and development and all of the information about the technology that’s being used in the lab. And if you don’t draw on that knowledge, you can’t really do anything about it. you’re not going to be able to understand what’s actually going on in the AI world but then I think we brought in a lot of perspectives from academia and society and government feedback to sort of get a full perspective of the landscape as far as what to do going forward to deal with this I think probably it looks something like this with partnerships that are aiming to draw on that knowledge but then aiming for transparency and information sharing that gives third parties and external actors a better understanding of what’s actually going on because it’s true like even writing the report we were reliant on these papers that labs will occasionally publish and drop with like very useful data on how people are using the models or adoption rates but we’re kind of reliant on these like ad hoc publications and then that leaves a lot of gaps across the landscape and different risks and so we you know constantly had the word uncertainty or unknowns in the report because we lack that data outside of the labs

Gregory C. Allen

And do you think that that’s likely to remain the case, or do you think that that could change over time? As we’ve seen, literally, the safety staff of some of these labs quit and start their own auditing companies. So are they likely to have their skills atrophy as they get farther from the development process, or do you think it’s credible that these third -party organizations can build, the word that comes to mind is like economies of scale that are relevant to be able to continue advancing the state -of -the -art of safety and governance, even as the technology keeps evolving?

Stephen Clare

I’m not sure, but I think what we can do is sort of look at the trend, and the trend is towards, I think, a stronger ecosystem around AI labs. As more people, as these problems of lack of data and lack of independent verification are identified more, there’s more people working on it. And then I think we’ve seen some movement towards greater transparency with AI labs as well. So frontier safety frameworks are now a governance mechanism that’s in the EU. AI are in the code of practice, and it’s become institutionalized. It started as a voluntary, anthropic, just published, a responsible scaling policy. And so you see these movements towards sharing more information in more structured ways.

I think also, yesterday, there were the new commitments from the companies at the summit, which were related to sharing data about usage. So I think as a broader set of actors in society are paying attention to AI, because, again, we’re feeling the effects more clearly. It’s becoming more of an economic priority. We’ll see more demand from outside the labs to share this information, and maybe that will lead to some changes.

Gregory C. Allen

Hiroki, you’ve written a ton about AI, but in your capacity as a lawyer, you also have a lot of understanding of many different industries. Are there any lessons learned from other industries here that have solved this sort of technical expertise exists here, but the need for independence exists here? What kind of precedents do you see that we can learn from?

Hiroki Hibuka

Okay, so before that, let me add one more incentive, which is public procurement. If the government says, we recognize this is a very important issue, and we recognize this is a very important issue, and we recognize this is a very important issue, and we recognize this is a very important issue, and we recognize this is a very important issue, and we recognize this is a very important issue, LLM or model is safe and then government procures this standard then it will be a big incentive for developers so that is one thing and when I try to answer your questions I think democratic debate is necessary as to what kind of risk level is acceptable and also what kind of test measures are good because there is any single specific answer as to this is acceptable level of perspective.

For example in Japan every year more than 2 ,000 people were killed by a human driven car and the question is what kind of safety would we require for the autonomous vehicles? Is it okay if the kill number is less than 2 ,000 or would we like to require more safety than human drivers? If so, what would be the level? There is no single answer to that kind of question so we need to debate. in a democratic manner as to what is our acceptable goal. And also about the test measures. For example, we can just simply compare the number of rates per kilometers, but if you test in a very safe straight highway, of course it’s easier to get to safety.

While if you try to drive in a pretty complex city, it’s gonna be very difficult. So how to measure how to define the test method is another question. And I don’t go into the details, but the thing that discussion has been done in a lot of industries, car industries, or finance industries, or aerospace industries, we can certainly do a lot of lessons learned from the existing.

Gregory C. Allen

Yeah, one analogy that as you were talking, you jogged my memory, is the National Highway Transportation Safety Administration in the United States, which actually industry begged for this organization to be created. They did. in the 60s and 70s because they said, look, all of us are going to claim that we have safe cars, but only some of us are making big investments in becoming safe, and we want to reward the people whose good behavior is making big safety investments. And so they created this new organization which would give cars a safety rating on one to five, five star or one star. And so now the companies can only get a five star rating if they’re actually doing what it takes to be safe.

And consumers, you know, they’re not always qualified to rip open their car’s engine and see what it looks like under the hood, what’s safe, but they can interpret that five star rating. And so my idea was to ask you, Shana, to elaborate on this in the context of your model, but I’m now scared of the beeper, which is quite loud and scary. So please join me in thanking our terrific panel. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (12)
Factual NotesClaims verified against the Diplo knowledge base (3)
Confirmedhigh

“The International AI Safety Report was drafted as the shared evidence base for the 2023 Bletchley Safety Summit and is backed by more than thirty countries and intergovernmental organisations.”

The AI Safety Summit was held at Bletchley Park with participation from China, the United States, the European Union and over 25 other nations, demonstrating broad multi-country backing, and the summit produced the Bletchley Declaration establishing a shared understanding of AI risks [S76] and [S77].

Confirmedhigh

“Risks once theoretical are now observable at scale, with concrete harms such as deep‑fake proliferation and AI‑enabled cyber‑attacks.”

Discussions of AI risk management explicitly cite the spread of deepfakes and the rise of AI-enabled cyber attacks as emerging threats [S1].

Additional Contextmedium

“The International AI Safety Report is modelled on IPCC reports.”

The IPCC is referenced as a successful example of an international, evidence-based report that creates a shared factual base for policy, providing context for why the AI Safety Report would adopt a similar structure [S75].

External Sources (86)
S1
Who Watches the Watchers Building Trust in AI Governance — – Hiroki Hibuka- Shana Mansbach – Stephen Clare- Hiroki Hibuka
S2
Lights, Camera, Deception? Sides of Generative AI | IGF 2023 WS #57 — Hiroki Habuka, Civil Society, Asia-Pacific Group
S3
Who Watches the Watchers Building Trust in AI Governance — – Stephen Clare- Hiroki Hibuka- Shana Mansbach – Stephen Clare- Shana Mansbach
S4
Who Watches the Watchers Building Trust in AI Governance — -Gregory C. Allen: Moderator/Host of the panel discussion
S5
Who Watches the Watchers Building Trust in AI Governance — – Hiroki Hibuka- Shana Mansbach – Shana Mansbach- Gregory C. Allen
S6
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S7
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S8
Global telecommunication and AI standards development for all — Bilel Jamoussi:Thank you, thank you LJ and good afternoon everyone. I’d like to invite a list of colleagues for a big an…
S9
Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477 — Policymakers need to support collaboration between different sectors The analysis underscores the critical need for cyb…
S10
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — Lastly, inclusive involvement of the technical community in the policy-making process is advocated. The technical commun…
S11
AI Safety at the Global Level Insights from Digital Ministers Of — So I think there’s that. I do think that it needs to be obviously multi -sector. It’s a fairly obvious point. How do you…
S12
Artificial intelligence (AI) – UN Security Council — During the9821st meetingof the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S13
Why science metters in global AI governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S14
AI Meets Cybersecurity Trust Governance &amp; Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S15
https://dig.watch/event/india-ai-impact-summit-2026/who-watches-the-watchers-building-trust-in-ai-governance — I mean, I’m sure it’s kind of the boring but true answer. It’s the boring part of it. depends and it’ll vary a lot acros…
S16
AI That Empowers Safety Growth and Social Inclusion in Action — Despite sophisticated frameworks and governance structures, significant implementation challenges remain. The gap betwee…
S17
Advancing Scientific AI with Safety Ethics and Responsibility — And also create more awareness about the main fundamental thing is that they will be expected to document whatever testi…
S18
Networking Session #232 Bringing Safety Communities Together a Fishbowl Style Event — Legal frameworks exist but enforcement remains problematic due to lack of understanding within judiciary and law enforce…
S19
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Current syllabuses are outdated and policy makers face pressure to update educational frameworks rapidly
S20
Main Session 2: The governance of artificial intelligence — Different sectors (financial services, agriculture, healthcare) require different regulatory approaches, but there’s a n…
S21
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived ina…
S22
Consumer data rights from Japan to the world | PART 1 | IGF 2023 — Minako Morita-Jaeger:Thank you, Javier. My PowerPoint. Yes, lovely. And then you can, yeah, and you can change it when I…
S23
Japan favours softer AI regulations — Japan is paving the way for a more lenient approach to AI regulation, as an official familiar with the deliberationsreve…
S24
AI as critical infrastructure for continuity in public services — “We shouldn’t be fixing things after the fact, but we should go on an input before the deployment.”[115]. “The second on…
S25
Open Forum #26 High-level review of AI governance from Inter-governmental P — Yoichi Iida: Thank you very much, Ambassador. You talked about a lot of various risks and challenges. In particular, y…
S26
WS #123 Responsible AI in Security Governance Risks and Innovation — These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, a…
S27
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Kyoko Yoshinaga:Thank you, Michael. Welcome to Japan. I’m Kyoko in Kyoto. Okay. So let me, first of all, give you a brie…
S28
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S29
Global AI Governance: Reimagining IGF’s Role &amp; Impact — Legal and regulatory | Privacy and data protection | Cybersecurity Regulatory approach – existing laws vs new framework…
S30
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Context-specific deployment focusing on appropriate use cases can unlock both productivity and trust simultaneously
S31
In brief — The scientific literature on the evaluation of humanitarian assistance is extensive. Approaches include the scient…
S32
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Kamesh Shekar:Thank you so much, Luca. And so, yeah, so I guess we have very few time to rush through the paper. But our…
S33
Opening — As new technologies emerge, there is a need to assess whether existing governance frameworks are sufficient or if new on…
S34
WS #134 Data governance for children: EdTech, NeuroTech and FinTech — Current laws and regulations may not provide sufficient coverage for emerging technologies like neurotechnology. Some co…
S35
New Technologies and the Impact on Human Rights — **Implementation Over Innovation**: There was consensus that established international frameworks provide adequate found…
S36
Global AI Policy Framework: International Cooperation and Historical Perspectives — Despite coming from different backgrounds (diplomatic/legal vs academic), both speakers advocate for patience and carefu…
S37
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — “But just thinking about closing the AI insurance divide, we released this paper, and in it we talk about around six cha…
S38
Who Watches the Watchers Building Trust in AI Governance — Actually, could I? I can ask you to elaborate on that. So where might these financial incentives come from? You mentione…
S39
Japan’s move toward active cyber defence: a strategic shift in national security — On 10 September, the Liberal Democratic Party (LDP)proposeda groundbreaking system of ‘active cyber defence’ (Nōdō-teki …
S40
The Protection of Children Online — National policies vary greatly in their reliance on technical measures. Countries such as Australia and Ja…
S41
Japan passes landmark cyber defence bill — Japan has passed theActive Cyber Defence Bill, which permits the country’s military and law enforcement agencies to unde…
S42
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe Metzger:Thank you, Bilel. Maybe to be as succinct as possible, just would like to mention four areas, which I t…
S43
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The discussion highlighted the importance of policy interoperability rather than uniform global governance, recognizing …
S44
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S45
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S46
Driving Social Good with AI_ Evaluation and Open Source at Scale — This highlights the complexity of contextual safety requirements and the need for flexible evaluation frameworks
S47
Procuring modern security standards by governments&amp;industry | IGF 2023 Open Forum #57 — An important observation highlighted by the coalition is the lack of recognition of open internet standards by governmen…
S48
Policies and platforms in support of learning: towards more coherence, coordination and convergence — – (a) Common standards for needs assessment and evaluation of learning programmes; – (b) Coordination and possibly int…
S49
AI Safety at the Global Level Insights from Digital Ministers Of — There’s a need to develop an independent evaluation ecosystem similar to accounting auditors, but the optimal structure …
S50
https://dig.watch/event/india-ai-impact-summit-2026/who-watches-the-watchers-building-trust-in-ai-governance — Independent evaluation. Independent evaluation is essential given that we are all using AI systems for all different sit…
S51
The Overlooked Peril: Cyber failures amidst AI hype — Developing and enforcing legal and policy instruments, such as the 11 UN cyber norms, is imperative. These norms provide…
S52
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — Lastly, inclusive involvement of the technical community in the policy-making process is advocated. The technical commun…
S53
Leveraging AI4All_ Pathways to Inclusion — The report identified three interconnected pillars essential for inclusive AI: design, access, and investment. The desig…
S54
Table of Contents — Closely linked and o/ften a consequence of government internal policies is public procurement. Where standards and commo…
S55
https://dig.watch/event/india-ai-impact-summit-2026/leveraging-ai4all_-pathways-to-inclusion — This is where you have to make sure AI is usable in real world conditions. I know we’re in the AI Impact Summit, but som…
S56
Please cite this document as: — 8. Members should take greater account of environmental criteria in public procurement of ICT goods and services and inc…
S57
WS #123 Responsible AI in Security Governance Risks and Innovation — These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, a…
S58
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Galia, one of the speakers, emphasizes the mapping exercises conducted with the OECD regarding risk assessment. This sug…
S59
Who Watches the Watchers Building Trust in AI Governance — This comment exposes a fundamental technical limitation in current AI safety approaches: the evaluation methods themselv…
S60
How to make AI governance fit for purpose? — Shan emphasized international collaboration through the ITU and global standards development, expressing concern about p…
S61
From Technical Safety to Societal Impact Rethinking AI Governanc — This comment set the entire tone and direction of the discussion, establishing the framework that all subsequent panelis…
S62
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Kyoko Yoshinaga:Thank you, Michael. Welcome to Japan. I’m Kyoko in Kyoto. Okay. So let me, first of all, give you a brie…
S63
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S64
Online trust: between competences and intentions — Trust (or the lack thereof) is a frequent theme in public debates. It is often seen as a monolithic concept. However, we…
S65
https://dig.watch/event/india-ai-impact-summit-2026/who-watches-the-watchers-building-trust-in-ai-governance — Sure. My panelists have set me up very well to say this. So I think as the International AI Safety Report shows, the cap…
S66
OECD DIGITAL ECONOMY PAPERS — These gaps result from misaligned incentives, a lack of awareness, externalities, a misperception of risks and informati…
S67
Advancing Scientific AI with Safety Ethics and Responsibility — Oversight should be distributed across multiple entities rather than relying on a single central authority, creating che…
S68
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Kamesh Shekar:Thank you so much, Luca. And so, yeah, so I guess we have very few time to rush through the paper. But our…
S69
Networking Session #60 Risk &amp; impact assessment of AI on human rights &amp; democracy — – Clara Neppel: Senior Director at IEEE David Leslie: Can everyone hear me? Samara, can you hear me? Hello? Hello? …
S70
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Moderator – Massimo Marioni:AI’s role in securing the future. Dr. Helmut Reisinger, Chief Executive Officer, EMEA and LA…
S71
Opening address of the co-chairs of the AI Governance Dialogue — ## Themes from Previous Year 3. Establishing international technical standards that allow policy and regulation to rema…
S72
Laying the foundations for AI governance — – **Lan Xue**: Dean (Dean Xue Lan), expertise in governance and policy Artemis Seaford: That is a great question. So th…
S73
Multi-stakeholder Discussion on issues about Generative AI — Furthermore, Andrade highlights the significance of dialogue and cooperation in the global AI landscape. He particularly…
S74
Report outlines security threats from malicious use of AI — The Universities of Cambridge and Oxford, the Future of Humanity Institute, Open AI, the Electronic Frontier Foundation …
S75
Will science diplomacy survive? — Science in diplomacyis about using scientific evidence and advice for foreign policy decision-making. In these cases, so…
S76
China, the US, EU, and 25+ countries have joined forces to manage the risks of AI — At the AI Safety Summit hosted at Bletchley Park in England, representatives from China, the United States, the European…
S77
AI Safety Summit adopts Bletchley Declaration — On the first day of theUK AI Safety Summit, the government of the UK introduced the ‘Bletchley Declaration’ on AI safety…
S78
Knowledge Café: WSIS+20 Consultation: Strenghtening Multistakeholderism — Anita Gurumurthy: Thanks. So suggestions at the international level, as well as national to local. So I will start with …
S79
A Global Compact for Digital Justice: Southern perspectives | IGF 2023 — Anita Gurumurthy:But of course, like everything that is political and has an opportunity in the horizon, we will deal wi…
S80
Global Risks 2025 / Davos 2025 — Gillian R. Tett: Well, as somebody who obviously benefited in the past as being part of the media from vertical trust,…
S81
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — Audience:I hope someone can hear me, I still can’t have my video on. We can hear you clearly. Thank you. So this has bee…
S82
Setting the Scene  — This insight is particularly thought-provoking because it identifies that while technology and protection methods are im…
S83
UK report quantifies rapid advances in frontier AI capabilities — For the first time, the UK has published adetailed, evidence-based assessmentof frontier AI capabilities. The Frontier A…
S84
Climate change and Technology implementation | IGF 2023 WS #570 — This would enable broader adoption of these solutions, fostering real progress in addressing climate change and achievin…
S85
Non-regulatory approaches to the digital public debate | IGF 2023 Open Forum #139 — The lack of compliance of private tech companies and states with human rights obligations online propels effects of onli…
S86
Global Enterprises Show How to Scale Responsible AI — -Regulatory Approaches and Global Alignment: The panel debated whether global regulatory alignment is necessary or feasi…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
G
Gregory C. Allen
3 arguments167 words per minute2623 words939 seconds
Argument 1
The International AI Safety Report is the essential foundation for any AI governance discussion.
EXPLANATION
Allen emphasizes that the report represents the minimum body of knowledge required to meaningfully participate in AI governance debates, positioning it as the bedrock for future policy work.
EVIDENCE
He praises Stephen’s report as a remarkable document that forms the foundation for all conversations about AI governance and describes it as the minimum amount of knowledge needed to join the discussion [2-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report is described as an IPCC-style evidence base for AI governance and a shared scientific foundation for policy work [S12][S13].
MAJOR DISCUSSION POINT
Report as foundational knowledge for AI governance
AGREED WITH
Stephen Clare
Argument 2
Policymakers must translate existing technical safeguards into robust, diverse implementation across sectors.
EXPLANATION
While technical safety tools have improved, the remaining challenge lies in ensuring that policymakers adopt and enforce these safeguards consistently and at scale.
EVIDENCE
Allen notes that good news includes many techniques that demonstrably increase safety, but the real challenge now is that “the good news is we’ve done a lot about it… the challenges are now in the hands of policymakers to make sure that these safeguards get implemented robustly and diversely” [62-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for multi-sector collaboration and agile, sector-spanning regulation highlight the need for policymakers to operationalise technical safeguards [S9][S21].
MAJOR DISCUSSION POINT
Policy implementation of AI safety techniques
AGREED WITH
Stephen Clare, Shana Mansbach
Argument 3
The insurance market can serve as a powerful lever to drive AI safety standards, and the current lack of coverage creates market pressure.
EXPLANATION
Allen points out that major insurers are refusing to cover AI‑related risks, which could compel companies to adopt verification and safety standards in order to obtain necessary insurance and remain competitive.
EVIDENCE
He explains that many U.S. insurers are not including AI in enterprise risk policies, meaning banks and other firms lose insurance coverage when they use AI, and that this insurance gap can act as a lever for safety standards [244-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of insurance as a lever for AI safety, including parallels with other high-risk domains where coverage depends on safety certification, supports this claim [S1].
MAJOR DISCUSSION POINT
Insurance as lever for AI safety compliance
AGREED WITH
Shana Mansbach
S
Stephen Clare
4 arguments189 words per minute2162 words685 seconds
Argument 1
The International AI Safety Report functions as an IPCC‑like evidence base for AI governance.
EXPLANATION
Clare describes the report as a shared, state‑of‑the‑art evidence base that helps decision‑makers understand what is known and unknown about general‑purpose AI risks.
EVIDENCE
He explains that the report was founded at the 2023 Bletchley Safety Summit, is backed by over 30 countries and intergovernmental organisations, and aims to be the “IPCC report for AI,” summarising what we know and don’t know about AI risks [19-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report is positioned as an IPCC-like, state-of-the-art evidence base for AI risk assessment in global governance debates [S12][S13].
MAJOR DISCUSSION POINT
Report as evidence base for AI governance
AGREED WITH
Gregory C. Allen
Argument 2
Technical safeguards have markedly improved, making model jailbreaks significantly harder.
EXPLANATION
Clare highlights that recent models resist jailbreak attempts far better than earlier versions, with the time required for successful evasion increasing from minutes to many hours.
EVIDENCE
He cites the UK Security Institute’s attempts, noting that at the start of 2025 a jailbreak could be achieved in minutes, whereas for the latest models it now takes seven to ten hours, demonstrating that “it’s becoming much, much harder to evade them” [42-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Observations that evading modern models now takes many hours, making jailbreaks substantially more difficult, are noted in recent safety assessments [S1].
MAJOR DISCUSSION POINT
Improvement of technical safeguards
AGREED WITH
Gregory C. Allen
Argument 3
Organisational safety frameworks are expanding but remain inconsistent, creating a governance challenge around compliance.
EXPLANATION
Clare observes that while more AI developers now publish frontier safety frameworks, the scope and rigor of these frameworks vary, and many companies still apply them unevenly.
EVIDENCE
He notes that twelve leading AI developers now have frontier safety frameworks, but the risks covered and recommended practices differ across firms, leading to inconsistent application and a need for broader compliance mechanisms [48-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of gaps between adopted safety principles and their practical implementation underline the inconsistency of organisational frameworks [S16].
MAJOR DISCUSSION POINT
Need for consistent application of safety frameworks
AGREED WITH
Gregory C. Allen, Shana Mansbach
Argument 4
Current AI evaluation methods are narrow and quickly become outdated, necessitating new dynamic assessment tools.
EXPLANATION
Clare argues that existing evaluations rely on limited question sets that fail to capture real‑world risk, and because AI capabilities evolve rapidly, these tools lose relevance fast.
EVIDENCE
He describes evaluations as “a set of questions related to a certain topic,” which are often too narrow to be informative about real-world risk, and stresses that the rapid evolution of models makes many of these evaluations obsolete within a short time frame [258-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Critiques of existing benchmarks as quickly becoming obsolete and insufficient for real-world risk assessment are highlighted in recent governance reviews [S1].
MAJOR DISCUSSION POINT
Gap in AI evaluation tools
AGREED WITH
Hiroki Hibuka, Shana Mansbach, Gregory C. Allen
H
Hiroki Hibuka
3 arguments149 words per minute1274 words509 seconds
Argument 1
All countries already possess both hard and soft AI regulations; the key difference lies in sector‑specific versus holistic regulatory approaches.
EXPLANATION
Hibuka contends that the debate should focus on how existing legal regimes are adapted for AI, noting that the EU, Japan, the UK and the US all employ a mix of hard and soft law, but differ in whether they regulate AI holistically or by sector.
EVIDENCE
He references the EU AI Act as a prominent regulation, then explains that privacy, copyright and sector-specific laws already apply to AI, asserting that “all countries have both hard laws and soft laws” and that the main distinction is between holistic and sector-specific regulation [80-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions of sector-specific versus holistic AI regulatory models across jurisdictions illustrate this distinction [S20].
MAJOR DISCUSSION POINT
Existing legal frameworks and regulatory approaches
Argument 2
Japan’s pre‑emptive, soft‑law‑focused regulatory culture needs a more agile, multi‑stakeholder governance model.
EXPLANATION
Hibuka points out that Japanese companies excel at complying with prescribed rules but struggle to create their own governance mechanisms, suggesting that a flexible, stakeholder‑driven approach is required to keep pace with AI advances.
EVIDENCE
He notes that Japan experiences very low loss numbers, prefers setting rules in advance, and that Japanese firms are good at following given rules but not at creating their own governance, leading to a call for a more agile, multi-stakeholder approach [88-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on Japan’s preference for softer AI regulation and calls for more agile, multi-stakeholder governance support this view [S23].
MAJOR DISCUSSION POINT
Need for agile, multi‑stakeholder AI governance in Japan
Argument 3
Independent AI audits are essential but lack clear economic incentives; public procurement can provide a strong motivator.
EXPLANATION
Hibuka argues that without financial incentives, corporations are reluctant to undergo independent evaluation, and that government procurement policies could create market demand for verified, safe AI systems.
EVIDENCE
He explains that executives need clear economic incentives to adopt audits, cites the difficulty of persuading them without such incentives, and proposes that government procurement of verified models would create a powerful incentive for developers [187-195][318-328].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses note the absence of clear financial incentives for independent audits and suggest government procurement as a potential driver [S1].
MAJOR DISCUSSION POINT
Incentivising independent AI audits
AGREED WITH
Stephen Clare, Shana Mansbach, Gregory C. Allen
S
Shana Mansbach
4 arguments175 words per minute2464 words843 seconds
Argument 1
The rapid surge in AI capabilities has created a pervasive trust deficit among the public, deployers, regulators and developers.
EXPLANATION
Mansbach describes how the accelerating performance of AI systems leaves every stakeholder uncertain about safety, security and reliability, undermining confidence in AI deployments.
EVIDENCE
She outlines trust problems for the public, for deployers such as hospitals and banks, for regulators, and for developers, noting that “the capabilities are surging, and so too does the uncertainty around the risks” and that this creates a “trust problem” across all groups [106-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust challenges arising from fast-moving AI capabilities are highlighted in discussions of AI governance and security trust deficits [S14].
MAJOR DISCUSSION POINT
Trust problem caused by AI capability surge
Argument 2
An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance.
EXPLANATION
Mansbach proposes a government‑authorized marketplace where independent verifiers test AI systems against outcome goals (e.g., child safety, privacy), providing flexibility and continuous updates that match the rapid evolution of AI.
EVIDENCE
She describes IVOs as “government-authorized and overseen marketplace of independent verifiers” that assess outcomes such as children’s safety, data privacy, controllability, and interpretability, emphasizing independence, democratic accountability, flexibility, and a “race to the top” for better testing [111-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proposals for government-authorized verification marketplaces and independent oversight mechanisms align with this outcome-based model [S1][S24].
MAJOR DISCUSSION POINT
Outcomes‑based independent verification model
AGREED WITH
Hiroki Hibuka, Stephen Clare, Gregory C. Allen
Argument 3
Verification can establish a pre‑emptive standard of care, clarifying liability and reducing post‑incident legal uncertainty.
EXPLANATION
Mansbach argues that a verification seal would confer a rebuttal presumption that a developer or deployer has met a heightened standard of care, simplifying court decisions and providing clearer liability expectations before any harm occurs.
EVIDENCE
She explains that verification would “confer a rebuttal presumption of having met a heightened standard of care,” shifting the legal analysis from post-harm fact-finding to an upfront definition of required practices, thereby easing the burden on juries and courts [174-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of a verification seal that creates a rebuttal presumption of heightened care is discussed in recent governance literature [S1].
MAJOR DISCUSSION POINT
Verification as pre‑emptive liability standard
Argument 4
Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach.
EXPLANATION
Mansbach stresses that different AI products (LLMs, narrow AI, school chatbots) require tailored audit scopes and fees, and that a marketplace structure can align costs with the specific risk profile of each product.
EVIDENCE
She notes that “the system is right-sized to risk type, to size of these products,” and that a single uniform audit would be inappropriate for diverse AI offerings, emphasizing the need for proportionality in compliance costs [224-230].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for layered, risk-based regulatory approaches that avoid uniform treatment across AI products support this proportionality argument [S15].
MAJOR DISCUSSION POINT
Proportionate, risk‑based audit scaling
AGREED WITH
Hiroki Hibuka
Agreements
Agreement Points
The International AI Safety Report is the essential foundation/evidence base for AI governance discussions.
Speakers: Gregory C. Allen, Stephen Clare
The International AI Safety Report is the essential foundation for any AI governance discussion. The International AI Safety Report functions as an IPCC‑like evidence base for AI governance.
Both speakers emphasize that the Report provides the minimum knowledge required to participate in AI governance and serves as an IPCC-style shared evidence base for policymakers. [2-3][19-23]
POLICY CONTEXT (KNOWLEDGE BASE)
High-level AI governance forums have repeatedly emphasized the need for a shared evidence base, noting that reports such as the International AI Safety Report provide the empirical foundation for policy work (e.g., IGF 2025 high-level review) [S44] and echo calls for evidence-based AI policy across multiple stakeholder groups [S45].
Technical safeguards have improved, but policymakers must ensure their robust, widespread implementation.
Speakers: Gregory C. Allen, Stephen Clare
Policymakers must translate existing technical safeguards into robust, diverse implementation across sectors. Technical safeguards have markedly improved, making model jailbreaks significantly harder.
Both note that recent models are much harder to jailbreak, yet the remaining challenge is for policymakers to translate these technical gains into consistent, sector-wide safeguards. [62-65][35-40][51-57]
POLICY CONTEXT (KNOWLEDGE BASE)
Several jurisdictions, notably Japan and Australia, have pioneered the adoption of technical safeguards for online safety, illustrating the policy relevance of robust implementation [S40]; broader discussions stress that the main challenge lies in implementing existing safeguards rather than inventing new ones [S35].
Current organisational safety frameworks are inconsistent; a systematic, outcomes‑based verification mechanism is needed for consistent compliance.
Speakers: Gregory C. Allen, Stephen Clare, Shana Mansbach
Policymakers must translate existing technical safeguards into robust, diverse implementation across sectors. Organisational safety frameworks are expanding but remain inconsistent, creating a governance challenge around compliance. An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance.
All three agree that while many firms now publish safety frameworks, their scope and rigor vary, creating a compliance gap that could be closed by a government-authorized outcomes-based verification marketplace. [62-65][48-57][111-130]
POLICY CONTEXT (KNOWLEDGE BASE)
The need for an independent evaluation ecosystem comparable to accounting auditors has been highlighted as a priority for AI safety assurance [S49]; standards-as-implementation tools are also advocated to create systematic, outcomes-focused verification processes [S42].
The insurance market can serve as a powerful lever to drive AI safety verification and standards.
Speakers: Gregory C. Allen, Shana Mansbach
The insurance market can serve as a powerful lever to drive AI safety standards, and the current lack of coverage creates market pressure. Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach.
Both highlight that insurers are currently refusing AI coverage, creating market pressure, and that a verification seal could unlock insurance and lower premiums, providing a strong incentive for safety compliance. [244-250][221-231]
POLICY CONTEXT (KNOWLEDGE BASE)
Recent research on closing the AI insurance divide identifies insurance as a key market lever that can shape risk profiling and drive verification standards [S37]; discussions on financial incentives further underline insurance’s role in motivating compliance [S38].
Independent verification/audits are essential but require new incentive structures and a marketplace of IVOs to fill the evaluation gap.
Speakers: Hiroki Hibuka, Stephen Clare, Shana Mansbach, Gregory C. Allen
Independent AI audits are essential but lack clear economic incentives; public procurement can provide a strong motivator. Current AI evaluation methods are narrow and quickly become outdated, necessitating new dynamic assessment tools. An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance. Terrific. And if I could contrast what you said with what we might have said if we were having this conversation back at the Bletchley Park AI Summit.
All agree that third-party verification is needed; however, current incentives are weak. A marketplace of IVOs, supported by public procurement and clearer standards, can address the evaluation gap. [187-195][258-267][111-130][185-186]
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for a dedicated independent evaluation ecosystem for AI mirror the accounting-audit model and stress the need for new incentive structures [S49]; persuading corporate leaders without clear economic incentives remains a challenge, highlighting the importance of market-based mechanisms such as insurance or regulator-driven mandates [S50, S38].
Audit costs should be proportional to risk and product size, with mechanisms such as public procurement or market demand aligning incentives.
Speakers: Shana Mansbach, Hiroki Hibuka
Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach. Independent evaluation is essential … but it would not be easy to persuade corporate executives to use the independent audit without clear economic incentives.
Both stress that a one-size-fits-all audit is inappropriate; costs and rigor must match the specific risk profile, and incentives like public procurement can make audits attractive to firms. [224-230][187-195][318-328]
POLICY CONTEXT (KNOWLEDGE BASE)
Public procurement policies that reference technical standards are seen as a way to align incentives and ensure risk-based auditing [S54]; risk-based insurance pricing further supports proportional audit costs [S37]; broader incentive-design discussions emphasize regulator-driven and market-driven levers [S38].
Similar Viewpoints
Both recognize that a substantial body of existing regulations (hard and soft) already exists worldwide, but their application is uneven and sector‑specific, requiring updates rather than entirely new laws. [48-57][80-86]
Speakers: Stephen Clare, Hiroki Hibuka
Organisational safety frameworks are expanding but remain inconsistent, creating a governance challenge around compliance. All countries already possess both hard and soft AI regulations; the key difference lies in sector‑specific versus holistic regulatory approaches.
Both point to the inadequacy of current evaluation benchmarks and argue for new, continuously updated testing frameworks delivered by independent verifiers. [258-267][111-130]
Speakers: Stephen Clare, Shana Mansbach
Current AI evaluation methods are narrow and quickly become outdated, necessitating new dynamic assessment tools. An outcomes‑based marketplace of independent verification organizations (IVOs) can overcome the speed and technical‑capacity gaps of traditional command‑and‑control AI governance.
Unexpected Consensus
Insurance as a primary market lever for AI safety compliance.
Speakers: Gregory C. Allen, Shana Mansbach
The insurance market can serve as a powerful lever to drive AI safety standards, and the current lack of coverage creates market pressure. Audits must be risk‑proportionate; a marketplace can scale verification costs to product size and risk, avoiding a one‑size‑fits‑all approach.
While Gregory approaches the topic from a policy-maker’s perspective and Shana from a think-tank/market-design angle, both converge on the insight that insurers’ refusal to cover AI creates a strong incentive for verification and standards-an alignment not explicitly anticipated at the start of the discussion. [244-250][221-231]
POLICY CONTEXT (KNOWLEDGE BASE)
The AI insurance divide paper argues that insurance can serve as the primary market mechanism to enforce safety standards and drive compliance across AI providers [S37]; complementary analyses discuss how insurance-based incentives can be operationalised within regulatory frameworks [S38].
Overall Assessment

The panel shows strong convergence on four pillars: (1) the International AI Safety Report as the foundational evidence base; (2) technical safeguards have improved but need policy‑driven, sector‑wide enforcement; (3) independent, outcomes‑based verification is essential to bridge evaluation gaps; and (4) financial mechanisms—especially insurance and procurement incentives—can drive adoption of verification standards. There is high consensus on the need for a risk‑proportionate, market‑aligned verification ecosystem, and moderate consensus on the exact regulatory approach (sector‑specific vs holistic).

High consensus on the necessity of standards, verification marketplaces, and insurance‑driven incentives; moderate consensus on how existing regulatory regimes should be adapted. The alignment suggests momentum toward establishing a formal IVO marketplace linked to insurance and procurement requirements, which could shape future AI governance frameworks.

Differences
Different Viewpoints
What primary economic incentive should drive adoption of independent AI audits?
Speakers: Hiroki Hibuka, Shana Mansbach, Gregory C. Allen
Independent evaluation is essential but lacks clear economic incentives; public procurement could create demand (Hiroki) [187-195][318-328] Insurance can serve as a carrot, providing a rebuttal presumption of higher standard of care and market advantage for verified products (Shana) [221-238][250-251] The current lack of AI coverage by insurers creates market pressure; insurers stepping in could act like regulation (Gregory) [244-250]
All three speakers agree that independent audits are needed, but they diverge on the most effective lever: Hiroki stresses government procurement contracts, Shana highlights insurance coverage and competitive market signals, while Gregory points to the broader insurance gap as a de-facto regulatory driver [187-195][318-328][221-238][250-251][244-250].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders have identified regulator-mandated compliance, insurance-linked liability, and market demand as the main economic incentives that could spur adoption of independent AI audits [S38]; the difficulty of securing executive buy-in without clear financial benefits is also documented [S50].
Whether existing hard‑ and soft‑law frameworks are sufficient or new governance mechanisms are required.
Speakers: Hiroki Hibuka, Stephen Clare, Gregory C. Allen
All countries already have hard and soft AI regulations; the challenge is updating them rather than creating new rules (Hiroki) [80-86] Organisational safety frameworks are expanding but remain inconsistent, creating a governance gap that needs broader compliance mechanisms (Stephen) [48-57] Policymakers must translate technical safeguards into robust, diverse implementation across sectors (Gregory) [62-65]
Hiroki views the current legal mix as a sufficient foundation that merely needs refinement, whereas Stephen and Gregory argue that the present frameworks are fragmented and that new, possibly outcome-based, governance tools are required to ensure consistent safety and compliance [80-86][48-57][62-65].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the adequacy of current governance frameworks are reflected in multiple sources: an assessment of whether 20-year-old mechanisms can address new technologies [S33]; recognition that existing laws may not cover emerging domains such as neuro-tech [S34]; consensus that international frameworks provide a solid foundation but implementation is the key challenge [S35]; calls for patience and careful evaluation before introducing new regulations [S36]; and discussions on policy interoperability versus uniform global governance [S43].
Unexpected Differences
Effectiveness of Japan’s pre‑emptive, low‑loss regulatory culture versus the need for stronger technical safeguards.
Speakers: Hiroki Hibuka, Stephen Clare
Japan’s low loss numbers and preference for setting rules in advance suggest its current approach works (Hiroki) [88-90] Technical safeguards remain vulnerable and inconsistently applied across the industry, indicating existing approaches are insufficient (Stephen) [51-57]
Hiroki presents Japan’s pre-emptive, soft-law-focused model as largely successful, whereas Stephen stresses ongoing technical vulnerabilities and uneven adoption, an unexpected contrast given both discuss safety but from opposite assessments of current effectiveness [88-90][51-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Japan’s recent active cyber-defence legislation exemplifies a pre-emptive, low-loss regulatory approach, allowing proactive measures against cyber threats [S39, S41]; at the same time, Japan has been a leader in deploying technical safeguards for online safety, providing a historical contrast between pre-emptive policy and technical solutions [S40].
Overall Assessment

The panel shows moderate disagreement centered on how to create effective incentives for independent AI audits and whether existing regulatory regimes are adequate. While all participants agree on the necessity of stronger governance and verification, they diverge on the primary levers (public procurement, insurance, market competition) and on whether new outcome‑based mechanisms are needed beyond current hard/soft law structures.

The disagreements are substantive but not polarising; they reflect different policy‑design preferences rather than outright conflict, suggesting that a blended approach—combining regulatory updates, insurance‑driven standards, and procurement‑linked audits—could reconcile the viewpoints and advance AI safety governance.

Partial Agreements
They share the goal of establishing independent verification but differ on the mechanism: Stephen focuses on improving evaluation tools, Shana on creating a government‑authorized IVO marketplace, and Hiroki on coupling audits with public procurement incentives [258-267][111-130][187-195].
Speakers: Stephen Clare, Shana Mansbach, Hiroki Hibuka
All agree that independent verification/auditing is essential to close the trust and safety gap (Stephen notes evaluation gap; Shana proposes IVO marketplace; Hiroki stresses need for independent evaluation) [258-267][111-130][187-195]
They concur that existing evaluation methods are insufficient, but Stephen frames it as a technical gap needing new tools, while Shana emphasizes the need for a broader outcomes‑based verification ecosystem to keep pace with rapid capability growth [258-267][271-277].
Speakers: Stephen Clare, Shana Mansbach
Both highlight that current AI benchmarks/evaluations are narrow, quickly become outdated, and impede reliable risk assessment (Stephen) [258-267]; (Shana) [271-277]
Takeaways
Key takeaways
The International AI Safety Report (2023‑2026) provides a baseline knowledge set for AI governance and shows that real‑world risks from general‑purpose AI are now material. Technical safeguards (jailbreak resistance, safety frameworks) have improved markedly, but remain vulnerable to skilled attacks and are not uniformly applied across the industry. Governance is shifting from theoretical discussion to urgent implementation; policymakers must ensure that existing safeguards are adopted at scale. Regulatory approaches differ globally: the EU uses a hard‑law AI Act, Japan and the US rely on sector‑specific or principle‑based rules, but all need to update existing laws (privacy, copyright, sector regulations) to cover AI. Liability concerns are rising as AI is embedded in many sectors; current legal frameworks lack a clear standard of care for AI systems. A major trust deficit exists for the public, deployers, and regulators; an outcomes‑based marketplace of independent verification organizations (IVOs) is proposed to provide credible, up‑to‑date testing and certification. Responsibility for safety must be layered across developers (model training and safeguards), deployers (monitoring and risk assessment), and ecosystem monitors/independent auditors (verification and societal resilience). Incentives for independent audits are weak; potential carrots include insurance underwriting discounts, public‑procurement requirements, and market advantage (e.g., a “seal of approval” similar to UL or AS9100). Current evaluation benchmarks are narrow, quickly become outdated, and fail to capture stochastic, multi‑turn, real‑world risk; better, dynamic testing tools are needed. Lessons from other industries (automotive safety ratings, aerospace AS9100, insurance underwriting standards) can inform the development of AI safety standards and verification processes.
Resolutions and action items
Proposal to create a government‑authorized marketplace of Independent Verification Organizations (IVOs) that conduct outcomes‑based testing and issue verification seals. Encourage regulators and insurers to tie compliance with IVO verification to liability standards, insurance premiums, and eligibility for public procurement contracts. Call for the development of sector‑specific safety standards that combine hard law, soft law, and voluntary safety frameworks, with periodic updates to keep pace with AI advances. Suggest that AI labs increase transparency and share safety data with external auditors and the broader community to reduce information asymmetry.
Unresolved issues
How to design and fund economically viable incentives (insurance, procurement, regulatory mandates) that make independent audits attractive to AI developers. What concrete, industry‑wide procedural standards (analogous to AS9100) should look like for AI systems and how they will be enforced. How to define a universally accepted “standard of care” for AI deployments across diverse sectors and jurisdictions. Methods for creating and maintaining up‑to‑date evaluation benchmarks that capture stochastic, multi‑turn interactions and real‑world risk profiles. The balance between self‑regulation by frontier labs and external verification, especially given the rapid evolution of capabilities. How democratic debate will determine acceptable risk thresholds (e.g., safety levels for autonomous vehicles) and translate them into enforceable metrics.
Suggested compromises
Adopt a layered, defense‑in‑depth responsibility model that does not place the entire burden on any single actor but distributes duties among developers, deployers, and independent auditors. Combine hard‑law requirements with soft‑law standards and voluntary safety frameworks to allow flexibility while ensuring baseline safety. Use market mechanisms (insurance discounts, procurement preferences, consumer‑facing seals) as carrots to encourage voluntary verification, rather than relying solely on punitive regulation. Implement a hybrid approach where sector‑specific regulations are complemented by overarching outcomes‑based standards that can be adapted as technology evolves.
Thought Provoking Comments
Technical safeguards are getting much harder to evade – jailbreak attempts that used to take minutes now take seven to ten hours, and many models are becoming resistant to classic prompt‑jailbreak tricks.
Highlights concrete progress in AI safety that counters the dominant narrative of only bad news, showing that engineering advances can meaningfully reduce risk.
Shifted the conversation from a purely pessimistic view to a more balanced one, prompting Gregory to contrast the Bletchley‑Park optimism with current realities and setting up the later discussion on how to sustain and scale these gains.
Speaker: Stephen Clare
Even though safeguards are improving, they remain vulnerable and their adoption is uneven; the real governance challenge is how to assure broader compliance and what to do when there is a lack of compliance.
Identifies the critical gap between technical capability and policy implementation, moving the focus from technical fixes to systemic governance issues.
Served as a turning point that moved the dialogue toward policy‑level solutions, leading directly to Hiroki’s comparison of regulatory approaches and Shana’s proposal for independent verification.
Speaker: Stephen Clare
The key question isn’t whether to regulate AI at all, but how to update existing hard‑law frameworks (privacy, copyright, sector‑specific regulations) and whether additional AI‑specific rules are needed.
Reframes the regulatory debate by positioning AI within the broader legal ecosystem, challenging the simplistic EU‑vs‑US dichotomy.
Redirected the discussion from creating brand‑new AI statutes to integrating AI considerations into current laws, prompting deeper comparison of sector‑specific versus holistic regulation and influencing Shana’s focus on outcomes‑based standards.
Speaker: Hiroki Hibuka
Japan’s culture favors setting rules in advance and strong compliance, but companies struggle with self‑governance and explaining decisions; we need a more agile, multi‑stakeholder soft‑law approach.
Provides a nuanced cultural perspective that highlights why a single regulatory model may not fit all jurisdictions, emphasizing the need for flexible, collaborative governance.
Enriched the conversation with a concrete example of how national context shapes AI policy, leading Gregory to ask about incentives for auditors and prompting Shana to discuss market‑driven verification mechanisms.
Speaker: Hiroki Hibuka
The core problem is a trust gap among the public, deployers, regulators, and developers; we need an outcomes‑based marketplace of government‑authorized independent verification organizations (IVOs) to certify that AI meets defined safety, privacy, and controllability standards.
Introduces a novel governance model that moves beyond command‑and‑control to a dynamic, market‑driven certification system, addressing both technical and institutional challenges.
Opened a new line of inquiry about how such a marketplace could function, leading to a detailed discussion on liability, insurance, and economic incentives, and influencing later comments from Hiroki and Stephen about audit costs and evaluation gaps.
Speaker: Shana Mansbach
Verification would create a rebuttal presumption of having met a heightened standard of care, giving developers and deployers a clear legal shield before any harm occurs.
Connects the technical verification concept to concrete legal benefits, showing how it could reshape liability and risk management in practice.
Prompted Gregory to explore the interplay between liability law and insurance, and spurred Hiroki to discuss financial incentives, thereby deepening the conversation about practical implementation.
Speaker: Shana Mansbach
Current evaluations are narrow and quickly become outdated; we lack robust, real‑world risk assessments, which is a major gap in our safety toolkit.
Critically assesses the state of AI auditing tools, highlighting that even the best‑available benchmarks may not capture emerging risks, thus questioning the reliability of proposed verification schemes.
Triggered Shana and Stephen to discuss the need for continuous, incentive‑driven improvement of testing methods, reinforcing the argument for a competitive IVO marketplace.
Speaker: Stephen Clare
Insurance can be a powerful carrot: insurers will only underwrite AI‑enabled products that have been verified, similar to how UL certification drives market adoption in other industries.
Identifies a concrete economic lever that could drive widespread adoption of verification, linking governance to market dynamics.
Shifted the discussion toward real‑world enforcement mechanisms, leading Gregory to draw parallels with aerospace AS9100 certification and reinforcing the feasibility of the proposed model.
Speaker: Shana Mansbach
Public procurement can serve as an incentive: if governments only buy AI systems that have passed verification, developers will have a strong motivation to obtain certification.
Adds another practical policy tool that leverages government buying power to accelerate adoption of safety standards.
Expanded the set of suggested incentives beyond liability and insurance, reinforcing the multi‑pronged approach advocated by Shana and highlighting how different levers can work together.
Speaker: Hiroki Hibuka
Overall Assessment

The discussion pivoted around three core insights: (1) tangible technical progress in safeguards, (2) the persistent gap between those safeguards and their consistent, enforceable adoption, and (3) innovative governance proposals that blend legal, economic, and market mechanisms. Stephen’s acknowledgment of both advances and shortcomings set the stage for Hiroki’s reframing of regulation as an integration problem, while Shana’s introduction of an outcomes‑based verification marketplace offered a concrete solution that resonated with the panel. Subsequent comments about liability, insurance, and public procurement turned abstract ideas into actionable incentives, steering the conversation from diagnosis to potential implementation. Collectively, these thought‑provoking remarks reshaped the dialogue from a bleak outlook on AI risk to a nuanced roadmap for building trust and accountability across stakeholders.

Follow-up Questions
How can consensus on AI risks and interventions be transformed into accepted best‑practice standards and procedural certifications for independent evaluators?
Moving from informal agreement to formal standards (like AS9100) is needed for widespread industry adoption and to give customers confidence in AI safety.
Speaker: Gregory C. Allen
What financial incentives can be created to motivate companies to undergo independent AI audits and verification?
Without clear economic benefits, firms may view audits as costly and optional; incentives such as regulatory mandates, insurance discounts, or procurement requirements could drive participation.
Speaker: Gregory C. Allen, Hiroki Hibuka
How can liability and insurance frameworks be aligned with AI verification to establish a clear standard of care for developers and deployers?
A defined standard of care linked to verification could reduce legal uncertainty, lower insurance premiums, and encourage responsible AI deployment.
Speaker: Shana Mansbach
What robust, up‑to‑date evaluation methodologies can capture the stochastic, multi‑turn, and real‑world risk profile of AI systems?
Current benchmarks are narrow and quickly become obsolete, limiting the effectiveness of audits and risk assessments.
Speaker: Stephen Clare, Shana Mansbach
How can we design benchmark and regulatory methods for abstract values such as privacy, transparency, and fairness where no clear standards currently exist?
Absent benchmark standards, regulators struggle to assess compliance across jurisdictions, hindering consistent governance.
Speaker: Hiroki Hibuka
What mechanisms can ensure consistent adoption of technical safeguards across the entire AI industry, not just frontier developers?
Safeguards are unevenly applied, creating systemic risk; a strategy is needed to promote uniform implementation.
Speaker: Stephen Clare
Can third‑party verification organizations sustain expertise as AI technology evolves, or will skill atrophy undermine their effectiveness?
Ensuring that external auditors keep pace with rapid AI advances is crucial for long‑term credibility of independent verification.
Speaker: Stephen Clare
What lessons from other industries (automotive, aerospace, finance) can inform the creation of independent AI verification and safety‑rating systems?
Existing safety‑rating frameworks (e.g., NHTSA, UL) may provide models for structuring AI governance and certification.
Speaker: Hiroki Hibuka
How can public procurement be leveraged as an incentive for AI developers to obtain safety verification?
Government purchasing decisions that require verified AI could create market pressure for compliance.
Speaker: Hiroki Hibuka
How should societies define acceptable safety thresholds for autonomous AI systems (e.g., comparing AI‑induced fatalities to human‑driver rates)?
Establishing democratic, quantifiable safety targets is necessary for policy decisions on autonomous technologies.
Speaker: Hiroki Hibuka
How can a marketplace of independent verification organizations be designed to scale cost‑effectively across diverse AI product sizes and risk levels?
A right‑sized, tiered audit system would avoid one‑size‑fits‑all costs and make verification accessible to both large models and niche tools.
Speaker: Shana Mansbach
What concrete steps are needed to operationalize a layered ‘defense‑in‑depth’ approach that allocates safety responsibilities among developers, deployers, and societal actors?
Clarifying duties at each layer is essential to avoid gaps where safeguards fail or are not applied.
Speaker: Stephen Clare
How can we mitigate perverse incentives where companies might avoid audits to escape liability, ensuring they do not remain willfully blind to risks?
Mechanisms are needed to prevent firms from skipping verification to dodge legal exposure, preserving the integrity of the oversight system.
Speaker: Gregory C. Allen (referencing Shana)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Secure Talk Using AI to Protect Global Communications & Privacy

Secure Talk Using AI to Protect Global Communications & Privacy

Session at a glanceSummary, keypoints, and speakers overview

Summary

The event opened with Wish Gurmukh Dev welcoming attendees and outlining Tanla Platforms’ three core principles-innovation, collaboration and impact-which underpin its Wisely.ai agentic AI platform aimed at combating spam and scams globally [4-9]. A fireside chat was introduced featuring Sanjay Kapoor, a veteran telecom leader and Tanla board member, and Vikram Sinha, CEO of Indosat Ooredoo Hutchison, who is driving the telco’s transformation into an AI-focused company [14-20].


Sanjay highlighted the rapid digitisation of the global economy, noting that digital payments are projected to exceed $14 trillion by 2027 and that both India and Indonesia face escalating cyber-crime losses amounting to billions of dollars each year [25-31]. Vikram recounted a 2024 MasterCard advisory board meeting that revealed $5 billion in losses for Indonesians and that 65 % of the population experience spam or scam weekly, prompting Indosat to prioritize protecting its 100 million customers [46-57]. He explained that Indosat chose Tanla as a strategic partner rather than a vendor, integrating Wisely.ai into its operations, which led to a 9 % ARPU growth versus a 3 % industry average and a reduction in churn from 3.6-3.7 % to 1.6 % within a quarter [86-92].


When asked about return on investment, Vikram said measurable financial benefits appeared within six to eight months, emphasizing that AI-driven protection across voice and WhatsApp channels is essential for maintaining customer trust and business viability [96-106]. In the subsequent panel, Anshuman Kar emphasized that scams cost over $1 trillion globally, with SMS accounting for 70 % of fraud in India and 65 billion SMS messages sent monthly, and cited Wisely.ai’s protection of approximately $500 million in estimated losses in its first six months [153-164].


Panelists including Ratan Kumar Kesh and Neha Mahatme discussed challenges such as senior-citizen vulnerability, account-mule schemes, limited data visibility, and the rapid evolution of offensive AI that outpaces defensive models [191-214][236-244]. Bipin Preet Singh added that fragmented fraud-prevention efforts across fintech and banking hinder effectiveness, calling for a national digital payments intelligence platform and greater data sharing, as advocated by the RBI, to enable coordinated detection of scams [255-306][341-342]. Anshuman concluded that while attack surfaces are increasingly interconnected, current defenses remain fragmented, and a coordinated, real-time intelligence architecture across telecom, finance and regulators is required to safeguard the digital economy [345-355].


The session closed with Robert J. Ravi describing BSNL’s AI initiatives for network optimisation and customer experience, reinforcing the view that AI must be embedded across infrastructure to achieve comprehensive protection [372-383]. Overall, the discussion underscored that collaborative AI solutions, supported by cross-industry data sharing and regulatory coordination, are critical to transforming digital trust from a promise into an operational infrastructure [136-140][402].


Keypoints

Major discussion points


The scale of digital payments and the escalating fraud problem demand AI-driven trust.


Sanjay highlighted the rapid digitisation of the global economy, the $14 trillion digital-payments forecast and the billions lost to AI-powered scams, framing trust as a systemic risk that must be addressed at the board level [25-31].


Indosat’s partnership with Tanla’s Wisely.ai platform uses AI to protect millions and shows early business impact.


Vikram described how a shocking 2024 scam-loss report ( $5 bn lost, 65 % of Indonesians hit weekly ) triggered the decision to partner with Tanla, leading to a full-stack AI factory, GPU-cluster deployment and real-time protection [46-53]; he then cited concrete results – 9 % revenue growth vs. 3 % industry, churn falling from 3.7 % to 1.6 % – as proof of the platform’s value [86-92].


Demonstrating ROI is essential for scaling AI investments.


Sanjay asked how the initiative moves from a “customer-complaint” issue to a board-level ROI discussion [94-95]; Vikram responded that within six-to-eight months the AI solution delivered measurable P&L benefits (higher ARPU, lower churn) and reinforced the strategic shift from pure connectivity to “peace of mind” for customers [96-102].


Panelists across telecom, banking, fintech and payments stress the fragmented nature of fraud detection and call for integrated, data-shared ecosystems.


Anshuman set the stage by quantifying the fraud magnitude and the proliferation of SMS/OTT channels [153-169]; Ratan Kumar Kesh explained how banks use transaction-pattern analytics but still face “mule” account abuse [191-210]; Neha pointed out that behavioral-journey data, limited visibility and the faster evolution of offensive AI hinder prevention [236-244]; Bipin highlighted the need for a national-level data-intelligence authority and shared how in-house models outperform generic ones, yet siloed efforts limit impact [255-267][280-287].


BSNL’s vision extends AI beyond fraud to network optimisation, edge computing and federated learning for inclusive rural services.


Robert Ravi described AI-driven network diagnostics, the AI-Vani customer-experience system, and plans for edge data-centres and federated learning that keep user data private while improving service quality, especially in underserved regions [372-383][388-395][398-401].


Overall purpose / goal


The event was designed to showcase how AI can transform digital trust: first by presenting Tanla’s Wisely.ai solution and its partnership with Indosat, then by surfacing sector-wide challenges through a multi-stakeholder panel, and finally by outlining a broader, collaborative roadmap (including telecom, finance and regulatory bodies) for a secure, inclusive digital economy.


Overall tone and its evolution


– The opening remarks are formal and celebratory, welcoming guests and emphasizing innovation [1-4].


– The conversation quickly shifts to a serious, urgent tone as Sanjay and Vikram discuss the massive fraud losses and the need for decisive leadership [25-31][46-53].


– As the dialogue moves to partnership details and ROI, the tone becomes optimistic and solution-focused, highlighting measurable wins [86-92][96-102].


– The panel discussion adopts a collaborative yet critical tone, acknowledging fragmented defenses and calling for ecosystem-wide data sharing [153-169][191-210][236-244][255-267].


– The closing remarks from BSNL and the host return to a hopeful, visionary tone, emphasizing future-ready AI infrastructure and inclusive rural outreach [372-383][388-395][398-401].


Overall, the discussion progresses from celebration to problem-identification, through evidence-based solutioning, to a collective call for coordinated action, ending on an aspirational note about building a trustworthy digital future.


Speakers

Wish Gurmukh Dev – Host/MC representing Tanla Platforms and its group companies Carex and Value First [S1].


A. Robert J. Ravi – Chairman and Managing Director, Bharat Sanchar Nigam Limited (BSNL); telecom leader with over three decades of service, gold-medalist in Electronics & Communication Engineering [S4].


Vikram Sinha – President, Director and Chief Executive Officer of Indosat Ooredoo Hutchison (formerly Indosat Orido Hutchison) [S5].


Ratan Kumar Kesh – Executive Director and Chief Operating Officer, Bandhan Bank [S6].


Anshuman Kar – Chief Customer Success Officer (formerly Chief Growth) at Tanla Platforms; moderator of the panel discussion [transcript].


Neha Gutma Mahatme – Director, Amazon Pay India [S9].


Audience – General audience members; no specific titles mentioned.


Sanjay Kapoor – Host of the fireside chat; former CEO of Bharti Airtel, current Board Member of Tanla Platforms, distinguished global telecom leader [S13].


Bipin Preet Singh – Founder and CEO of MobiQuik, leading fintech entrepreneur; also a customer of Tanla Platforms [S15].


Additional speakers:


Uday – Tanla partner referenced as a strategic partner and collaborator on the AI solution [transcript].


Vipin – Panelist addressed during the discussion on ecosystem responsibility [transcript].


Pratham – Participant addressed at the start of the panel Q&A [transcript].


Ashutosh – Mentioned by the moderator while introducing the panel [transcript].


Ruthen – Person addressed by the moderator regarding national-scale responsibility for citizen protection [transcript].


Full session reportComprehensive analysis and detailed insights

The evening opened with Wish Gurmukh Dev thanking the audience and welcoming them on behalf of Tanla Platforms and its group companies Carex and Value First. He outlined Tanla’s three enduring principles-innovation, collaboration and impact-and introduced Wisely.ai as an agentic AI platform designed to identify, prevent, eliminate and record spam and scam activity worldwide, already operating in Indonesia, India and with major Indian banks to protect millions of users in real time [1-4][5-7].


A fireside chat was then announced. Sanjay Kapoor, a four-decade veteran of global telecom, former CEO of Bharti Airtel and current Tanla board member, was introduced as the visionary steering the company toward a world-class AI-driven communications enterprise. The guest speaker was Vikram Sinha, President, Director and CEO of Indosat Oridu Hutchinson, who has overseen the telco’s transformation from a traditional network operator into an AI-focused technology company committed to “AI for all” and digital inclusion [8-12].


Sanjay set the strategic backdrop by noting that the global economy is digitising at unprecedented speed. He cited projected digital payments of over US $14 trillion annually by 2027, more than five billion people online, and the addition of nearly two billion new internet users in South and Southeast Asia. He highlighted India’s digital economy expected to surpass US $1 trillion by 2030 and Indonesia’s GMV already exceeding US $100 billion, while warning that this scale brings rising cyber-crime, digital fraud and organised scam operations that cost billions each year and constitute a systemic trust risk that must be addressed at the board level [15-22][25-31][S1].


Vikram responded with a concrete illustration from a 2024 MasterCard advisory-board meeting in London, where the Global Anti-Scam Association reported that Indonesians had lost US $5 billion that year, with 65 % of the population experiencing spam or scam on a weekly basis. The victims were predominantly middle-income and lower-income women and elderly women, an “eye-opening” data point that compelled Indosat, a 58-year-old operator likened to Indonesia’s BSNL, to move beyond merely connecting customers and to protect its 100 million subscriber base [30-38][46-57].


Recognising the urgency, Indosat chose Tanla not as a simple vendor but as a strategic partner. Vikram emphasized, “We don’t need a vendor, we want a partner,” underscoring the need for co-creation rather than a product-only relationship [66-68]. Tanla’s GPU-cluster, including GB200 H100 units, was deployed to train bespoke models, enabling the detection of close to two billion spam instances and the flagging of 2.3 million scammers in real time [66-68][70-73][80-84][122-127].


The business impact of the Wisely.ai integration was evident in Indosat’s quarterly results. ARPU grew 9 %, outpacing the industry average of 3 %, while churn among serious-base customers (tenure > 90 days) fell from 3.6-3.7 % to 1.6 % within six to eight months of deployment, demonstrating a clear ROI and reinforcing the shift from pure connectivity to providing “peace of mind” for customers [86-92][96-102].


Vikram also highlighted his hands-on leadership, noting that he spends five days each month in the field, visiting villages and new-capital outlets to ensure the solution meets on-ground needs [100-104].


Following the fireside chat, Anshuman Kar opened the panel by quantifying the fraud problem: global scam losses now exceed US $1 trillion, and SMS accounts for roughly 70 % of Indian fraud, with 65 billion SMS and 15 billion OTT messages sent each month in India. He cited Wisely.ai’s early success, estimating that the platform prevented about US $500 million in losses within its first six months of launch [153-164][S1].


Panelists explored why existing defences remain fragmented. Anshuman stressed the need for “co-ordinated, real-time intelligence” across telcos, banks and fintechs [170-176][349-353]. Ratan Kumar Kesh described how banks use AI-driven rule-engines to flag out-of-routine transactions, yet warned that “mule” accounts-legitimate-looking accounts rented to launder money-remain a major, under-addressed threat, especially for senior citizens [191-210][191-199]. He also recounted a pick-pocket anecdote illustrating law-enforcement gaps that leave scammers untraced even when multiple agencies possess relevant data [310-322]. Neha Gutmā Mahatme added that fraud is a behavioural journey that begins long before a payment, and that limited visibility into external social-engineering data, combined with the faster evolution of offensive AI, hampers defensive models constrained by privacy and regulatory limits [236-244][240-242]. Bipin Preet Singh reinforced the systemic nature of the problem, noting that 99 % of the scams reported by his fintech’s customers involve money stolen from other banks, and called for a national Digital Payments Intelligence Authority to enable ecosystem-wide data sharing [255-259][279-283]. An audience member questioned whether the existing Digital Payments Intelligence Platform already provides sufficient integration, highlighting ongoing gaps despite its launch [332-340].


The governance discussion returned to responsibility for protecting citizens at national scale. Ratan highlighted law-enforcement gaps, while Bipin suggested that the RBI-led Digital Payments Intelligence Authority could assume a coordinating role, acknowledging that effective implementation will require both regulatory leadership and industry participation [279-283][310-322].


A separate perspective was offered by A. Robert J. Ravi, Chairman and Managing Director of BSNL. He described AI-driven network diagnostics that pinpoint complaint hotspots, the AI-Vani system that routes callers to the appropriate agent, and a “recharge expert” AI that mitigates spam on WhatsApp. Looking ahead, Ravi outlined plans for edge data centres and federated-learning models that keep user data on-device while still benefiting from collective training, thereby extending AI-based protection to rural users without compromising privacy [372-383][388-395][398-401].


In synthesis, the participants reached broad consensus that AI-powered anti-fraud solutions such as Wisely.ai are already delivering real-time protection and measurable financial returns (e.g., ARPU uplift, churn reduction, $500 m loss avoidance). They agreed that scaling these benefits requires a shift from vendor-type relationships to strategic partnerships and, crucially, an ecosystem-wide data-sharing architecture that bridges telco, banking, fintech and regulator signals. The panel highlighted persistent challenges: the arms-race between offensive and defensive AI, limited external behavioural data, the need to protect vulnerable groups (senior citizens, low-income women), and the difficulty of balancing security with customer-experience friction. Future directions identified include federated learning, edge AI, and coordinated national-level intelligence platforms that respect privacy while delivering rapid, cross-border fraud detection [345-355][S33][S39].


Overall, the event demonstrated that AI-enabled anti-fraud solutions are moving from pilot projects to measurable business outcomes, but scaling them requires ecosystem-wide data sharing, coordinated regulation, and continued innovation to stay ahead of increasingly sophisticated scammers [136-140][402].


Session transcriptComplete transcript of the session
Wish Gurmukh Dev

Thank you everyone. Thank you very much. Thank you. Once again, ladies and gentlemen, a very good evening and welcome to what promises to be a truly memorable evening. On behalf of Tanla Platforms and our group companies, Carex and Value First, I extend a warm and a hearty welcome to our enterprise and telco customers, our global strategic partners, board members, and our incredible team. At the core of Tanla’s DNA are three enduring principles, innovation, collaboration, and impact. For three decades, this DNA has driven us to build innovation at scale, touching billions of users, and what excites us the most is the greenfield landscape that we have always explored. Along with it, it has helped us work in close partnership with our esteemed customers, our regulatory ecosystem, our telco partners, and the broader ecosystem to ensure that every step has been collaborative and always ahead of the curve.

And lastly, it has helped us ensure that every innovation we pioneer creates a tangible and a measurable impact in the world. And it’s these principles that shaped Wisely .ai, our agentic AI platform built to identify, prevent, eliminate, and bring to the books the growing menace of spam and scam, not just in India, but world over. Today, Wisely .ai is live and delivering real impact at Indosat in Indonesia, at BSNL in India, and with our leading banks in India, safeguarding millions of users in real time every single day. Tonight, we don’t just want to talk about it, we want to bring it to life. So without further ado, let’s bring the story to life. Please welcome our guest for the fireside chat.

A fireside chat from the theme Vision to Impact, driving customer engagement with AI -driven trust. It gives me immense pleasure to invite our host for the fireside chat, who has spent nearly four decades as a distinguished global telecom leader, leading one of India’s most iconic companies as CEO of Bharti Airtel, shaping global mobile policy as a key voice on the board and executive committee of GSMA, and building a legacy that stretches from telecom to digital services and beyond. We are honored to have him as our board member at Tanla Platforms, where his global perspective and vision, continue to shape our journey towards becoming a world -class AI -driven communications enterprise. Ladies and gentlemen, please put your hands together for Mr.

Sanjay Kapoor. A guest on the fireside chat is a seasoned global telecom leader who has not only defined the arc of the industry, but also built it. A career spanning across some of the most dynamic markets across Asia and Africa, he has held senior leadership roles from being CEO of Bharti Airtel Africa and Managing Director of Bharti Airtel Seychelles to serving as a CEO of Orido Group in Maldives and Director -CEO of Indosat Orido before taking on his current role. Today, he leads one of Indonesia’s most transformative telcos, Indosat Orido Hutchison, driving its evolution from a telco into an AI tech co, anchored by a bold vision. of AI for all and a deep commitment to digital inclusion and security for every Indonesian.

Please join me in welcoming the President, Director and CEO of Indosat Oridu Hutchison, Mr. Vikram Sinha. I hand over the baton to our esteemed host, Sanjay, to take it forward and we all look forward to it. Thank you.

Sanjay Kapoor

Thank you. Thank you for your kind words and welcome Vikram. Before we really get down to asking a few questions from the person who is going to be on the firing range for today’s chat, let me set up a pollute for what we are going to be discussing. We all know that the global economy is rapidly digitizing, but the trust has become its most crucial foundation. Digital payments are expected to surpass $14 trillion annually by 2027, with more than 5 billion people online. In South and Southeast Asia, nearly 2 billion people are coming online at a record speed, driven by affordable smartphones, low -cost data, and national digital infrastructure initiatives. India’s digital economy is projected to cross $1 trillion by 2030, while Indonesia’s has already exceeded $100 billion in GMV.

Yet, this scale brings vulnerabilities. Both markets are facing rising cybercrime, digital fraud, and organized scam operations, causing billions of dollars worth of losses each year. Globally consumers have lost over a trillion US dollars in scams Today’s fraud is no longer isolated phishing It is AI powered, it is cross border, it is automated and industrial in scale This is not just a consumer experience issue anymore It is an economic issue, it is a systemic risk issue, a trust issue and it demands great leadership to combat it It’s my privilege to welcome Vikram who I have known for years and years We worked together at ATL2 He is the President, Director, CEO of Indosat, Orido, Hutchison and is serving over 100 million customers in that country Under his leadership, Indosat has accelerated its transformation into a digital first AI technology access across both urban and rural communities.

Indosat has evolved as an AI tech company and partnered with Tanla, guided by a powerful vision, AI for all. And that’s a very powerful statement that they make. So Vikram, welcome here. And we’ll get down to some sharing of insights and questions to you. We’ve all known about digital fraud becoming more intense. We all know it’s eroding trust. And you as a CEO and with your lens, when did you really move it from being a customer complaint issue to a board level issue? Because there must

Vikram Sinha

First of all, again, it’s an absolute honor and privilege, especially having it with Sanjay. You know, I have a long learning history. So thank you. Thank you, Sanjay. And it’s an absolute honor. I think coming back to your question, let me share with all of you a true story. I’m also on the advisory board of MasterCard. I still remember early 2024, one of the board meeting in London, advisory board meeting, the Asia SCAM and the GASA, Global Anti -Scam Association, presented the data and I was blown off. That report shows that in 2024 itself, 5 billion US dollar Indonesians have lost. What touched me is these are all middle income, lower income women, elderly women. This was an eye -opening data for me, number one.

Number two, the next key highlight, Sanjay, it was every Indonesian, 65 % of the Indonesians are facing spam or scam on a weekly basis. So that itself was a trigger for me that Indosat being such an iconic brand. Let me tell you, Indosat is like BSNL of Indonesia. 58 year old company, first company to connect Indonesia to the world. It became Indosat Oridu Hachisan. But people have lot of expectation. So as a CEO, that was the trigger Sanjay that our role is not only to connect. Our role is to also protect my 100 million customer. And that is where I got very serious that we need to solve this problem for our 100 million customer.

Sanjay Kapoor

Yeah, I mean, I think every board worth its while today. gets intimidated by this problem that is hitting them. And I’m so glad to hear from you that your board is fully aligned with you on this cause and you’ve been able to convince them to say, I really want to go ahead making some serious investments and changes because of this. And my leading question from here is that I just said that scammers are using AI, voice cloning, automated phishing campaigns, synthetic identities. How did you think of AI in the middle of all this? You know, because it is a new technology. People are still surfacing where they’re headed. But you’ve picked it up as the foundational infrastructure for protecting, you know, this at a national scale.

So tell us about that.

Vikram Sinha

So let me put it this way. I’m a very strong believer of fake it till you make it. So I started talking about AI two years back and I had very little understanding. Of what AI will do. I’m telling you a true story. I was in. many people still struggle with that today. But let me tell you fast forward. I was invited by, I think, Sundar and Google Circle. There were 15 CEOs. This was around a year back. I was on a breakfast table. The joke started by saying that AI is everywhere other than P &L. This is how the breakfast started. But within an hour, I understood companies, countries who have been all in and ahead of the curve and who are solving real problem, they have started seeing value.

So for me, if I have to solve a real problem at scale, and that is where we said that if these scammers are using AI, we have heard many stories and the way they clone their voice, you know, you will be so scared what all are happening. Then we were very clear, we want a partner, we don’t need a vendor We want a partner who can work with us and use AI to solve this real problem And I have to say Sanjay, I think you are on their board, Uday is here We work with 96 vendors, we categorize among them 20 strategic partners But there are 4 or 5 where I invest time, which becomes very strategic for us Because I was trying to solve a real problem, I met Uday and then our commitment was aligned And that is how we wanted to make sure that not only we solve it, we do it in a way which should become a global case study

Sanjay Kapoor

And with an AI -led model that you have put in place What are the benefits that are accruing to you at a customer level to begin with?

Vikram Sinha

Yes, because as I said, you know, there’s a lot of AI as a toy. We are very clear what we don’t want to do. So now that this is my first showcase, I put it on my quarterly result. And then you have been a CEO, you have reported quarterly after quarterly, until unless you have a substance, you don’t put any example on your investor deck. So if you look at my last quarterly investor deck, we have put it there that with Tanla platform, three things I’ll highlight. You know, quarter four, ARPU grew for the industry 3%, we grew 9%. Number one. Number two, our churn. Our churn, because markets are mature. You know, you don’t have to be over -obsessed with gross assets.

And you know, you have to deliver experience. our churn for serious base greater than 90 days from a level of 3 .6, 3 .7 have come down to 1 .6. And this is just a beginning, Sanjay, you know, because the model is getting trained. And I’m very confident that we will see much more value going forward.

Sanjay Kapoor

And, you know, from here, being an ex -CEO and being a board member for very many years, now, this question of ROI always haunts every board to say, you’re making an investment in this, it seems to be doing good for your customers. What about the ROI on what you’ve done?

Vikram Sinha

You know, this is, again, and it’s a fair question, you know, investment on AI is not small. So until and unless you see the impact of AI on your P &L, it will not be scalable. So. So very clearly, within six, eight months, we have seen whether it is ARPU, whether it is churn. And the most important thing is, Sanjay, where we lost, if I go back to my last two decades of experience, we as Telco, we were very inward looking. The biggest thing which we missed was focusing on customer love. I think this is very fundamental. This problem which I am solving is so, so fundamental that the role of Telco is not only to connect, it is also to give peace of mind.

Protection is a big statement. And the channel which is getting used is voice, WhatsApp. So you need to solve it for your customer. Otherwise, you have no business.

Sanjay Kapoor

I mean, I hear you and you are a passionate CEO who believes in keeping his ears in and the ears out. And I see you

Vikram Sinha

I had no idea about Tanla or anything. In fact, first time when Uday came to meet me, I thought it was a startup. I’m again being very honest. But then somebody told me they are solving for banks in India. I think we have to understand if somebody is solving this problem for banks. Because, you know, spam is one thing. But the bigger issue is scam. And these scams which happen, these are small tickets. These are like 50 rupees, 100 rupees, 500 rupees. These are like 50 rupees. and it goes up to as high as $10 ,000. You know, I’m just giving you example. But then I realized that they have done some good work in India. But I have to say, Sanjay, you know, where it moved from vendor to strategic partnership, we were also very keen that my team want to do a bit of an engineering with them.

So we have a full stack AI factory. We have our own GPU cluster. I think there are a few things where we have done before India. Our cluster of GB200, H100 was live. So I told Uday, let’s train the data because see the power of compute and GPU. You know, we all talk about TikTok. TikTok was all designed on GPU. They don’t even use CPU. So if you have to be ahead of the curve, today on that platform, let me give you two data points. Close to 2 billion spam instances. Scam. Clearly threat intelligence protected. 2 .3 million scammers flagged and customers are getting real time as you know we have grown up on the Airtel values I spent 5 days every month in the market I was in a village I was going to the new capital of Indonesia which is Far Flung on my way I stopped my car I saw an outlet I asked him my language still is not good in their language I asked him what do you like about Indosat IM3 he said this Sat Spam Spam Scam it’s solving real problem so again we just launched it on Whatsapp channel also I think Whatsapp is one of the challenge which is maximum getting misused so we have to continuously evolve and this is where Tanla has committed that we will make sure we do it together and we do it properly

Sanjay Kapoor

excellent you know these fireside chats have a time limitation so we have to keep it up and my stopwatch is telling me we’ve exceeded time already. So let me wind it off. Vikram, first and foremost, thank you for these insights. What stands out from our conversation today is how digital trust moves from concept to reality, which is what you’ve just described. When over 100 million subscribers are protected by AI, when billions of communications are analyzed in real time, and when millions of malicious actors are stopped within the ecosystem, trust is no longer a promise. It becomes an infrastructure. And what Indosat has shown through AI for All is that inclusion and protection are not trade -offs.

They must advance together. So thank you for your insights, Vikram, and it is a pleasure having you today.

Vikram Sinha

Thank you, Sanjay. Thank you.

Wish Gurmukh Dev

Can I request both of you just pose for a picture, please? Thank you very much, Sanjay. Thank you very much, Vikram. Wow. Two global leaders, one who defined the yesteryears of telcos across the world, and the other who’s redefining and bending the arc to set the future of telecom leveraging AI. Thank you so much, Vikram and Sanjay, for this scintillating talk. Thank you very much. Our next session, ladies and gentlemen, is… going to be a panel discussion wherein we have Anshuman Kaur Chief Growth my apologies Chief Customer Success Officer of Tanla Platforms who is going to moderate the panel AI for Citizen Protection and Securing the Digital Economy May I request Anshuman to kindly come on to the podium please First of all panelists Mr.

Ratankesh Executive Director and Chief Operating Officer Bandhan Bank Bandhan Bank is one of the largest and the fastest growing banks in India with over 32 million customers served by the across 35 states He is leading multiple functions including technology, operations, customer experience and transformation functions second of our panelists Mr. Bipinpreet Singh founder and CEO MobiQuik a leading fintech entrepreneur at the forefront of India’s digital payments evolution Bipinpreet Singh has built MobiQuik into India’s largest digital wallet with over 180 million users please welcome Mr. Bipinpreet okay we’ll go ahead with the third panelist while we wait for Bipin our third panelist Ms. Nehaji Mahatme director Amazon Pay India a payments and fintech leader driving customer centric digital financial experiences at scale shaping how millions transact seamlessly and securely through the Amazon Pay India app please welcome Ms.

Nehaji Mahatme Ms. Neha Kavya can I request you to just check with Bipin please Maybe Anshuman. Yeah. Oh, Bipin is on his way. To Bipin, ladies and gentlemen, founder and CEO Moby Quick, leading fintech entrepreneur at the forefront of India’s digital payments evolution. Thank you.

Anshuman Kar

Good evening, everyone. As we just heard in this fireside chat, the problem is big. We just heard numbers of over… over $1 trillion being lost in the global economy because of scams and frauds. If you think about India in particular, SMS as a channel, almost 70 % of the scams originate from that channel. And as messaging itself has expanded into other OTT channels as well, and I’ll share some numbers, now 65 billion SMSes are spent monthly in India. Another 15 billion are sent monthly over OTT channels. So when you look at these numbers, it is clear that it is a channel, while it is important and critical, it’s only proliferating even further. So in that context, when you think about, and I joined Tanla relatively recently, compared to the three decades of history as a chief customer officer, and it has been a privilege to see the build and the deployment of WISD AI platform.

And it is an honor to have Vikram here. As a CEO, and there is nothing better to hear that validation directly from the customer. And you heard the impact that it’s having. on the end users in terms of protecting them from scams and spams. In fact, the estimations are within six months of launch, we have protected almost $500 million in estimated losses. And then as you think about where this takes us in the future, scams and scamsters are continuing to evolve. They’re not sitting idle. And so we have to stay a few steps ahead in the innovation curve. That becomes critical. And when they’re becoming more sophisticated, they’re becoming more personalized, and they’re actually probably also becoming more successful at times.

So tonight, I will not, before I get into solutions, I want to focus on the problem. Is the problem really getting better? Or is it getting worse? And why? So we have a distinguished panel here, and they all provide very different vantage points in the industry. We have banking, who sees the transaction risk and, frankly, a lot of regulatory accountability as well. You have fintech, who sees a lot of velocity and scale, and they also are obsessed with customer experience. And then you have platforms like Amazon Pay that have the commerce side, they have the payment side. So they see a lot of behavior signals across multiple parts of the platform. But from a citizen perspective, an average user, it’s a seamless journey.

They don’t operate in this individually. So that’s something that we will delve into. And as part of that, we would love to deep dive in terms of, how the key stakeholders in this ecosystem need to work to thwart this menace that is in front of us. So with that, I want to welcome our distinguished panelists. Thank you for joining us for this discussion. Thank you. So let me start off with you, Pratham. Recent Supreme Court judgment, a couple of weeks back, talked about, I think, 54 or 56 ,000 crores being lost to scams. In fact, they called it dacoity. I don’t think I’ve heard that term lately. I think it was around Chambal and all. I used to watch movies when I had heard that term.

But it is of that magnitude and scale. So the question, Pratham, is why is this still a problem? And what is really not working?

Ratan Kumar Kesh

mostly senior citizens and at times even IIT Bombay professors so that’s like the spectrum you can look at it they are being defrauded the second is a lot of customers are now able to open accounts in most of the banking companies they open banking accounts and those accounts are being utilized to siphon off the funds stolen from somewhere and getting routed so there are two parts of the problem in different sense the bigger trouble is the second one the second one is being done willingly with a country having 1 .5 billion population there are a lot of people who would be willing to open accounts and then across multiple banks the India stack makes it pretty simple to have an account onboarded very quickly in just about few minutes and then they go and rent that account and get a fee per month and that number and the lure of making easy money is so high and that’s why it’s so difficult and to me that’s not working So at one end, we celebrate India AI Summit, all the global leaders, heads of states, the big AI celebrities are coming all over here.

And we are talking about our countrymen who are actually defrauding poor senior citizens, and they have got hard -earned money of their entire life, and those are being siphoned off. So that’s very, very sad to see. So I think what is not working is the mindset. It’s not going to stop so soon. So it’s a bit of a philosophical response, but I’ll come to the bit more technical response a little later. The second is our customers getting defrauded using multiple things. That part I think it has got improved significantly because most of the banks now have very sophisticated rule engines. So now as banks like ours, we process. millions of transactions. Like I was just saying that my UPI volume, we are just an 11 -year -old bank.

My daily UPI volume is something like 60 lakhs per day. Now, the volume and velocity of transactions are very, very high. But the good part is that depending on the customer’s profile, the customer routine transaction pattern, we can identify an out -of -routine transaction. If a customer happens to withdraw 10 ,000 rupees from a particular ATM, generally, he or she withdraws from somewhere else, we can say it’s a non -routine transaction. Someone withdraws 10 ,000, suddenly withdraws 25 ,000, we know it’s a non -routine. Somebody never makes a rent payment, suddenly starts making a rent payment using one of the payment channels, we say it’s an out -of -routine. So once you have out -of -routine transactions, we are able to identify those.

Sometimes we prevent those transactions. Sometimes we go back, do enhanced due diligence, and then allow. So that part is working fine, and the tools are getting more and more mature. AI is helping us to build the algo a lot better. So that part is clearly working. So if you are seeing the numbers, the fact that the velocity and the volume has gone up in terms of percentage is coming down. But the mule part of it is very, very scary, which is customer account being utilized as a rent. You know, the other day, one of my employees from the fraud prevention team called up one of the fraudsters, saying that, you know, this is a senior citizen, you called so many times, you tried to defraud, why are you doing this?

So his response was that, can you tell me how much money you make a month? It must be 50 ,000, 70 ,000? I’ll give you double, you start giving me data. The fraudster is telling my employees, saying that, can you share with me more data? Don’t worry, I’ll not tell anybody, just give me data, I’ll give you 70 ,000, I can even pay you 1 ,50 ,000 per account. Now, there again, we are using a whole bunch of technology, including the algorithm in terms of the transaction monitoring algorithm to really prevent that. And I think Carix has developed one which we are implementing now, which is this anti -phishing tool. which has got some very interesting capabilities. If some of you are interested, you could talk to the Karik spokes.

I think that is quite interesting. They sit on the DLT platform. They scan that particular SMS, where is it originated from, some of the links that they provide in terms of collecting data, whether it is genuine or fake. They look at the keywords using some of the AI algo and then come back and use some of the techniques to stop preventing and sending the SMS. To the potential customer who could then get defrauded. So I think these are some of the things which are working. So largely that is what is the spectrum I would see, Ansuman. It is a long answer, but that is what it is.

Anshuman Kar

No, this is fantastic insight and it is obviously great. In fact, my parents are so scared they do not even use ATM cards because of the risk of being defrauded. In fact, they wait for me to come and they will do recharge on the phone, otherwise they are going to the stores to do it. Because this is becoming really scary and especially vulnerable populations, like senior citizens and all, are particularly exposed. let me ask let me turn it over to you Neha right you sit at the intersection you see AI patterns I’m sure Amazon uses AI all over you analyze behavior patterns and all across commerce across payments right but why are we not able to stop scams across the whole journey

Neha Gutma Mahatme

so I want to kind of talk about three four aspects that we’ve learned on why it is difficult so first of all I think GAMP is not at a transaction or a payment transaction level I think it is a behavioral journey it starts much before the payment really happens and that’s really where the fundamental issue is and that’s what I think as an industry we need to kind of solve for the second and if I kind of belabor on that point I think the social engineering happens much ahead it is not really when the transaction is happening or the payment is happening the deepfakes, the voiceovers the fake identities the layering of transactions, I think it’s all making it very difficult to stop scams at the point of the transaction.

The second point is the data side was limits visibility. So while as Amazon we have really good data internally on the platform but I think we miss the data of how these social engineering patterns are getting created outside of Amazon and that really prevents us. The third is the human psychology evolves faster than models. So while you can really build models, refine algorithms but can’t beat the offensive AI because the AI is being used on both sides. It’s not that the preventive AI is working, there is also offensive AI. And the offensive AI works unconstrained while the defensive AI has constraints of privacy, constraints of regulations, constraints of customer experience. And so I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s the reason why I think that’s benefits that you need to provide.

So I think that’s the third part and therefore the last part which is really the crux of the point is that AI is helping detect anomalies. It is not helping detect the malintent or the behavior and unless and until I think we solve the malintent or the behavior I think the scans, the

Anshuman Kar

It’s a fantastic response. And basically what you’re saying is no one institution is seeing the whole journey. You’re seeing pieces of it. But there’s a lot of parts that are interconnected. Let me bring Vipin into the conversation. Vipin, you see a lot of transaction velocity and scale. But you’re also focused on customer experience because that can become a friction point if you do go overboard in times to protect. How does AI come in? How do you calibrate your model and AI to balance that potential friction that potentially can impact your growth and legitimate customers versus protecting from fraud?

Bipin Preet Singh

Thank you. Thank you, first of all, Ashutosh and Tanla Solutions for inviting me here. It’s a privilege. We are also customers of Tanla, so happy customers. Thank you. I think when it comes to AI and the usage of AI for fraud, I want to give some perspective with respect to first the kind of fraud. So we operate in payments and financial systems. And I think what’s happened in the last 10 years is the financialization and digitalization of financials has happened at an exponential scale. And so many different entities have gotten interconnected, just like what you are saying, that a loophole in just one place is sufficient to create fraud and scam throughout the ecosystem. So one thing we have to be very clear that one entity cannot control scams.

It’s very, very difficult. Because like the experience that we have at MobiQuake is 99 % of whatever scams that our customers complain of are not the money. It’s not the money that they get stolen out of MobiQuake, but it is actually stolen out of some other bank. and come into MobiQuick. So we are the recipients and, you know, and we get the complaints that the money has come here and we need to take action, whether it is coming through UPI, coming through credit card fraud and all those things. Now, therefore, you know, the standards of education and the standards of 2FA, second factor authentication, they need to be there. Perhaps are not equally enforced. The awareness and the education is not equally enforced.

And that brings me to the second point, which is that, you know, the AI, the scams have also become very, very sophisticated. Right? In our company, and we are a fintech company, we employ so many smart people. There are people who have fallen fraud to scams where they got a WhatsApp from me asking to buy gift cards with my photo and people have bought it, you know, without trying to verify or seeing what the number is. On the WhatsApp thing, because, you know, this is… It’s becoming, with AI, it’s becoming harder and more sophisticated. You know, the modus operandi of the scamsters is becoming extremely smart. And they are getting very, very smart at understanding the profile of the customer.

It’s not, they don’t target everyone. They have a very clear idea on who is likely to fall for a scam. So, there is need, for example, and there is a, I think there is an effort going on at the RBI’s end. And the government said to create a intelligence body which will share data across the entire payments ecosystem. I think it’s called Digital Payments Intelligence Authority or something like that. And that is something that’s extremely important because until data sharing starts to happen at an India level scale, you cannot identify. I can identify patterns. I can identify a pattern which works for me. But, you know, the scamsters will get smarter because they will keep changing their MO outside of Mobiquit.

So it’s very very difficult to keep adapting to that. So there is a national level initiative that is required. Third thing which I want to say is on the LEA front. I think the almost the entire country all the police everyone knows where the scams are. And this I am not able to understand why no action gets taken. It is the same places. It is the same origins. Right. But somehow no action gets taken. And I think until there is fear of law until enforcement has happened that that which will fraud you know payments fraud will come. At our end you know what we are doing is we are creating our own in -house models.

So as far as technology is gone in terms of machine learning and now with AI. We have created our own models. We have created our own models trained on our own data sets because they work best. for our kind of transaction pattern. But they may not work best for the kind of solutions that other companies may need. In fact, we have explored solutions of fraud and machine learning from other companies, but they have a very poor performance because they are trained on some industry -level data, which is not the same patterns that we get. So, at our end, we are a tech company, we can adapt. But I cannot say the same for all the entities, at least on the financial domain.

And that’s a big problem because until there is a national… And I think the regulator, especially RBI, is very, very concerned about this. I think we have heard now, and in my recent conclave where I went to, that they are saying that enough of making transactions easy. Now we need to go in the reverse direction. Now we need to make transactions a little difficult so that there is some friction because otherwise, you know, people are losing money.

Anshuman Kar

It’s great points. And as you said, I think the silos of data that we have, and in fact, you talked about training the models. In fact, you know, again, when we went to Indosat as well, you had to go and train them, even including language nuances as well. So these things become critical in terms of adapting. You mentioned about like RBI initiative and all, and I direct this to you, Ruthen. Who ultimately owns the responsibility of protecting citizens at national scale? Because can banks do it alone? Can RBI do it alone? Or are you dependent on upstream and downstream signals, upstream signals like that you get from telcos because a lot of these things originate from the channels?

How should responsibility be structured? to protect people and to help.

Ratan Kumar Kesh

I think Vipin spoke about that point that look, for a fraud to happen, we know that somebody would have gone to an e -commerce platform to make a payment and that payment is coming through a payment channel and the account is held in Bandhan Bank and the payment is made to let’s say for a product or through some platform the payment is made to an access bank credit card. Now and then the fraudster is actually sitting somewhere out there who is pretty much somebody amongst us. Now I will give you a very funny story of I lived in Mumbai for 20 plus years the local trains are full of used to be those days pickpockets. I came from I was living out of India and came back and I was going to meet a friend of mine that was my first trip in the local train and my purse got stolen and he said don’t worry how much money I said that’s okay but I had credit cards and all of that so somehow managed to block those cards and he says let’s go to the police station I said but I had my identity cards there he says don’t worry it’s fairly ethical pickpockets in Mumbai you go and tell the police you have to tell which train which local train from what to what and what exactly it must have opened probably so I said from Borivali to Andhri it must have happened in between it was a whatever 950 local train so after two days the police called me and handed over me the identity cards so so there was nothing else there but then I got back the identity card so I think that’s like the police has an ability to really find out who the people are and I find it quite baffling to figure out that this fraudster is somewhere there the telephone numbers are issued multiple of them by a telecom authority the other pan cards were issued which are used to get this sort of account opening and video QIC and the telephone numbers yet we are not able to find out who these people are and technology has to protect all of that which as all of us are saying that we are trying our best to protect and the bad part and the sad part is that whenever there is a fraud happens and a customer goes to the regulator like the ombudsman and it says 20 lakh fraud it’s okay which bank is went from it says it went from HDFC bank where did it fall first it fell in Bandhan bank okay two of you together 10 10 lakhs pay it and be done with this that’s easy isn’t it I mean we of course are regulated entity regulator has no choice but to sort of do whatever best they can do and which we do and that’s absolutely fine we must have had some lacunae in our process but how do the whole chain has to work together I mean including the citizens instead of becoming gullible in terms of having the awareness about banking product the banks and the payment companies and the country has to create more awareness the police, cyber police and the local police they have to work together minister of home affairs is working very hard in terms of really making it happen so it’s an ecosystem problem and I think if all of us come together, all of us create more awareness and make it really ruthless probably that’s the only way to happen otherwise it’s no easy

Anshuman Kar

Thank you, thanks a lot I am told time is up I wanted to solicit two questions as well from the audience but in the interest of time I will just summarize this session in fact we have all talked about going kind of end to end it’s not just identification using the AI models but the prevention, the elimination and ultimately holding the scampers accountable with law enforcement and that comes a big part there is law and the enforcement of the law they can sometimes mean two different things so ultimately one of the things I hope just one more point in the name of hyper personalization the amount of data that you collect and the amount of data that you collect and the amount of data that you collect and the amount of data that you collect and we have an ability to go back to Neha and say, you know what, you’ve been searching for a home.

I can tell you which is the right home for you. That’s great. Neha feels delighted because she can actually choose the right house. But the same data is getting misused to do other things. And as much as I can collect data as a bank or a real estate company, Proster can also collect the same data. So as you said, the AI is on both sides and they are like more offensive. And so it’s a question of who stays a few steps ahead of the other, right? What is striking from this discussion, hopefully, and I’m summarizing is like, as you can hear, everyone is doing something, right? Oh, please. Please go ahead. We can take one or two questions quickly.

Sure, please. Can you please help the gentleman with the mic? This is not the best way.

Audience

So there should be some integrated approach. So I think the government of India is already working on that and they have a digital payment intelligence platform. So it’s not for that purpose or you are referring to some other issue.

Anshuman Kar

Sorry, is that a question? I think the question is there are some government initiatives.

Audience

Initiative is already there. Is it not enough? To have an integrated model for this fraud protection and other thing. Because you said. financial institutions are having their own trend model and accordingly they are protecting their customer this thing but if we have that integrated model and government of India through their one company there they have initiated with the collaboration because that RBI is working on mule hunter through RBI mule hunter is the initiative yeah from RBI in his hub so that is already all the banks are doing and they are doing with their own three month five month data individually not a complete umbrella like that so for that everything is coming in the this digital payment intelligence platform so with this initiative Anna that it is so which you have referred will not be addressed to

Bipin Preet Singh

yes yes absolutely I think you know at least I feel that there is going to be there’s a strong potential because for the first time data across the financial economy is being used for the first time and it is being used for the first time ecosystem will come together in one place and I think that is a big deal and then once that data comes together then hopefully the best people will work on it and understand patterns at a national scale because the problem in digital is everything is connected so you have to study it in an integrated manner and I am very

Anshuman Kar

Thank you for that question and we are obviously hoping that all of these show results but at the same time we are talking about national but scammers are not limited to even national geographies they are international as well so the scope and the breadth the surface area of the threats are only expanding so we have to really look beyond and if I may from my personal experience one of the things is in the world of AI data is actually the differentiator not so much the models because all public and the willingness to share the data itself is a potential barrier real time data and this is where it is not about just banks and financial institutions it’s also potentially telecom, right, because they see a lot of initial signals in terms of messages being sent and communication and so forth, as Vikram just talked about as well.

So let me summarize again this session.

Audience

Can I ask one question?

Anshuman Kar

May I request you to take it offline, please, because of the time constraint, if you don’t mind. Thank you. Thank you for cooperating, sir. It’s great to hear. I’m sure there’s a lot of interest in this topic and it shows the resonance of what we are discussing. So, again, I think to summarize, as you said, the attack surface is all interconnected, but our defense is right now fragmented. And therein lies the opportunity. And while the next frontier cannot be just more smarter individual AI model, it has to be really coordinated intelligence. And that obviously has to happen real time. It has to happen across the ecosystem. And it has to happen within the guidelines of a national level trust architecture.

So with that I will want to thank you to all the panelists and also for all of you to participate in this discussion and really contributing to this to shaping what the future looks like because this is really not just trust, this is the foundation of creating the digital economy and the growth that underpins it. So thank you so much. Thank you very much all the panelists and Anshuman may I all request you to stay on the stage for a quick photograph. Wow. From using technology for transaction monitoring and layering it with AI to solving for behavioral intent and offensive AI along with solving for customer friction on customer experience by layering it on technology and captive models along with regulatory tenets.

It was a very insightful and a very meaningful panel. Thank you very much each one of you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Our last session for the evening is one of the most interesting ones. It’s what we do on our third element or the third pillar of DNA, that is impact. It’s the impact spotlight, wisely .ai, our client’s perspective. Very few leaders in India telecom landscape carry the depth of experience and the institutional weight that our next speaker brings to the stage. As chairman and managing director of BSNL, he has orchestrated one of the sector’s most compelling turnarounds, driving the rollout of India’s first indigenous 4G network and restoring the organization to a clear path of growth, profitability and purpose.

With over three decades of service spanning tri the government of Tamil Nadu and an advisory role to the government of Uganda, recognized with the Visist Sanchar. For distinguished service and a gold medalist in electronics and communication engineering, he remains one of the most. consequential voices in India’s telecom and digital governance story. Ladies and gentlemen, please put your hands together for A. Robert J. Ravi, Chairman and Managing Director, BSNL.

A. Robert J. Ravi

important step we are thinking about, that’s what I was talking about. On the network side, can I bring AI? In bringing AI in the network side, I can even get patents, customer patents, calling patents, network initiatives. We were able to even see exactly how and where most of the complaints were happening in the network. And this also helped me for tweaking my entire setup today, where I’m very sure at the end of probably the study and whatever research we are right now going on the AI, as a user, if you are a BSL customer, you can intelligently speak to your RAN. When I speak, when I tell intelligently speaking, it could be various things. So you can have…

you can request a specific dedicated data specific dedicated voice traffic that means today I am in this place I need to video stream I need a 1G so not a 1G at least under 10 throughput available all the time it will be made possible so that type of a user enabled platform which we are building that will control not only from the customer angle from the network angle if this becomes successful today no customer in future when this reality actually comes in no customer could be so easily fished or smacked that’s the way which we are trying to go ahead of course going into what exactly happened was the last one we can see that the user impact we were able to authoritatively say that that so many number of connections to close to 280 million spans we have installed today.

This is on one side. Now we are integrating this particular aspect on a customer experience platform also. How do we benefit out of it? In my customer experience today also, we have brought in we have something called the AI Vani system. It’s just a Vani which comes and says, whatever you want, you can speak to the particular agent. And then we brought in something called a BSNR recharge expert system. It’s a complete AI driven. So when user suppose even as a user because spam suppose when he stopped the spam for the SMS side, next thing we have to concentrate is on the data side, which again I’m talking of course we are speaking to you all how do we do for the data side.

Data is not only on whatsapps or social media how can we expand the horizon of this particular area that’s where what we thought can I build in intelligence into the system itself so when you wanted to do probably recharge or even it works as a worm in the network to easily identify the sites which needs to be blocked which should not be available for my customers such a sort of independent intelligent network needs to be built in which we are targeting up that’s the second pillar the last pillar before I wind up is going to be the rural thing the rural side with Bharatanat coming in at a very big space rolling out of Bharatanat’s network we are trying to see when you’re closing closely going close to the customer at the end we are seeing lot of fractured coming in can I put edge data centers using this edge data centers can I really run what we call even SLMs.

Today we talk about different LLM models. These LLM models require a lot of information. The data is a key engine for it. And we are all travesty to see why should I share my data. So this is the next concept of what we bring in what we call a federated learning. So your data resides with you. I just learn your data and I federate over it. So all this is possible when I go to the rural end of the edges. So there I will be able to protect the customer to a next level. I am very sure I think we can keep talking on this very interesting topic but since the time is short I thank the organizers for giving me opportunity.

But my request to you all still there is lot of work to be done. Unless we have built a system where we confidently say our citizens that you are 100 % safe in my network. our job is not done. And this is possible only when we bring in technology and play it across in a platform which really intelligently builds this network. Thank you.

Wish Gurmukh Dev

Thank you, it’s been a wonderful evening absolutely thrilling to have two CEOs exchange and share with the audience the real life problems and how they converted into an opportunity that is going to shape the future of telecom in one part of the world followed by a panel thank you Anshuman and thank you once again to all the panelists who made the effort to come in and share their own perspectives on what could be changed structurally from a regulatory perspective, from an ecosystem collaboration perspective to customer experience without friction and lastly dear CMD Mr. Robert Ravi for sharing the deep collaboration that BSNL and Tandla platforms have come into and are trying to set a lighthouse to what could really be a customer experience driving safe and secure customer transaction thank you very much it’s been a true honor and a privilege to host everyone here thank you I am very thankful on behalf of Tanla platforms, our group companies Carex and Value First for hosting you all here Thank you very much Thank you Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (33)
Factual NotesClaims verified against the Diplo knowledge base (4)
Confirmedhigh

“Wish Gurmukh Dev thanked the audience and welcomed them on behalf of Tanla Platforms and its group companies Carex and Value First.”

The knowledge base lists Wish Gurmukh Dev as the host/MC representing Tanla Platforms and its group companies Carex and Value First, confirming the report’s statement. [S1]

Additional Contextmedium

“Sanjay noted that more than five billion people are online and that digitisation is occurring at unprecedented speed.”

External data shows internet users have grown to about 5.4 billion globally, illustrating the scale mentioned and providing supporting context. [S110]

!
Correctionhigh

“He highlighted the addition of nearly two billion new internet users in South and Southeast Asia.”

The knowledge base reports only about 40 million new digital users in Southeast Asia in 2020, far less than the “nearly two billion” figure cited, indicating the claim is likely overstated. [S111]

Additional Contextmedium

“The scale of digital activity brings rising cyber‑crime, digital fraud and organised scam operations that cost billions each year.”

Estimates of global cyber-damage range from $2.3 trillion to $10.5 trillion by 2025, underscoring the magnitude of financial losses referenced. [S117]

External Sources (117)
S1
Secure Talk Using AI to Protect Global Communications &amp; Privacy — -Wish Gurmukh Dev- Host/MC representing Tanla Platforms and group companies (Carex and Value First)
S2
29, filed Jan. 22, 2010, at 9-10. — (last visited March 3, 2010) (‘Net Literacy’s programs are independently beginning to be developed by students from New …
S3
WSIS+20 Open Consultation session with Co-Facilitators — – **Jennifer Chung** – (Role/affiliation not clearly specified)
S4
Secure Talk Using AI to Protect Global Communications &amp; Privacy — -A. Robert J. Ravi- Chairman and Managing Director of BSNL, telecom leader with over three decades of service, gold meda…
S5
Secure Talk Using AI to Protect Global Communications &amp; Privacy — The main fireside chat featured Vikram Sinha, CEO of Indosat Ooredoo Hutchison, who shared how his company transformed f…
S6
Secure Talk Using AI to Protect Global Communications &amp; Privacy — -Ratan Kumar Kesh- Executive Director and Chief Operating Officer of Bandhan Bank, leading technology, operations, custo…
S7
Secure Talk Using AI to Protect Global Communications &amp; Privacy — – Sanjay Kapoor- Vikram Sinha- Ratan Kumar Kesh- Anshuman Kar – Vikram Sinha- Anshuman Kar
S8
Opening Remarks (50th IFDT) — – Moderator: No specific role or title mentioned
S9
Secure Talk Using AI to Protect Global Communications &amp; Privacy — – Bipin Preet Singh- Ratan Kumar Kesh- Neha Gutma Mahatme – Ratan Kumar Kesh- Neha Gutma Mahatme
S10
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S11
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S12
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S13
Secure Talk Using AI to Protect Global Communications &amp; Privacy — -Sanjay Kapoor- Host for fireside chat, distinguished global telecom leader, former CEO of Bharti Airtel, board member a…
S14
https://dig.watch/event/india-ai-impact-summit-2026/secure-talk-using-ai-to-protect-global-communications-privacy — First of all, again, it’s an absolute honor and privilege, especially having it with Sanjay. You know, I have a long lea…
S15
Secure Talk Using AI to Protect Global Communications &amp; Privacy — – Neha Gutma Mahatme- Bipin Preet Singh – Ratan Kumar Kesh- Bipin Preet Singh
S16
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — All kinds of fantastic applications already that we’re seeing right across the economy. We’re using increasingly agentic…
S17
Google and Microsoft launch separate Artificial Intelligence (AI) platforms for cybersecurity — Google Cloud has launched Security AI Workbench, an AI-driven security platform that combines several of the company’s e…
S18
Omnipresent Smart Wireless: Deploying Future Networks at Scale — The collection of large amounts of data for citizen services raised questions about how this information, particularly p…
S19
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — ### Cross-Industry Collaboration The discussion highlighted successful cross-industry collaboration examples, including…
S20
WS #148 Making the Internet greener and more sustainable — Nathalia emphasizes the importance of collaboration between different stakeholders to achieve a greener Internet. She su…
S21
Transforming Health Systems with AI From Lab to Last Mile — Data privacy, security and ethical safeguards Federated learning allows models to be trained on locally stored patient …
S22
AI for Good Technology That Empowers People — “So to make it even faster and achieve the sub 10 milliseconds, you actually have to bring in inference and training to …
S23
Published by DiploFoundation (2011) — Keywords: data protection regulation; call centres; adequate country; first job; e-commerce; Paraguay This framework wi…
S24
Enhancing Digital Resilience: Cybersecurity, Data Protection, and Online Safety — 3. Developing strategies to effectively reach and educate rural populations Audience: Thank you very much. I think my q…
S25
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Absolutely, Ankit, just trying to, this is something which I know two years back when we said that I’m putting 8000 GPUs…
S26
WS #208 Democratising Access to AI with Open Source LLMs — Daniele Turra: Yeah, I’ll try to be very brief. So one key difference that we can see in open LLMs when it comes to t…
S27
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — ask I don’t think there is any country in the world whose government has given its citizens… In India’s context. Yes, …
S28
The AI gold rush where the miners are broke — The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economi…
S29
Panel Discussion AI in Healthcare India AI Impact Summit — Thank you for the question and for the invitation. So, you know, as you said, Switzerland and India, when you look at th…
S30
Living in an Unruly World: The Challenges We Face — Every year, large numbers of young Africans press into the labour market. If they can be provided with jobs Africa’s GDP…
S31
Seismic Shift — 1. International Monetary Fund, ‘India’s Economy to Rebound as Pandemic Prompts Reforms’, November 11, 2021, https://www…
S32
Secure Finance Risk-Based AI Policy for the Banking Sector — And these systems are fueled by vast data sets drawn from public and proprietary sources. On this foundation operate lar…
S33
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S34
AI Meets Cybersecurity Trust Governance &amp; Global Security — And that’s alarming because what’s going to happen in that context is it will focus on enterprises first. It will focus …
S35
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Anastasiya Kozakova:Thank you very much. It’s a pleasure to be here. I represent the civil society organization. I work …
S36
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — And now the next step is working with the hyperscaler is how do we commercialize these outside Saudi Aramco to the marke…
S37
From KW to GW Scaling the Infrastructure of the Global AI Economy — These investments must be monetised quickly to achieve acceptable returns, driving the need for compressed deployment ti…
S38
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — First, trust. It’s trust. Trustability. Trustability because we need to trace the systems, the models, the data that we …
S39
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Multi-stakeholder Collaboration and Data Sharing**: Panelists emphasized that effective fraud prevention requires un…
S40
The State of Digital Fragmentation (Digital Policy Alert) — In terms of data governance, the analysis emphasises the need for dialogue and finding common ground for global data gov…
S41
Overview of AI policy in 10 jurisdictions — Summary: Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspire…
S42
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Real-world implementations are already emerging. ByteDance has introduced an AI-first smartphone in China that eliminate…
S43
Digital Ecosystems and Competition Law: Ecological Approach (HSE University) — Another important aspect emphasized in the provided information is the need for collaboration between different authorit…
S44
Digital democracy and future realities | IGF 2023 WS #476 — Finally, the analysis advises policymakers to be mindful of the diversity of the internet ecosystem. It suggests that po…
S45
Building inclusive global digital governance (CIGI) — The absence of concrete and positive implementation of data governance frameworks also hinders effective regulation and …
S46
Secure Talk Using AI to Protect Global Communications &amp; Privacy — A recurring theme throughout the event was balancing security measures with customer experience. Traditional approaches …
S47
Consumer protection — One notable area where AI excels isfraud detection. By relying on advanced algorithms, AI can swiftly analyse patterns, …
S48
Employing AI for consumer grievance redressal mechanisms in e-commerce (CUTS) — Artificial Intelligence (AI) has emerged as a powerful tool with the potential to revolutionise consumer governance. It …
S49
Embracing the future of e-commerce and AI now (WEF) — In conclusion, this analysis highlights the significant role that AI can play in enhancing logistics, e-commerce, and re…
S50
The State of Digital Fragmentation (Digital Policy Alert) — In terms of data governance, the analysis emphasises the need for dialogue and finding common ground for global data gov…
S51
Operationalizing data free flow with trust | IGF 2023 WS #197 — In conclusion, the analysis sheds light on various aspects related to the movement of data, privacy regulations, regulat…
S52
Comprehensive Report: World Economic Forum Panel Discussion on Cybersecurity Resilience — Current identity solutions vary widely – India has done well, Estonia has good systems, but the US still relies on local…
S53
Main Topic 3 –  Identification of AI generated content — Dr Laurens Naudts, from the AI Media and Democracy Lab at the University of Amsterdam, provided a legal perspective, dis…
S54
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Noushin Shabab:Okay, thanks, Jenny. I’m not sure if the slides, okay, great. So as my colleague perfectly stated and mos…
S55
UNSC meeting: Artificial intelligence, peace and security — Albania:Thank you, Madam President, for convening this important meeting and for bringing this issue to the Security Cou…
S56
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Julian Gorman from GSMA emphasized that combating scams requires cross-sector collaboration, noting that scammers operat…
S57
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — Johannes argues that effective fraud prevention requires assembling a ‘powerhouse of fraud fighters’ who approach the pr…
S58
Inside Visa’s war room: How AI battles $15 trillion in threats — In Virginia’s Data Centre Alley, Visaoperates a high-security fraud command centreto protect $15 trillion in annual tran…
S59
Google’s fight against AI scammers — Google initiatedlegal action againsttwo distinct groups of scammers exploiting the company’s platforms and users. The fi…
S60
FBI warns of AI-driven fraud — The FBI hasraisedalarms about the growing use of artificial intelligence in scams, particularly through deepfake technol…
S61
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — The conclusion drawn from the discussion is that there is an urgent need for greater attention and inclusivity in the de…
S62
Day 0 Event #248 No One Left Behind Digital Inclusion As a Human Right in the Global Digital Age — Brynteson’s research identifies digital exclusion as multidimensional and context-specific, often affecting overlapping …
S63
Digital Transformation for all: An Information Society that respects and protects human rights — – **Women**: Janina specifically mentioned women among vulnerable categories needing special focus The discussion repea…
S64
Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477 — The analysis also emphasizes the significance of including vulnerable populations in policy considerations. Often, vulne…
S65
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — – **Cross-Industry Collaboration and Stakeholder Engagement**: The conversation extensively covered the importance of br…
S66
Workshop 1: AI &amp; non-discrimination in digital spaces: from prevention to redress — Establish cross-sector collaboration between different types of regulators rather than siloed approaches
S67
WS #479 Gender Mainstreaming in Digital Connectivity Strategies — Different sector regulators often operate in silos with limited coordination with ministries responsible for gender, edu…
S68
AI That Empowers Safety Growth and Social Inclusion in Action — Collaborative approach between governments, industry, academia and civil society rather than siloed regulatory or self-r…
S69
Secure Talk Using AI to Protect Global Communications &amp; Privacy — “Both markets are facing rising cybercrime, digital fraud, and organized scam operations, causing billions of dollars wo…
S70
Deepfake and AI fraud surges despite stable identity-fraud rates — According to the 2025 Identity Fraud Report by verification firm Sumsub, the global rate of identity fraud has declined …
S71
AI reshapes eCommerce tasks and security — AI is set to redefineretailin 2025, offering highly personalised shopping experiences.AI assistantsare expected to manag…
S72
AI takes over eCommerce tasks as Visa and Mastercard adapt — Visa and Mastercard haveannounced major AI initiativesthat could reshape the future of e-commerce, marking a significant…
S73
Lakera secures $20M for AI protection, Gandalf helps track threats — Leaders of Fortune 500 companiesdevelopingAI applications face a potential nightmare: hackers tricking AI into revealing…
S74
https://dig.watch/event/india-ai-impact-summit-2026/secure-talk-using-ai-to-protect-global-communications-privacy — Thank you everyone. Thank you very much. Thank you. Once again, ladies and gentlemen, a very good evening and welcome to…
S75
From KW to GW Scaling the Infrastructure of the Global AI Economy — Prefabricated systems and reference designs are essential for scaling at speed while addressing skill development challe…
S76
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — And now the next step is working with the hyperscaler is how do we commercialize these outside Saudi Aramco to the marke…
S77
Leveraging AI4All_ Pathways to Inclusion — Despite significant progress, several challenges remain unresolved. The fundamental scaling problem persists across sect…
S78
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — And it’s very useful. It’s used to benchmark applications and performance on quantum computers and using AI techniques a…
S79
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Multi-stakeholder Collaboration and Data Sharing**: Panelists emphasized that effective fraud prevention requires un…
S80
Comprehensive Report: World Economic Forum Panel Discussion on Cybersecurity Resilience — Miebach argues that improving identity verification systems globally is a critical investment that both private and publ…
S81
Building Inclusive Societies with AI — Strengthen National Rural Livelihood Mission for better worker aggregation and quality improvement
S82
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Real-world implementations are already emerging. ByteDance has introduced an AI-first smartphone in China that eliminate…
S83
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued)/ part 2 — Mozambique: Thank you, Mr. Chair, for giving us the floor. At the outset, allow me to express our profound appreciation …
S84
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S85
Opening remarks — At the outset of the event, the speaker extends a warm welcome to attendees, expressing delight at seeing both veteran p…
S86
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S87
[Opening] IGF Parliamentary Track: Welcome and Introduction — The tone is consistently formal, welcoming, and optimistic throughout. It maintains a diplomatic and collaborative atmos…
S88
Launch / Award Event #52 Intelligent Society Development &amp; Governance Research — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm abo…
S89
WS #70 Combating Sexual Deepfakes Safeguarding Teens Globally — The discussion maintained a serious, urgent, and collaborative tone throughout. Speakers demonstrated deep concern about…
S90
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S91
WS #6 Bridging Digital Gaps in Agriculture &amp; trade Transformation — The tone was largely optimistic and solution-oriented. Speakers were enthusiastic about the potential of the Internet Ba…
S92
AI, Data Governance, and Innovation for Development — The overall tone was optimistic and solution-oriented, with speakers focusing on practical ways to overcome obstacles th…
S93
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S94
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S95
Global Perspectives on Openness and Trust in AI — The discussion maintained a thoughtful, critical, and collaborative tone throughout. While panelists raised serious conc…
S96
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S97
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S98
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — The tone is consistently optimistic, visionary, and inspirational throughout. The speaker maintains an enthusiastic and …
S99
Keynote-N Chandrasekaran — The tone is consistently optimistic, ambitious, and forward-looking throughout. The speaker maintains an enthusiastic an…
S100
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S101
AI in education: Leveraging technology for human potential — The tone is consistently optimistic and inspirational throughout, with Mills maintaining an enthusiastic and visionary a…
S102
Closing Ceremony and Orientation for WAIGF 2025 — Audience: Good evening everyone. I am Abdul Idris, a Nigerian. I’m a program analyst from National Assembly Service. Tha…
S103
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Garg also referenced observations about the contrast between current AI systems requiring gigawatts of power and hum…
S104
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Nobuhisa Nishigata:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ….
S105
Radio and TV broadcasting: Diplomacy going live — Franklin D. Roosevelt introduced the so-called’fireside chats’, i.e. radio talks addressing the problems and successes o…
S106
Comprehensive Report: President Trump’s Address to the World Economic Forum in Davos — The session began with opening remarks by Laurence D. Fink, who provided framing around making capitalism more inclusive…
S107
Invest India Fireside Chat — And I made this statement for India. India, AI is pivotal to drive economic productivity, military power, and informatio…
S108
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — -Aman Khanna: Vice President of the Asia Group (mentioned as moderator for upcoming fireside chat session) -Nitin Bajaj…
S109
MALAYSIA DIGITAL ECONOMY BLUEPRINT — The immense speed and reach of digitalisation in recent years are unprecedented. The size of the digital economy in 2017…
S110
Leaders TalkX: Securing the Digital Realm: Collaborative Strategies for Trust and Resilience — Preetam Maloor from the ITU presented a sobering comparison between the digital landscape in 2005 and 2024. He pointed o…
S111
40 million new digital users in Southeast Asia in 2020 — A recent report published by Google, Temasek Holdings and Bain & Company found that an estimated 40 million people from …
S112
Digital inclusivity – Connecting the next billion — Dr. Bhanu Neupane from UNESCO discussed the organisation’s initiatives to preserve linguistic and cultural diversity onl…
S113
Open Forum #13 Bridging the Digital Divide Focus on the Global South — Dr. Chern Choong Thum from Malaysia’s Ministry of Communications provided a public health perspective, stating that “dig…
S114
The Government’s AI dilemma: how to maximize rewards while minimizing risks? — This system has notably reduced the digital divide and provided benefits to economically weaker sections, including rura…
S115
#205 L&amp;A Launch of the Global CyberPeace index — Suresh Yadav: Thank you, Vinit. I hope you can hear me, Vinit, if you can. Loud and clear, we can hear you. Thank you ve…
S116
India’s digital economy expected to contribute over 20 percent to GDP — EXCERPT :During the two-day G20-Digital Innovation Alliance summit in Bengaluru, Union Minister of State for Electronics…
S117
Pathways to De-escalation — Damage estimates of $2.3 trillion increasing to $10.5 trillion by 2025, representing 8-9% of global GDP John Defterios …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
W
Wish Gurmukh Dev
2 arguments82 words per minute1069 words773 seconds
Argument 1
Wisely.ai platform delivering real‑time protection in multiple markets
EXPLANATION
Wish introduced Wisely.ai as Tanla’s agentic AI platform that is already live and protecting users in real time across several operators and banks. The platform aims to identify, prevent and eliminate spam and scam at scale.
EVIDENCE
Wish stated that Wisely.ai is live and delivering real impact at Indosat in Indonesia, at BSNL in India, and with leading banks in India, safeguarding millions of users in real time every single day [9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk notes that Wisely.ai is live and delivering real-time protection for Indosat, BSNL and banks [S1].
MAJOR DISCUSSION POINT
AI‑driven anti‑fraud platform deployment
AGREED WITH
Vikram Sinha, Ratan Kumar Kesh, Anshuman Kar, Neha Gutma Mahatme
Argument 2
Cross‑industry collaboration emphasized as essential
EXPLANATION
Wish highlighted that Tanla’s success rests on close partnership with customers, regulators, telco partners and the broader ecosystem, stressing that collaboration across sectors is vital to combat fraud. He later reiterated the need for coordinated effort among industry players.
EVIDENCE
Wish said the core principles of Tanla include collaboration, noting work with customers, regulatory ecosystem, telco partners and broader ecosystem to stay ahead of the curve [6]. He also later called for cross-industry collaboration as essential during the transition to the panel session [153].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
WEF Business Engagement Session highlights cross-industry collaboration as key, and WS #148 stresses multi-stakeholder cooperation [S19][S20].
MAJOR DISCUSSION POINT
Ecosystem partnership importance
AGREED WITH
Anshuman Kar, Ratan Kumar Kesh, Bipin Preet Singh, Audience
A
A. Robert J. Ravi
3 arguments129 words per minute757 words349 seconds
Argument 1
AI‑driven network services (AI Vani, recharge expert) improve security
EXPLANATION
Ravi described AI‑powered services such as the AI Vani voice assistant and a recharge expert system that enhance customer experience while providing security features. These tools enable intelligent routing and protection against spam and scams within the network.
EVIDENCE
Ravi explained that the AI Vani system allows users to speak to a specific agent and that a BSNR recharge expert system is a complete AI-driven solution, both contributing to security and user experience [382-386].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk records Ravi describing the AI Vani voice assistant and a recharge-expert system that enhance security [S1].
MAJOR DISCUSSION POINT
AI integration into network services
Argument 2
Federated learning keeps user data local while training models
EXPLANATION
Ravi introduced federated learning as a technique where user data remains on the device while the model learns from aggregated insights, preserving privacy while improving AI capabilities. This approach is especially relevant for rural deployments.
EVIDENCE
Ravi described federated learning as a method where data resides with the user, the system learns from it, and the model is federated over the data without moving it centrally [392-395].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Transforming Health Systems with AI discusses federated learning for privacy-preserving model training [S21], and AI for Good notes it as an enabler for edge AI [S22].
MAJOR DISCUSSION POINT
Privacy‑preserving AI training
Argument 3
Edge data centres and LLMs for rural protection
EXPLANATION
Ravi highlighted the use of edge data centres combined with large language models (LLMs) to deliver AI services in rural areas, enabling low‑latency protection and personalized experiences for underserved users. This strategy aims to bridge the digital divide while enhancing security.
EVIDENCE
Ravi mentioned deploying edge data centres and leveraging LLMs to protect customers in rural regions, noting the need for local processing and AI capabilities at the edge [387-391].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI for Good mentions bringing inference to the edge for sub-10 ms latency, supporting edge data-centre use [S22], and Enhancing Digital Resilience outlines strategies for rural outreach [S24].
MAJOR DISCUSSION POINT
AI infrastructure for rural inclusion
V
Vikram Sinha
8 arguments145 words per minute1305 words537 seconds
Argument 1
$5 bn loss & 65 % weekly spam exposure in Indonesia
EXPLANATION
Vikram shared alarming statistics from a 2024 Global Anti‑Scam Association report, indicating that Indonesians lost $5 billion to scams and that 65 % of the population faces spam or scam weekly. These figures motivated Indosat to prioritize anti‑fraud measures.
EVIDENCE
He cited a report showing $5 billion lost by Indonesians in 2024 [47] and that 65 % of Indonesians experience spam or scam on a weekly basis [50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk cites the $5 bn scam loss and 65 % weekly spam exposure in Indonesia [S1].
MAJOR DISCUSSION POINT
Scale of fraud in Indonesia
Argument 2
Indosat‑Tanla AI model reduces churn and lifts ARPU
EXPLANATION
Vikram reported that after deploying the AI model with Tanla, Indosat saw its average revenue per user (ARPU) grow 9 % versus a 3 % industry average, and churn fell dramatically, demonstrating the commercial benefit of AI‑driven fraud protection.
EVIDENCE
He highlighted that ARPU grew 9 % while the industry grew 3 % and churn dropped from 3.6-3.7 % to 1.6 % after the AI rollout [87-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk reports ARPU growth of 9 % versus 3 % industry and churn dropping to 1.6 % after AI rollout [S1].
MAJOR DISCUSSION POINT
Business impact of AI anti‑fraud
AGREED WITH
Sanjay Kapoor, Anshuman Kar
Argument 3
Full‑stack AI factory with GPU clusters for model training
EXPLANATION
Vikram explained that Indosat has built a complete AI infrastructure, including its own GPU cluster (featuring H100 GPUs) to train large models, enabling rapid development and deployment of anti‑spam/ scam solutions.
EVIDENCE
He described a full-stack AI factory, a GPU cluster with GB200 H100 GPUs, and the importance of compute power for training data [122-128].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes a full-stack AI factory and GPU cluster in the Indosat-Tanla partnership [S14], and Sovereign AI for India describes large GPU deployments supporting such infrastructure [S25].
MAJOR DISCUSSION POINT
Technical foundation for AI
Argument 4
ARPU grew 9 % vs industry 3 % after AI rollout
EXPLANATION
Vikram reiterated the revenue uplift, noting that the AI‑enabled service helped Indosat outperform the broader market, reinforcing the ROI narrative.
EVIDENCE
Quarter-four ARPU grew 9 % compared with a 3 % industry increase, as shown in his investor deck [87-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk reports ARPU growth of 9 % versus 3 % industry after the AI rollout [S1].
MAJOR DISCUSSION POINT
Revenue uplift from AI
Argument 5
Churn fell from 3.6 % to 1.6 %
EXPLANATION
He pointed out that churn among customers with more than 90 days of tenure dropped from roughly 3.6‑3.7 % to 1.6 % after AI implementation, indicating higher customer satisfaction and retention.
EVIDENCE
Vikram noted churn reduction from 3.6-3.7 % to 1.6 % following the AI model deployment [91-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk records churn dropping from 3.6-3.7 % to 1.6 % following the AI deployment [S1].
MAJOR DISCUSSION POINT
Retention improvement
Argument 6
ROI visible within 6‑8 months of deployment
EXPLANATION
Vikram stated that measurable financial benefits, such as ARPU growth and churn reduction, became evident within six to eight months of launching the AI solution, confirming a rapid return on investment.
EVIDENCE
He said that within six to eight months they observed impact on ARPU and churn, confirming ROI [98-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk states that measurable financial benefits, including ARPU uplift and churn reduction, were evident within six to eight months of launch [S1].
MAJOR DISCUSSION POINT
Speed of ROI realization
AGREED WITH
Sanjay Kapoor, Anshuman Kar
Argument 7
Strategic partnership with Tanla, not just a vendor
EXPLANATION
Vikram emphasized that Indosat sought a strategic partnership with Tanla, focusing on joint problem‑solving rather than a simple vendor relationship, to co‑create AI solutions for fraud mitigation.
EVIDENCE
He explained that Indosat wanted a partner, not a vendor, and identified Tanla as a strategic partner after meeting Uday, aligning commitments for a global case study [80-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk emphasizes that Indosat sought a strategic partnership with Tanla rather than a simple vendor relationship [S1].
MAJOR DISCUSSION POINT
Nature of collaboration
Argument 8
Churn reduction indicates improved experience alongside security
EXPLANATION
Vikram linked the drop in churn to both enhanced security and a smoother customer experience, suggesting that AI‑driven protection can simultaneously boost satisfaction and loyalty.
EVIDENCE
He connected churn reduction (to 1.6 %) with delivering better experience alongside security improvements [91-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure Talk links the churn reduction to a better customer experience together with enhanced security [S1].
MAJOR DISCUSSION POINT
Security and CX synergy
R
Ratan Kumar Kesh
5 arguments181 words per minute1453 words480 seconds
Argument 1
Senior citizens and account‑mule fraud across India
EXPLANATION
Ratan described how senior citizens are heavily targeted by scams and how fraudsters exploit bank accounts as mules, moving stolen funds across multiple institutions, creating a systemic problem in India.
EVIDENCE
He highlighted that senior citizens and even professors are defrauded, and that account-mule fraud is facilitated by easy account onboarding via India-Stack, leading to large-scale siphoning of funds [191-199].
MAJOR DISCUSSION POINT
Vulnerable groups and mule fraud
AGREED WITH
Wish Gurmukh Dev, Vikram Sinha, Anshuman Kar, Neha Gutma Mahatme
Argument 2
Bank rule‑engine flags out‑of‑routine transactions using AI
EXPLANATION
Ratan explained that banks employ AI‑enhanced rule engines to detect transactions that deviate from a customer’s normal pattern, such as unusual withdrawal amounts or new payment types, enabling early fraud detection.
EVIDENCE
He detailed how AI-driven rule engines identify non-routine withdrawals or rent payments and can either block or flag them for further due diligence [198-208].
MAJOR DISCUSSION POINT
AI‑based transaction monitoring
Argument 3
Rule‑engine improvements reduce fraud incidents
EXPLANATION
Ratan noted that the evolution of sophisticated rule‑engine algorithms, powered by AI, has markedly improved the ability of banks to prevent fraudulent transactions, thereby lowering incident rates.
EVIDENCE
He mentioned that AI helps build better algorithms for detecting out-of-routine activity, and that these tools are working effectively to reduce fraud [211-213].
MAJOR DISCUSSION POINT
Effectiveness of AI rule‑engine
Argument 4
Law‑enforcement gaps make tracking scammers difficult
EXPLANATION
Ratan recounted personal experiences where police investigations failed to identify fraudsters despite clear evidence, underscoring systemic challenges in law enforcement coordination and accountability.
EVIDENCE
He narrated a story about a stolen purse, police involvement, and the inability to trace the fraudsters, illustrating gaps in enforcement [315-322].
MAJOR DISCUSSION POINT
Enforcement challenges
AGREED WITH
Wish Gurmukh Dev, Anshuman Kar, Bipin Preet Singh, Audience
DISAGREED WITH
Anshuman Kar, Bipin Preet Singh
Argument 5
Banks’ out‑of‑routine alerts risk false positives affecting CX
EXPLANATION
Ratan warned that while AI‑driven out‑of‑routine alerts are valuable, they can generate false positives that inconvenience legitimate customers, highlighting the need to balance security with user experience.
EVIDENCE
He described how out-of-routine transaction detection can sometimes prevent legitimate activity, requiring enhanced due-diligence and potentially causing friction for customers [198-208].
MAJOR DISCUSSION POINT
Potential CX friction from AI alerts
DISAGREED WITH
Vikram Sinha
S
Sanjay Kapoor
1 argument137 words per minute757 words329 seconds
Argument 1
Global $1 trn scam losses and $14 tn digital payments forecast
EXPLANATION
Sanjay highlighted the massive scale of digital payments projected to reach $14 trillion by 2027, while noting that worldwide scam losses already exceed $1 trillion, framing fraud as a systemic economic risk.
EVIDENCE
He cited forecasts of $14 trillion in digital payments by 2027 [26] and global scam losses exceeding $1 trillion [31].
MAJOR DISCUSSION POINT
Macro‑level fraud and payment growth
A
Anshuman Kar
3 arguments152 words per minute1866 words733 seconds
Argument 1
$500 m estimated loss prevented in first six months
EXPLANATION
Anshuman reported that the Wisely.ai solution, within six months of launch, prevented approximately $500 million in potential fraud losses, demonstrating tangible early impact.
EVIDENCE
He stated that within six months of launch, almost $500 million in estimated losses were protected [163-164].
MAJOR DISCUSSION POINT
Early financial impact of AI solution
AGREED WITH
Vikram Sinha, Sanjay Kapoor
Argument 2
Call for coordinated intelligence across telcos, banks, fintech
EXPLANATION
Anshuman urged stakeholders from telecommunications, banking, and fintech sectors to collaborate and share intelligence in real time, arguing that fragmented defenses leave gaps that fraudsters exploit.
EVIDENCE
He emphasized the need for coordinated intelligence across telcos, banks, and fintech to thwart scams, noting the fragmented nature of current defenses [170-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
WEF Business Engagement Session highlights cross-industry collaboration for safety, and WS #148 stresses ecosystem-wide cooperation as essential [S19][S20].
MAJOR DISCUSSION POINT
Ecosystem‑wide collaboration
AGREED WITH
Wish Gurmukh Dev, Ratan Kumar Kesh, Bipin Preet Singh, Audience
DISAGREED WITH
Ratan Kumar Kesh, Bipin Preet Singh
Argument 3
Real‑time coordinated intelligence as next frontier
EXPLANATION
Anshuman concluded that the future of fraud defence lies in real‑time, cross‑industry intelligence sharing rather than isolated AI models, positioning this as the next strategic direction.
EVIDENCE
He summarized that the next frontier is coordinated, real-time intelligence across the ecosystem [349-353].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same cross-industry collaboration themes in WEF Business Engagement Session and WS #148 point to real-time, ecosystem-wide intelligence sharing as the next strategic direction [S19][S20].
MAJOR DISCUSSION POINT
Future direction for anti‑fraud intelligence
AGREED WITH
Audience, Bipin Preet Singh, Ratan Kumar Kesh, Vikram Sinha
N
Neha Gutma Mahatme
4 arguments253 words per minute502 words118 seconds
Argument 1
AI detects anomalies but not malicious intent; need behavioral analysis
EXPLANATION
Neha argued that current AI systems are good at spotting statistical anomalies but cannot infer the underlying malicious intent, emphasizing the need for deeper behavioral analysis to stop scams effectively.
EVIDENCE
She explained that AI detects anomalies but not the mal-intent or behavior, and that solving the behavior aspect is essential for effective fraud prevention [236-244].
MAJOR DISCUSSION POINT
Limitations of anomaly‑based AI
DISAGREED WITH
Vikram Sinha
Argument 2
Offensive AI evolves faster than defensive models
EXPLANATION
Neha highlighted that scammers are increasingly using AI tools (e.g., deepfakes, synthetic identities) which evolve more rapidly than defensive AI models, creating an arms race in fraud detection.
EVIDENCE
She noted that offensive AI works unconstrained while defensive AI faces privacy, regulatory, and experience constraints, making it harder to keep pace [240-242].
MAJOR DISCUSSION POINT
AI arms race
AGREED WITH
Sanjay Kapoor, Vikram Sinha, Ratan Kumar Kesh
Argument 3
Limited external data hampers detection of social‑engineering cues
EXPLANATION
Neha pointed out that while Amazon has rich internal data, it lacks visibility into external social‑engineering patterns that precede transactions, limiting the effectiveness of fraud detection models.
EVIDENCE
She mentioned that Amazon misses data on how social-engineering patterns are created outside the platform, which hampers detection [237-239].
MAJOR DISCUSSION POINT
Data gaps for social engineering
DISAGREED WITH
Bipin Preet Singh, Audience
Argument 4
Privacy, regulatory and customer‑experience constraints restrict defensive AI
EXPLANATION
Neha explained that defensive AI must operate within strict privacy, regulatory, and user‑experience boundaries, which can limit its ability to act as aggressively as offensive AI used by fraudsters.
EVIDENCE
She described constraints on defensive AI, including privacy, regulation, and customer-experience limits, contrasting them with unconstrained offensive AI [242-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Transforming Health Systems with AI notes privacy constraints on model training [S21]; AI for Good discusses regulatory limits on defensive AI [S22]; Enhancing Digital Resilience mentions the need to balance security with user experience [S24].
MAJOR DISCUSSION POINT
Regulatory and UX constraints on AI
A
Audience
2 arguments159 words per minute188 words70 seconds
Argument 1
Recent Supreme Court finding of ₹56 k cr scam losses
EXPLANATION
An audience member referenced a recent Supreme Court judgment that quantified scam‑related losses at roughly ₹56,000 crore, underscoring the massive scale of fraud in India.
EVIDENCE
The audience cited the Supreme Court judgment mentioning 54-56 000 crore lost to scams and described it as a dacoity-like magnitude [183-190].
MAJOR DISCUSSION POINT
Judicial acknowledgment of fraud scale
Argument 2
Digital Payments Intelligence Platform already launched, but integration still needed
EXPLANATION
The audience noted that while India has launched a Digital Payments Intelligence Platform to aggregate fraud data, full integration across banks and other stakeholders remains incomplete.
EVIDENCE
They mentioned the existing Digital Payments Intelligence Platform and questioned whether integration is sufficient, highlighting ongoing gaps [332-340].
MAJOR DISCUSSION POINT
Partial data‑sharing implementation
AGREED WITH
Wish Gurmukh Dev, Anshuman Kar, Ratan Kumar Kesh, Bipin Preet Singh
DISAGREED WITH
Neha Gutma Mahatme, Bipin Preet Singh
B
Bipin Preet Singh
3 arguments158 words per minute949 words358 seconds
Argument 1
RBI’s Digital Payments Intelligence Authority for ecosystem data sharing
EXPLANATION
Bipin referenced the RBI’s initiative to create a Digital Payments Intelligence Authority, which would facilitate data sharing across the payments ecosystem to improve fraud detection at a national level.
EVIDENCE
He mentioned the RBI’s Digital Payments Intelligence Authority as a crucial step for ecosystem-wide data sharing [279-283].
MAJOR DISCUSSION POINT
Regulatory data‑sharing mechanism
AGREED WITH
Anshuman Kar, Audience, Ratan Kumar Kesh, Vikram Sinha
DISAGREED WITH
Anshuman Kar, Ratan Kumar Kesh
Argument 2
Generic AI models underperform; need custom data sets
EXPLANATION
Bipin argued that off‑the‑shelf AI models trained on industry‑wide data often perform poorly for specific fintech use‑cases, necessitating custom models built on proprietary data.
EVIDENCE
He explained that generic AI models have poor performance and that MobiQuik builds its own models trained on its own data sets for better results [295-299].
MAJOR DISCUSSION POINT
Need for domain‑specific AI models
Argument 3
AI must avoid creating friction for legitimate users
MAJOR DISCUSSION POINT
Balancing security with user experience
Agreements
Agreement Points
Wisely.ai and related AI anti‑fraud solutions are already delivering real‑time protection and measurable financial impact across multiple markets.
Speakers: Wish Gurmukh Dev, Vikram Sinha, Anshuman Kar
Wisely.ai platform delivering real‑time protection in multiple markets Indosat‑Tanla AI model reduces churn and lifts ARPU $500 m estimated loss prevented in first six months
Wish highlighted that Wisely.ai is live and protecting users in Indonesia, India and with banks [9]; Vikram reported that the AI model boosted ARPU by 9 % versus 3 % industry and cut churn to 1.6 % [87-92]; Anshuman noted the solution prevented about $500 m of losses within six months of launch [163-164].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry evidence shows AI-driven fraud defenses generate measurable ROI, as demonstrated by large-scale deployments such as Visa’s AI-powered fraud command centre protecting trillions of dollars in transactions [S58] and broader findings that AI excels at rapid pattern analysis for fraud detection [S47].
Cross‑industry and ecosystem collaboration is essential to combat digital fraud and scams.
Speakers: Wish Gurmukh Dev, Anshuman Kar, Ratan Kumar Kesh, Bipin Preet Singh, Audience
Cross‑industry collaboration emphasized as essential Call for coordinated intelligence across telcos, banks, fintech Law‑enforcement gaps make tracking scammers difficult RBI’s Digital Payments Intelligence Authority for ecosystem data sharing Digital Payments Intelligence Platform already launched, but integration still needed
Wish stressed partnership with customers, regulators and telcos as a core principle [6]; Anshuman called for coordinated, real-time intelligence across telcos, banks and fintech [170-176][349-353]; Ratan highlighted ecosystem challenges and the need for cooperation among banks, telcos and law enforcement [191-199][311-315]; Bipin referenced the RBI’s Digital Payments Intelligence Authority to enable ecosystem-wide data sharing [279-283]; the audience pointed out the existing Digital Payments Intelligence Platform but noted integration gaps [332-340].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy discussions stress the need for cross-sector collaboration, highlighting four pillars that include ecosystem data sharing via APIs and joint risk assessment [S56], and broader calls to break down silos across industry, government and civil society [S65][S66][S68].
AI is a key tool against fraud but faces an arms‑race with offensive AI used by scammers.
Speakers: Sanjay Kapoor, Vikram Sinha, Neha Gutma Mahatme, Ratan Kumar Kesh
Global $1 trn scam losses and $14 trn digital payments forecast Scammers are using AI, voice cloning, automated phishing campaigns Offensive AI evolves faster than defensive models Scammers using AI, voice cloning, automated phishing campaigns
Sanjay highlighted the scale of AI-powered scams and the need for leadership [31][60-64]; Vikram noted that scammers are using AI and voice cloning, prompting the AI solution [79-80]; Neha explained that offensive AI evolves faster than defensive models and is unconstrained by privacy or regulatory limits [240-242]; Ratan also mentioned scammers using AI and voice cloning [79-80].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent alerts from law-enforcement and tech firms describe an escalating AI-driven fraud arms race, with the FBI warning of deep-fake scams [S60] and Google taking legal action against AI-based scammers [S59]; this underscores the dual-use nature of AI in security contexts.
Integrated, real‑time data sharing is required to overcome fragmented defenses against fraud.
Speakers: Anshuman Kar, Audience, Bipin Preet Singh, Ratan Kumar Kesh, Vikram Sinha
Real‑time coordinated intelligence as next frontier Integrated approach needed for fraud protection RBI’s Digital Payments Intelligence Authority for ecosystem data sharing Ecosystem cooperation needed across banks, telcos, fintech Strategic partnership rather than vendor relationship
Anshuman identified real-time coordinated intelligence as the next frontier for fraud defence [349-353]; the audience called for an integrated approach and questioned the sufficiency of existing platforms [332-340]; Bipin emphasized the RBI’s Digital Payments Intelligence Authority to enable ecosystem data sharing [279-283]; Ratan stressed the need for ecosystem-wide cooperation to tackle fraud [191-199]; Vikram described the partnership with Tanla as strategic rather than a simple vendor relationship, highlighting joint problem-solving [80-84].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses highlight fragmented data-governance regimes and call for real-time, cross-border data sharing to close gaps in fraud detection, citing the need for coherent data-free-flow frameworks and broader ecosystem integration beyond existing platforms [S45][S50][S51][S56].
Protecting vulnerable groups such as senior citizens, women and low‑income users is a shared priority.
Speakers: Wish Gurmukh Dev, Vikram Sinha, Ratan Kumar Kesh, Anshuman Kar, Neha Gutma Mahatme
Wisely.ai platform delivering real‑time protection in multiple markets Middle‑income, lower‑income women, elderly women affected Senior citizens and account‑mule fraud across India Parents scared of fraud, especially senior citizens AI must address vulnerable populations in the behavioral journey
Wish highlighted that the victims were middle-income, lower-income women and elderly [48-49]; Vikram echoed concern for women and elderly in Indonesia [48-49]; Ratan described senior citizens being heavily targeted and defrauded [191-194]; Anshuman noted that his parents (senior citizens) are scared of fraud and avoid ATM cards [233-235]; Neha stressed that vulnerable populations are especially exposed to scams and need protection [236-240].
POLICY CONTEXT (KNOWLEDGE BASE)
International policy forums emphasize inclusive digital security, urging specific safeguards for seniors, women and low-income populations in cyber-security and consumer protection strategies [S61][S62][S63][S64].
AI‑driven anti‑fraud measures deliver measurable business ROI (ARPU growth, churn reduction, loss prevention).
Speakers: Vikram Sinha, Sanjay Kapoor, Anshuman Kar
Indosat‑Tanla AI model reduces churn and lifts ARPU ROI visible within 6‑8 months of deployment $500 m estimated loss prevented in first six months
Vikram reported ARPU growth of 9 % versus 3 % industry and churn dropping to 1.6 % after AI rollout [87-92]; Sanjay asked about ROI and highlighted the economic impact of scams [31]; Anshuman quantified the financial benefit of the solution as $500 m prevented losses within six months [163-164].
POLICY CONTEXT (KNOWLEDGE BASE)
Empirical studies and industry reports confirm that AI-based fraud mitigation improves key business metrics, with documented ARPU growth and churn reduction in telecom and banking sectors, reinforced by Visa’s reported loss-prevention outcomes [S58] and broader AI fraud detection benefits [S47].
Similar Viewpoints
Both emphasize that AI investment must show rapid financial returns given the massive scale of digital payments and fraud losses, with Vikram citing concrete ROI metrics and Sanjay framing the broader economic risk [31][87-92].
Speakers: Vikram Sinha, Sanjay Kapoor
ROI visible within 6‑8 months of deployment Global $1 trn scam losses and $14 trn digital payments forecast
Both point out limitations of current AI‑based detection: Ratan notes that rule‑engine alerts can miss intent and cause friction, while Neha stresses that defensive AI cannot keep pace with offensive AI and lacks behavioral insight [198-208][240-242].
Speakers: Ratan Kumar Kesh, Neha Gutma Mahatme
AI detects anomalies but not malicious intent; need behavioral analysis Offensive AI evolves faster than defensive models
All three stress the necessity of a unified data‑sharing platform to enable real‑time fraud detection, highlighting existing initiatives but also gaps in integration [349-353][279-283][332-340].
Speakers: Anshuman Kar, Bipin Preet Singh, Audience
Call for coordinated intelligence across telcos, banks, fintech RBI’s Digital Payments Intelligence Authority for ecosystem data sharing Digital Payments Intelligence Platform already launched, but integration still needed
Both underline that fragmented defenses are insufficient and that multi‑stakeholder collaboration is critical to combat scams effectively [6][153][170-176].
Speakers: Wish Gurmukh Dev, Anshuman Kar
Cross‑industry collaboration emphasized as essential Call for coordinated intelligence across telcos, banks, fintech
Unexpected Consensus
Agreement on protecting senior citizens and other vulnerable groups across telecom and banking sectors.
Speakers: Vikram Sinha, Ratan Kumar Kesh, Anshuman Kar, Neha Gutma Mahatme
Middle‑income, lower‑income women, elderly women affected Senior citizens and account‑mule fraud across India Parents scared of fraud, especially senior citizens AI must address vulnerable populations in the behavioral journey
While telecom leaders typically focus on network and service issues, both Vikram (telco) and Ratan (bank) explicitly highlighted the impact of scams on senior citizens and low-income users, a concern more commonly raised by consumer-focused participants, indicating a cross-sector consensus on protecting vulnerable demographics [48-49][191-194][233-235][236-240].
POLICY CONTEXT (KNOWLEDGE BASE)
Regulatory dialogues have repeatedly called for sector-wide safeguards for seniors and other at-risk groups, linking financial services and telecom under shared consumer-protection mandates [S61][S62][S63][S64].
Recognition that existing regulatory data‑sharing initiatives (Digital Payments Intelligence Platform/Authority) are insufficient without broader ecosystem integration.
Speakers: Bipin Preet Singh, Audience, Anshuman Kar
RBI’s Digital Payments Intelligence Authority for ecosystem data sharing Digital Payments Intelligence Platform already launched, but integration still needed Call for coordinated intelligence across telcos, banks, fintech
Although the RBI initiative was presented as a solution, all three participants concurred that the platform alone does not achieve full integration, revealing an unexpected shared view that further coordinated effort is required [279-283][332-340][349-353].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of current data-sharing frameworks note their limited scope and stress the need for wider ecosystem participation, aligning with critiques of fragmented governance and calls for holistic data-governance models [S45][S50][S65][S66].
Overall Assessment

There is strong consensus that AI‑driven anti‑fraud solutions like Wisely.ai are already delivering real‑time protection and measurable business benefits, but their effectiveness depends on cross‑industry collaboration, integrated data sharing, and attention to vulnerable users. Participants across telecom, banking, fintech and regulatory domains align on the need for coordinated intelligence and rapid ROI, while also acknowledging challenges such as the AI arms race and privacy constraints.

High consensus on the importance of AI, collaboration, data sharing, and protecting vulnerable groups, with moderate consensus on the sufficiency of current regulatory mechanisms. This alignment suggests a favorable environment for joint initiatives, policy support, and investment in shared AI infrastructure to strengthen digital trust.

Differences
Different Viewpoints
Sufficiency of data sharing for fraud detection
Speakers: Neha Gutma Mahatme, Bipin Preet Singh, Audience
Limited external data hampers detection of social‑engineering cues RBI’s Digital Payments Intelligence Authority for ecosystem data sharing Digital Payments Intelligence Platform already launched, but integration still needed
Neha argues that Amazon lacks visibility into external social-engineering data, limiting fraud detection [236-244]. Bipin points to the RBI’s Digital Payments Intelligence Authority as a mechanism to enable ecosystem-wide data sharing [279-283]. The audience notes that a Digital Payments Intelligence Platform exists but questions whether its integration is sufficient [332-340]. The three positions reveal a disagreement on whether current data-sharing initiatives are adequate for effective fraud prevention.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates highlight that current data-sharing mechanisms fall short of providing the granularity and timeliness needed for effective fraud detection, echoing concerns raised about regulatory silos and the need for expanded, real-time data exchange [S45][S50][S56][S66].
AI’s ability to address malicious intent versus only detecting anomalies
Speakers: Neha Gutma Mahatme, Vikram Sinha
AI detects anomalies but not malicious intent; need behavioral analysis Indosat‑Tanla AI model reduces churn and lifts ARPU, indicating effective protection
Neha states that AI can spot statistical anomalies but cannot infer malicious intent, calling for deeper behavioral analysis [236-244]. Vikram, by contrast, presents the AI model as delivering tangible business benefits and protecting customers, implying that AI alone can solve the problem [66-70][87-92]. This creates a tension between viewing AI as a partial tool versus a comprehensive solution.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions differentiate between AI’s anomaly-detection capabilities and its potential to proactively counter malicious intent, with ethical frameworks urging deeper integration of intent-aware models in cybersecurity [S54][S47].
Impact of AI on customer experience – friction versus benefit
Speakers: Ratan Kumar Kesh, Vikram Sinha
Banks’ out‑of‑routine alerts risk false positives affecting CX Churn fell from 3.6 % to 1.6 %, indicating improved experience alongside security
Ratan warns that AI-driven out-of-routine transaction alerts can generate false positives, creating friction for legitimate users [198-208]. Vikram counters by highlighting a sharp churn reduction after AI deployment, interpreting it as evidence that security improvements have enhanced customer experience [91-92]. The two speakers disagree on whether AI implementation currently harms or helps the user journey.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholder analyses stress the balance between security and user friction, noting that AI-driven solutions can both reduce friction for legitimate users and inadvertently introduce new barriers if not carefully designed [S46][S58].
Who should own responsibility for national‑scale fraud protection
Speakers: Anshuman Kar, Ratan Kumar Kesh, Bipin Preet Singh
Call for coordinated intelligence across telcos, banks, fintech Law‑enforcement gaps make tracking scammers difficult RBI’s Digital Payments Intelligence Authority for ecosystem data sharing
Anshuman asks which entity ultimately owns the responsibility for protecting citizens at scale [310-315]. Ratan highlights fragmented law-enforcement capabilities and the difficulty of tracing fraudsters despite multiple stakeholders [316-322]. Bipin points to a regulatory solution via the RBI’s Digital Payments Intelligence Authority [279-283]. The speakers diverge on whether the lead should be regulatory, industry-driven, or a joint effort.
POLICY CONTEXT (KNOWLEDGE BASE)
International panels propose shared stewardship models, assigning roles to regulators, industry consortia and civil society to collectively manage national-level fraud risks [S56][S57][S65][S68].
Unexpected Differences
Cross‑industry collaboration versus siloed enforcement
Speakers: Wish Gurmukh Dev, Ratan Kumar Kesh
Cross‑industry collaboration emphasized as essential Law‑enforcement gaps make tracking scammers difficult
Wish repeatedly stresses that collaboration across customers, regulators, telcos and the broader ecosystem is vital for combating fraud [6][153]. Ratan, however, recounts concrete failures of police and regulatory coordination, highlighting fragmented enforcement that undermines collaborative goals [315-322]. The contrast between the aspirational call for partnership and the on-the-ground reality of siloed enforcement was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
A recurring theme in policy forums is the need to move from siloed enforcement to coordinated, cross-industry collaboration, as advocated in multiple IGF and WEF sessions calling for integrated governance structures [S65][S66][S67][S43].
Overall Assessment

The discussion reveals several substantive disagreements: the adequacy of current data‑sharing mechanisms, the limits of AI versus the need for behavioral insight, the trade‑off between security and customer friction, and the question of which stakeholder should lead national fraud protection. While all participants share the overarching goal of reducing fraud and building digital trust, they diverge on the most effective pathways to achieve it.

Moderate to high – the disagreements are not outright conflicts but reflect differing priorities, assumptions about technology efficacy, and views on institutional responsibility. These divergences suggest that achieving coordinated, effective anti‑fraud solutions will require clear policy frameworks, stronger data‑governance mechanisms, and balanced designs that address both security and user experience.

Partial Agreements
Both agree that fraud must be reduced and customer trust enhanced, but Vikram emphasizes a bilateral strategic partnership with Tanla as the solution, whereas Anshuman advocates for a broader, ecosystem‑wide coordinated intelligence framework [80-84][170-176].
Speakers: Vikram Sinha, Anshuman Kar
Indosat‑Tanla AI model reduces churn and lifts ARPU Call for coordinated intelligence across telcos, banks, fintech
Both recognize AI’s role in spotting irregular activity, yet Ratan focuses on rule‑engine implementation within banks, while Neha stresses that anomaly detection alone is insufficient without understanding intent and social‑engineering behavior [198-208][236-244].
Speakers: Ratan Kumar Kesh, Neha Gutma Mahatme
Bank rule‑engine flags out‑of‑routine transactions using AI AI detects anomalies but not malicious intent; need behavioral analysis
Both see a national data‑sharing platform as essential, but Bipin views the RBI authority as the forthcoming solution, while the audience questions whether the existing platform is already sufficient, indicating differing views on the stage of implementation [279-283][332-340].
Speakers: Bipin Preet Singh, Audience
RBI’s Digital Payments Intelligence Authority for ecosystem data sharing Digital Payments Intelligence Platform already launched, but integration still needed
Takeaways
Key takeaways
Digital fraud is massive and growing: $5 bn loss in Indonesia in 2024, 65 % of Indonesians face weekly spam/scam, global scam losses exceed $1 trn, and India’s Supreme Court highlighted ₹56 k cr lost to scams. AI‑driven platforms (Wisely.ai, Indosat‑Tanla AI model) can deliver real‑time protection, reduce churn, and boost ARPU, demonstrating measurable business impact. ROI from AI anti‑fraud solutions becomes visible within 6‑8 months, with examples such as 9 % ARPU growth vs 3 % industry average and $500 m loss prevented in six months. Effective fraud defence requires ecosystem collaboration and data sharing across telcos, banks, fintechs, regulators, and law‑enforcement; strategic partnerships (e.g., Indosat‑Tanla) are preferred over simple vendor relationships. Current AI models face limitations: offensive AI evolves faster than defensive models, external data on social‑engineering is scarce, privacy/regulatory constraints limit defensive AI, and generic models underperform without custom data. Balancing security with customer experience is critical; reduced churn indicates success, but false positives and friction remain concerns. Future technological directions include federated learning, edge AI, and large language models to protect rural users while keeping data local.
Resolutions and action items
Indosat commits to continue partnership with Tanla, co‑developing and training AI models on its GPU cluster for spam/scam detection. Tanla to support Indosat with full‑stack AI factory and real‑time threat intelligence (2 bn spam instances, 2.3 m scammers flagged). Panelists agree to pursue greater data sharing across the payments ecosystem; RBI’s Digital Payments Intelligence Authority to be leveraged for national‑scale intelligence. Explore implementation of federated learning and edge data‑centres to extend protection to rural/edge users (as outlined by BSNL’s A. Robert J. Ravi). Banks and fintechs to continue enhancing rule‑engine and out‑of‑routine transaction monitoring, and to consider integrating external threat feeds from telcos.
Unresolved issues
How to create a truly integrated, nation‑wide fraud‑prevention model that combines data from telcos, banks, fintechs, and regulators in real time. Mechanisms for overcoming data‑visibility gaps on social‑engineering cues outside proprietary platforms. Effective coordination with law‑enforcement to identify and prosecute scammers; current enforcement gaps remain. Balancing defensive AI constraints (privacy, regulation, CX) with the rapid evolution of offensive AI. Specific governance framework for sharing sensitive data while respecting privacy and regulatory limits.
Suggested compromises
Adopt a strategic partnership model (e.g., Indosat‑Tanla) rather than a pure vendor relationship to share risk, expertise, and data. Implement AI solutions that prioritize low‑friction user experience to reduce churn while still providing security (e.g., calibrated alerts, selective friction). Use federated learning to keep user data on‑device while still benefiting from collective model improvements, addressing privacy concerns. Combine centralized threat intelligence (RBI platform) with decentralized, industry‑specific models to balance comprehensive coverage and domain‑specific accuracy.
Thought Provoking Comments
In early 2024, the Global Anti‑Scam Association reported that $5 billion was lost by Indonesians, and 65 % of Indonesians face spam or scam on a weekly basis.
This stark data quantifies the human and economic impact of scams, turning an abstract risk into a concrete, board‑level business imperative.
It shifted the conversation from general concerns about fraud to urgent action, prompting Sanjay to ask how the issue was elevated to the board and leading Vikram to describe the strategic partnership with Tanla.
Speaker: Vikram Sinha
We didn’t want a vendor; we wanted a partner who could work with us and use AI to solve this real problem.
Highlights a strategic approach to technology adoption—prioritizing deep collaboration over transactional vendor relationships.
Guided the discussion toward the importance of ecosystem partnerships, influencing later panel members to stress data sharing and coordinated intelligence.
Speaker: Vikram Sinha
Our quarterly results show ARPU grew 9 % versus a 3 % industry average, and churn for serious‑base customers fell from 3.6 % to 1.6 % after deploying the AI solution.
Provides concrete ROI evidence linking AI‑driven fraud protection to financial performance, addressing the board’s typical focus on P&L impact.
Validated the business case for AI investment, prompting Sanjay and the audience to explore scalability and ROI, and set a benchmark for other panelists.
Speaker: Vikram Sinha
Scams are a behavioral journey that starts long before a payment; we lack visibility into that data, and human psychology evolves faster than our models—offensive AI is unconstrained while defensive AI faces privacy and regulatory limits.
Identifies fundamental limitations of current AI defenses and introduces the concept of an arms race between offensive and defensive AI.
Deepened the technical discussion, leading panelists to acknowledge the need for broader data sharing and more adaptive models, and set the stage for Bipin’s call for a national intelligence platform.
Speaker: Neha Gutma Mahatme
99 % of the scams our customers report are not money stolen from us but from other banks; without ecosystem‑wide data sharing, we can’t detect the patterns. The RBI’s Digital Payments Intelligence Authority could be the key.
Emphasizes that fraud is a systemic, cross‑institutional problem and that regulatory‑driven data collaboration is essential.
Shifted the conversation from individual company solutions to a policy and regulatory perspective, reinforcing Anshuman’s earlier question about integrated approaches.
Speaker: Bipin Preet Singh
Fraudsters rent bank accounts for a fee, turning ordinary customers into ‘mules’; this account‑rental model is a major, under‑addressed threat vector.
Introduces a novel fraud mechanism that goes beyond phishing, highlighting the need for new detection and prevention strategies.
Prompted the panel to consider broader ecosystem responsibilities and the importance of law‑enforcement coordination, echoed later by Ratan’s police anecdote.
Speaker: Ratan Kumar Kesh
Is the problem really getting better? Or is it getting worse? And why?
Serves as a pivotal framing question that moves the discussion from anecdotal evidence to a systematic analysis of trends and root causes.
Reoriented the panel’s focus toward evaluating the trajectory of fraud, leading each participant to contribute perspectives on technology, regulation, and consumer behavior.
Speaker: Anshuman Kar
We are building federated learning models so data stays with the user while we learn from it, enabling AI at the edge for rural customers without compromising privacy.
Introduces an advanced AI paradigm (federated learning) as a solution to data‑privacy concerns while extending protection to underserved areas.
Expanded the conversation beyond fraud detection to broader AI applications in telecom, highlighting future‑proofing strategies and influencing the closing synthesis about coordinated intelligence.
Speaker: A. Robert J. Ravi
Overall Assessment

The discussion was driven forward by a series of high‑impact statements that moved the dialogue from abstract concerns about digital fraud to concrete, data‑backed business outcomes and systemic solutions. Vikram’s loss statistics and ROI figures forced the board‑level urgency, while Neha’s articulation of AI’s limitations and Bipin’s call for ecosystem‑wide data sharing reframed the problem as a national, cross‑industry challenge. Ratan’s insight on account‑rental fraud and the moderator’s probing question created a turning point toward deeper analysis of underlying mechanisms. Finally, Ravi’s vision of federated learning pointed to innovative, privacy‑preserving pathways forward. Collectively, these comments reshaped the conversation, introduced new problem dimensions, aligned stakeholders on the need for coordinated intelligence, and set a forward‑looking agenda for AI‑driven trust in the digital economy.

Follow-up Questions
Is the existing government initiative (Digital Payments Intelligence Platform / RBI Mule Hunter) sufficient as an integrated model for fraud protection across the ecosystem?
The audience asked whether the current national‑level platform provides enough coverage, indicating a need to evaluate its effectiveness and possible gaps.
Speaker: Audience (question to panel)
Who ultimately owns responsibility for protecting citizens at national scale – can banks act alone, can RBI act alone, or is coordination with upstream/downstream signals (e.g., telecom) required?
Clarifying governance and accountability is essential for building a coherent, nation‑wide fraud‑prevention framework.
Speaker: Anshuman Kar (directed to Ratan Kumar Kesh)
Why are we not able to stop scams across the whole customer journey despite AI patterns in commerce and payments?
Understanding the gaps in end‑to‑end detection will help design more comprehensive anti‑fraud solutions.
Speaker: Anshuman Kar (to Neha Gutma Mahatme)
How should AI models be calibrated to balance fraud protection with customer‑experience friction?
Finding the optimal trade‑off between security and usability is critical for scaling AI‑driven fraud controls.
Speaker: Anshuman Kar (to Bipin Preet Singh)
How effective is the anti‑phishing tool Carix (and its DLT integration) in preventing scam SMS, and can it be scaled?
Assessing Carix’s performance and scalability will inform decisions on broader deployment.
Speaker: Ratan Kumar Kesh
What are the barriers and opportunities for real‑time data sharing between telecom operators and financial institutions to improve fraud detection?
Data silos limit detection capabilities; research is needed on technical, regulatory, and privacy challenges of cross‑industry data exchange.
Speaker: Bipin Preet Singh; also Neha Gutma Mahatme
How can defenders keep pace with offensive AI used by scammers, given constraints of privacy, regulation, and customer experience?
The arms race between offensive and defensive AI requires study of model adaptability, legal limits, and ethical considerations.
Speaker: Neha Gutma Mahatme
How can federated learning be implemented in rural edge data centers to protect customers while preserving data privacy?
Exploring federated learning could enable AI benefits without centralizing sensitive user data, especially in underserved regions.
Speaker: A. Robert J. Ravi
What is the long‑term impact of AI‑driven fraud prevention on key financial metrics (ARPU, churn, overall P&L) beyond the initial six‑to‑eight‑month horizon?
Understanding sustained ROI is vital for continued investment and board confidence.
Speaker: Sanjay Kapoor (to Vikram Sinha)
How can mule accounts (used to launder money across banks) be detected and mitigated more effectively?
Mule accounts represent a systemic risk; research into detection patterns and inter‑bank collaboration is needed.
Speaker: Ratan Kumar Kesh
What mechanisms are needed for international coordination of fraud detection, given that scammers operate across borders?
Cross‑border threats require harmonized standards, data sharing, and joint enforcement strategies.
Speaker: Anshuman Kar (implied)
How does model performance differ when trained on proprietary telco/fintech data versus industry‑wide datasets, and what are best practices for model sharing?
Evaluating the trade‑offs informs decisions on collaborative model development versus proprietary approaches.
Speaker: Bipin Preet Singh
What is the impact of customer education and awareness programs on reducing fraud incidence, especially among vulnerable populations?
Behavioral factors are highlighted as a root cause; studying education effectiveness can guide outreach strategies.
Speaker: Ratan Kumar Kesh
How can telecom signals (e.g., spam calls, WhatsApp messages) be integrated with financial fraud detection systems to create a unified defense?
Integrating communication‑channel data with payment‑channel analytics could close detection gaps and improve real‑time response.
Speaker: Anshuman Kar (to Ratan Kumar Kesh)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Transforming Agriculture_ AI for Resilient and Inclusive Food Systems

Transforming Agriculture_ AI for Resilient and Inclusive Food Systems

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel convened by the Netherlands, Indonesia and the OECD examined how artificial intelligence can make food systems more transparent, responsible and inclusive, bringing together government, industry, academia and international organisations [1-3][4-5].


The Dutch ambassador highlighted AI’s rapid development in agriculture, noting its potential to boost productivity, reduce environmental impact and strengthen climate resilience, and described the Netherlands’ strong AI ecosystem and precision-farming successes such as up to 90 % water savings and disease-control models [13-24][25-29][30-34]. He also stressed the Netherlands’ commitment to support low- and middle-income countries through ICT-agri collaborations, tailor-made solutions for smallholders, and an inclusive AI agenda that aligns with the summit’s “people, planet and progress” motto [35-38][40-46][47-50].


The OECD representative pointed out that volatile shocks-from droughts to conflicts-make resilience a global priority, and cited evidence that AI-enabled precision spraying can cut pesticide use by 30 % and computer-vision weed detection can halve herbicide application without yield loss [54-62][63-66]. She warned that adoption remains uneven, with a digital divide evident between countries such as Australia (96 % digital tool use) and Chile (12 %), and identified barriers including high costs, limited skills, fragmented data governance and lack of trust [68-71][72-76]. To address these gaps, the OECD is developing an AI policy toolkit and a digital-governance framework that promote transparency, explainability and responsible data sharing for farmers and regulators [80-88][91-93].


Indonesia’s speaker described the archipelagic challenges of uneven ICT infrastructure, talent distribution and climate risks, and outlined AI use cases such as soil-nutrient prediction, optimal fertilizer and water dosing, intelligent farming, weather forecasting and logistics optimisation across its 17 000 islands [150-166][167-176][177-185][186-192]. He presented a national AI roadmap built on seven pillars-regulation, ethics, investment, data, innovation, talent development and use cases-and a “quad-helix” governance model that engages government, industry, academia, media and communities to ensure no stakeholder is left behind [196-203].


The industry expert warned that AI is often applied indiscriminately, urging a problem-driven approach that first secures high-quality data, clear objectives and market pathways, and suggested establishing sector-specific centres of excellence to tackle food-waste and cold-chain inefficiencies [213-224][228-247][250-257]. A researcher highlighted three persistent obstacles-data scarcity, farmer mistrust and limited scalability-and illustrated projects such as the World Cereal mapping initiative and low-tech chatbot advisory services that aim to embed AI in smallholder contexts [267-277][278-286][295-303]. He emphasized that building robust data infrastructure and actively involving farmers are essential for AI models to be effective at the grassroots level [304-308].


The moderator concluded that the discussion underscored AI’s vast potential for resilient, inclusive food systems, but that real impact depends on problem-focused development, trustworthy data practices and coordinated public-private partnerships [309-317].


Keypoints

Major discussion points


AI as a catalyst for higher productivity, sustainability and climate-resilient agriculture – The Dutch ambassador highlighted that digitalisation and AI can “significantly increase food productivity and reduce food losses” and cited concrete use-cases such as “water savings of up to 90 % through smart irrigation, optimal crop yields with minimal input, and predictive models for disease control” [13-21][29-30]. The OECD representative reinforced these benefits, noting that “AI-enabled precision spraying has reduced pesticide use by up to 30 % … and AI is revolutionising plant breeding, shortening cycles and delivering climate-adaptive varieties” [60-62].


The need for inclusive AI and bridging the digital divide – FAO’s Dejan warned that “inclusiveness and the digital divide was still strong… if a farmer or communities are outside of the digital ecosystem, they suddenly are outside of any ecosystem” [112-118]. He also gave a positive example of an Indian phone-based advisory service that “lowers the entry barrier to knowledge” [120-124]. The OECD added that “farmers and regulators need transparency … but fragmented data-governance frameworks introduce complexity” and that “structural barriers including high cost, limited digital skills, and lack of trust” hinder uptake [73-76][68-72].


Indonesia’s specific AI challenges and its national roadmap – Professor Sumari described the country’s “17 000 islands, 36 % land, 64 % water” and the resulting “telecommunication … infrastructure gaps and unequal distribution of AI talent” [148-166]. He outlined a “seven-pillar AI roadmap” that combines horizontal AI governance with sector-specific rules, stresses a “quad/hex helix” ecosystem of government, industry, academia, media and communities, and stresses transparency, explainability and sustainability [190-203].


Public-private collaboration and the role of sector-focused centres of excellence – Debjani Ghosh argued that “we throw AI at every problem… we need to know exactly what we are solving for” and that “industry must align on a clear problem statement and have a route to market” [206-214][224-247]. She proposed “a centre of excellence … to solve specific problems such as cold-chain logistics or climate-resilient crops” to avoid duplicated pilots and to scale impact [252-258].


Practical barriers to deployment and examples of low-tech-friendly solutions – Dr. Pratihast identified three core obstacles: “data scarcity, trust, and scalability” [278-286]. He illustrated ongoing work such as the “World Cereal Project” for global crop mapping and a “chat-bot in local languages for cocoa farmers” that combines computer-vision advisory with low-tech connectivity [295-300][301-307].


Overall purpose / goal


The session was convened to bring together government, industry, academia and international organisations to examine how artificial intelligence can be harnessed to make food systems more transparent, responsible, resilient and inclusive, while identifying the concrete challenges-data sharing, governance, infrastructure and equitable access-that must be overcome to ensure AI benefits are broadly shared [1-4][52-55].


Tone of the discussion


The conversation began with a formal, optimistic tone, emphasizing partnership and the promise of AI [1][6-10]. As speakers progressed, the tone shifted to cautiously realistic, acknowledging significant gaps, digital exclusion and trust issues [112-118][73-76]. Throughout, the tone remained constructive and collaborative, with participants offering concrete examples, policy frameworks and calls for coordinated action rather than criticism [148-166][206-214][278-306].


Speakers

Sara Rendtorff Smith


– Expertise: International policy, AI governance, food systems


– Role/Title: Session moderator, representing the OECD


– Affiliation: OECD (moderator) [S13]


Harry Verweij


– Expertise: AI and digitalization in agriculture, food security


– Role/Title: (Representative of the Netherlands)


– Affiliation: Netherlands


Dejan Jakovljevic


– Expertise: Digital agriculture, data informatics, AI for food systems


– Role/Title: CIO and Director, Digitalization and Informatics Division


– Affiliation: Food and Agriculture Organization of the United Nations (FAO) [S7]


Arwin Datumaya Wahyudi Sumari


– Expertise: AI applications in agriculture, knowledge-based AI frameworks, AI policy


– Role/Title: Indonesian Air Force officer; Professor at the State Polytechnic of Malang; Co-inventor of the Knowledge Growing System


– Affiliation: State Polytechnic of Malang, Indonesia [S3]


Debjani Ghosh


– Expertise: Frontier technologies, AI architecture, policy for inclusive AI


– Role/Title: Distinguished Fellow; Chief Architect of NITI Frontier Tech Hub; Former role with NASCOM


– Affiliation: NITI Aayog, Government of India [S1][S2]


Arun Pratihast


– Expertise: AI research for low-tech farming environments, data scarcity, trust and scalability of AI solutions


– Role/Title: Senior Researcher


– Affiliation: Wageningen University Environmental Research [S11]


Speaker 5


– Expertise: –


– Role/Title: –


– Affiliation: –


Additional speakers:


His Excellency Ambassador Fawai – Ambassador-at-Large and Special Envoy for AI, Kingdom of the Netherlands (mentioned in opening remarks).


Madam Gorshan – Co-chair of the sixth working group on economic growth and social good (referenced by Harry Verweij).


Admiral Samari – Co-chair of the sixth working group on economic growth and social good (referenced by Harry Verweij).


Ms. Goss – Name appears in the transcript; role not specified.


Professor Ramesh Chand – Esteemed member of NITI Aayog, expert in agriculture (referenced by Debjani Ghosh).


Full session reportComprehensive analysis and detailed insights

Sara Rendtorff Smith opened the session, introducing a multi-stakeholder panel on AI for transparent, responsible and inclusive food systems [1-5]. She noted that the panel included representatives from government, industry, academia and international organisations, among them Prof Arwin Datumaya Wahyudi Sumari (who was introduced by the moderator as “Professor Arvind Sumari”), Dayan Jakoblevich – Director of the Digital FAO and Agro-informatics Division (FAO Chief Information Officer) – and other experts [1-5].


His Excellency Ambassador Harry Verweij of the Kingdom of the Netherlands then outlined the Dutch vision of AI as a catalyst for higher productivity, lower environmental impact and greater climate resilience in agriculture [13-24]. He cited precision-farming examples – smart irrigation that can save up to 90 % of water, AI-driven optimal yield models and predictive disease-control tools [25-34] – and stressed that, despite its small size, the Netherlands is a global agro-innovation hub, anchored by firms such as ASML, NXP and Philips [27-28]. The ambassador highlighted Dutch support for low- and middle-income countries through ICT-agri collaborations, co-creation of tailor-made solutions for smallholders and SMEs, and an inclusive AI agenda aligned with the summit’s motto “People, Planet and Progress” [35-46]. He thanked India for hosting the summit, referenced the Indian Prime Minister’s speech to underline the inclusive agenda, and reaffirmed Dutch readiness to help Indonesia pursue OECD accession [45-48][47-50].


Sara, speaking for the OECD, emphasized that today’s volatile shocks – droughts, floods, pests, conflicts and economic crises – make resilience a global priority [54-55]. She presented evidence that AI-enabled precision spraying can cut pesticide use by up to 30 % without yield loss and that computer-vision weed detection can halve herbicide application [60-61]. AI is also accelerating plant breeding, producing climate-adaptive varieties such as drought-tolerant sorghum and hybrid rice with yield gains of +25 % under end-season drought [62-66]. Additional benefits include improved supply-chain traceability, market transparency and smart logistics [66-68]. Adoption, however, is highly uneven (96 % of Australian farmers vs 12 % of Chilean farmers using digital tools) [70-71], with barriers that include high costs, limited digital skills, fragmented data-governance frameworks and trust deficits [72-76]. To address these gaps, the OECD is releasing an AI policy toolkit – built on the OECD AI Policy Navigator, covering more than 2,000 policies across 80 jurisdictions and publicly available at osd.ai [80-84]; this effort is complemented by work on digital governance in agriculture [86-88] and the “global AI impact comments” deliverable of the summit [86-88]. Embedding trustworthy-AI principles within an enabling ecosystem is part of the same OECD digital-governance work [88-94].


After the introductions, Dejan Jakovljevic (FAO, Director of the Digital FAO and Agro-informatics Division) set the scene by warning that “inclusiveness and the digital divide was still strong” and that farmers outside the digital ecosystem risk being left out of AI-driven solutions [112-118]. He showcased a low-tech phone-call advisory service from India that provides multilingual, real-time guidance on shrimp cultivation, pest and disease management, thereby lowering the entry barrier to AI-based knowledge [120-124]. Jakovljevic argued that anticipatory AI – early-warning tools, decision-support “situation rooms” and predictive analytics – is essential to protect the roughly 700 million people who still lack food security [127-136][137-139].


Prof Arwin Datumaya Wahyudi Sumari described Indonesia’s archipelagic challenges: 17 000 islands, a 36 % land / 64 % water split, exposure to the Ring of Fire, uneven ICT infrastructure, time-zone disparities and an unequal distribution of AI talent [148-166]. He outlined a suite of AI-driven use cases, including soil-nutrient prediction for new rice fields, optimisation of fertilizer and water dosing, “intelligent farming” that integrates sowing, growth monitoring and harvest logistics, short-term weather forecasting to prevent crop failures, and logistics optimisation that could reduce transport costs and price disparities between islands [167-192]. Indonesia’s national AI roadmap rests on seven pillars – regulation, ethics, investment, data, innovation, talent development and use-cases – and is governed by a “quad-helix” model that brings together government, industry, academia, media and communities to ensure no stakeholder is left behind [196-203].


Industry expert Debjani Ghosh cautioned against “throwing AI at every problem” and urged a problem-driven approach that first defines clear objectives, secures high-quality data and establishes market pathways [206-224]. She identified food-waste reduction – through smarter logistics, cold-chain management and real-time distribution – as a priority leverage point [228-242] and proposed the creation of sector-specific Centres of Excellence (e.g., for cold-chain optimisation or climate-resilient crops) to align industry, data and commercialisation routes [252-258].


Dr Arun Pratihast highlighted three persistent obstacles to AI impact at the grassroots level: data scarcity and poor sharing, farmer mistrust of AI recommendations, and limited scalability of solutions that work only in high-tech environments [267-286]. He illustrated these points with the World Cereal Project, which aims to map global crop areas but suffers from missing data from major producers, and with a multilingual chatbot for cocoa farmers that combines computer-vision disease detection with low-tech connectivity [295-307]. He argued that robust data infrastructure – treated as a core component of the AI ecosystem – and active farmer participation are essential for models to be effective and trustworthy [304-308].


In her closing remarks, Sara thanked the participants and summarised the key take-aways: AI can markedly increase productivity, reduce inputs (water - 90 %, pesticides - 30 %), and enhance climate resilience; anticipatory tools can help predict and mitigate shocks; yet adoption remains uneven because of digital exclusion, data gaps, trust deficits and scalability issues. Realising AI’s promise will require problem-focused development, transparent and explainable models, responsible data practices and coordinated public-private-multi-stakeholder partnerships – echoing the consensus that inclusive governance and capacity-building are indispensable [309-317][52-55].


Overall, the panel expressed strong agreement that AI holds great potential for more productive, sustainable and resilient agriculture. Different speakers emphasized complementary aspects – the Dutch ambassador on productivity and environmental impact, Ms Ghosh on waste reduction, and Dejan Jakovljevic on low-tech, anticipatory solutions – underscoring the need for blended approaches that combine advanced AI capabilities with low-tech delivery channels, robust multi-helix governance and targeted public-private mechanisms to bridge the digital divide and ensure that AI benefits are equitably shared.


Session transcriptComplete transcript of the session
Sara Rendtorff Smith

Session started. Thank you. the Netherlands, and Indonesia, as you’ll see reflected on the panel. And together with our distinguished panelists, we’ll explore how artificial intelligence can support the transition towards food systems that are more transparent, responsible, and inclusive. So this session is bringing together leaders from government, industry, academia, and international organizations to examine both opportunities and the practical challenges ahead from data sharing and infrastructure to governance frameworks and the partnerships needed to ensure that AI benefits are broadly shared. And before we begin the panel discussion, it’s my honor to invite His Excellency, Ambassador Fawai, Ambassador -at -Large and Special Envoy for AI of the Kingdom of the Netherlands, who will deliver welcome remarks. Welcome, Ambassador.

Harry Verweij

Thank you, Sarah. Is this working? Yeah. Thank you all for sharing this wonderful moment for me because we’re here with Madam Gorshan and Admiral Samari from Indonesia. Together we formed the chair and co -chair of the sixth working group on economic growth and social good in preparation for the summit. And I just wanted to say how much I was impressed with you, Madam Gorshan, how you managed the working group and how the outcomes were drafted and delivered, especially also delivered in the plenary. It’s not up to me, but I say well done. Really great. But thank you very much. It was really a wonderful journey with you. So, ladies and gentlemen, the use of digitalization and artificial intelligence in agriculture is developing rapidly.

It offers enormous opportunities to increase the productivity and sustainability of local food production. It offers opportunities to improve nature conservation and to foster a sustainable foster climate resilience in an inclusive and sustainable way. When this is all – when this – it also contributes to the autonomy and stability of countries. For the Netherlands, strengthening global food security is a strategic priority. Reliable, sustainable, and affordable food systems are essential for societal stability, economic development, and particularly in vulnerable regions. The ambitions in our digitalization agenda for agriculture, nature conservation, and food are to connect digitalization to the transition of agriculture needed for more food security, reduction of environmental impact, and climate resilience via public and private investments. Our primary focus on increasing productivity with lower environmental impact and improving climate adaptation, strengthening the resilience of food systems through response.

use of AI and digital technologies. Concerning today’s topic, the Dutch ambition is to enhance food security by making food systems more resilient and sustainable for all stakeholders. In my vision, digitalization and AI are powerful tools for that. They have already proven that they can significantly increase food productivity and reduce food losses. In addition, AI solutions can enhance the efficiency and resilience of food systems by supporting farmers to respond to sustainability requirements, make risk assessments, implement sustainable farming practices, and enable them to provide trustworthy and quality data sets about those efforts to be shared throughout the supply chain. The Netherlands has a strong AI ecosystem. Thanks to our technical universities and partners, we have a strong ecosystem of AI and companies like ASML, NXP, and Philips.

Despite its relatively small size, the Netherlands is not only a huge trader in agricultural produce, but also a global key player in agro -innovation and technology development due to the interaction between plant and animal science and technological knowledge systems in the Netherlands. Companies, science and government invest mutually in solutions for societal challenges. Examples include precision farming with AI, such as water savings of up to 90 % through smart irrigation, optimal crop yields with minimal input, and predictive models for disease control. To support digitalization in the agricultural sector in low – and middle -income countries, the Netherlands facilitates Dutch ICT agribusinesses to collaborate with businesses and startups there. And as you are… We are aware in the Netherlands that strong ICT ecosystems and highly innovative agricultural ecosystems come together.

ICT agricultural solutions combine the in -depth agricultural knowledge and advanced technology development in my country. Examples are applications for early warning of pests and diseases, optimization of water use and optimized plant breeding processes. Dutch companies and knowledge institutions are open to co -work on tailor -made solutions. Every country has its own typical local challenges and requires tailor -made solutions. Today special attention will be drawn to AI -powered solutions for small farmers and SMEs in producing countries in order to enhance their access to global agricultural supply chains while protecting their data. Our goal is to improve the ICT ecosystem and improve the ICT ecosystem in our country. We are committed to work together on this through knowledge sharing, co -operation and collaboration.

creation and capacity building so that AI solutions are locally relevant, inclusive and accessible to farmers. The need for an inclusive AI has also been central to our discussions in the working group of the Economic Growth and Social Group leading up to the summit. It fits well the summit motto, people, planet and progress. So I would like to thank India for its leadership in focusing on an inclusive AI future and underline that the Netherlands stands ready to contribute by forging concrete partnerships, sharing knowledge and technology while striving for measurable results in order to ensure that AI serves all of humanity. And I recall the Honourable Prime Minister’s speech in Flendry to which he alluded as well.

Ladies and gentlemen, we are honored to organize this important event together with the OECD, the go -to organization when it comes to AI governance, and to discuss the opportunities for international knowledge sharing and cooperation with FAO, the Wageningen University in the Netherlands, and the distinguished co -chairs of the Working Group on Economic Growth and Social Growth, India and Indonesia. We warmly thank India for hosting this summit and look forward to continuing and strengthening our cooperation in the field of AI and agriculture, both bilaterally and within the global partnership on AI. We also thank our co -chair Indonesia for continuing cooperation and we would like to highlight our appreciation and firm support of Indonesia’s ambition to join the OECD and its commitment to global standards and evidence -based policymaking.

International knowledge sharing and cooperation is needed to accelerate the development and application of new technologies. With the help of trustworthy AI. Having AI. And agricultural ecosystems on the agenda in this important AI summit is extremely valuable and a. forward in order to make a positive impact for all stakeholders. I wish you a fruitful meeting and look forward to our conclusions, and thank you for this opportunity to listen. So the floor is now Sarah.

Sara Rendtorff Smith

Thank you, Ambassador. And on behalf of the OECD, I just want to thank once again the Netherlands for the leadership in convening this timely discussion. And as was just reflected in the Ambassador’s remarks, the Netherlands is obviously a pioneer in advancing food and agriculture innovation, and we are so delighted to have them as co -chairs as well of the OECD FAO Advisory Group on Responsible Agricultural Supply Chains. From the OECD’s perspective, we clearly see this dynamic of agriculture and food systems today operating in an increasingly volatile environment, and farmers face a wide variety of shocks, from droughts, floods, pests, to conflicts and economic crises. With growing frequency and severe… and so therefore strengthening resilience while also ensuring inclusion, as was also stressed by Ambassador Federe, is really an urgent global priority that I hope we can talk about today.

AI in this regard offers significant potential. We’re seeing AI systems and tools being applied to optimize the use of critical resources, as was already mentioned, such as water, fertilizer, and pesticides, and also to reduce environmental pressure while enhancing productivity. The OECD and JPEI, which also met today in a ministerial session, have been examining AI use cases in agriculture with a focus on the EU and on Southeast Asia, and we continue these dialogues. And what we’re seeing there is that the evidence from real -world deployment is really, really promising. So, for example, AI -enabled precision spraying has reduced pesticide use by up to 30 percent, and this is actually without compromising yield. while computer vision green on brown systems can cut herbicide used by up to half by targeting only the weeds that require the treatment and thus not the crops.

And in addition, we’re seeing how forecasting, monitoring, and early detection of climatic and biological threads means that AI systems can strengthen our capacity to respond to crises before they even escalate, so some degree of preemption. AI is also revolutionizing agricultural innovation itself and supporting more efficient plant breeding that can develop climate -adaptive variety in a fraction of the traditional time. And here we also have some interesting data seeing in Central Europe that researchers have identified drought -tolerant traits in crops such as sorghum and chickpea that boost yields by up to 25 % during end -season drought. And in Asia, meanwhile, we’re also seeing global AI hybrid rice platform demonstrating how AI can shorten breeding cycles by predicting optimal parent combinations and enhancing resilience in one of the world’s most vital staple crops.

Beyond the farm gate, AI is also reinforcing the resilience of our entire food supply chains. And AI -enabled traceability, market transparency, and smart logistics can reduce losses, improve compliance, and strengthen food safety systems. Evidence from these digital traceability initiatives across the OECD members demonstrates a growing maturity of exactly these systems, so something really to look out for. But technology alone, as we know, does not ensure impact, and so adoption is where we’re really looking now, and that remains quite uneven still. And this is obviously why we’re all here in Delhi. So while we’re seeing in Australia that 96 % of farmers are using digital tools, the same number for Chile is just 12%. And this is highlighting a digital divide that could deepen existing inequalities if we don’t look to address it.

There’s also important challenges in the use of AI, and this goes back to sort of the core work of the OECD, looking not just at the benefits but also the challenges associated with AI. Farmers and regulators need transparency in how AI systems make their decisions, but at the same time fragmented data governance frameworks introduce complexity to the use of AI tools that support the trade, traceability, and resilient food supply chains across the border. And this highlights the need for greater interoperability, which is also a theme at this summit. So structural barriers including high cost, limited digital skills, and lack of trust. These are some of the things that continue to slow the uptake of AI.

So bridging these gaps, which should be a priority for all of us, requires investment in connectivity and other digital infrastructure, in skills and affordable solutions. So smallholders, women, farmers in remote areas who play a critical role in enhancing global food security, they’re able to also benefit from AI’s potential. And farmers must be able that their data is collected, shared, and used responsibly. So in this area, the OECD is working to help countries put in place policies that promote these objectives through an AI policy toolkit. And this toolkit will provide practical, context -specific guidance to countries. The toolkit builds on our policy navigator. If you haven’t already visited it, it’s on osd .ai. And it so far covers more than 2 ,000 policies across 80 jurisdictions.

So this is where you can find examples. Examples of national AI strategies, but also in specific sectors. And we continue to update this, and for anyone in this room representing a country not represented, we encourage you to visit and to also contribute your policies. We’re also advancing work on digital governance in agriculture. This is within GPAY that I mentioned earlier, a priority there, where we examine governance models across countries and their applications for responsible digital transformation more broadly. We also see strong complementarities with the global AI impact comments, which is a key deliverable of this summit, and which shares concrete use cases of AI with known impact and scaling potential. So for the OECD advancing trustworthy AI consistent with our OECD AI principles requires a strong enabling ecosystem alongside technological progress.

And what we’re seeing is that if we succeed, we’re really in a position to raise productivity. sustainably and also strengthen resilience in agricultural supply chains, including by ensuring that the benefits of innovation are widely shared and existing divides are not deepened in the process. So I really look forward to this panel’s insights to help us take this conversation forward, looking at practical pathways to achieve this vision. And with this, it’s my pleasure to introduce our esteemed panel. Many have traveled far to be here. So first, I would like to introduce Professor Arvind Sumari, who is an Indonesian Air Force officer and professor at the State Polytechnic of Malang. Welcome. And also we have with us, next to Professor Sumari, we have Mr.

Dayan Jakoblevich. He’s Chief Information Officer and Director of Digital FAO and Agroinformatics Division at FAO of the United Nations, based in Rome. We also have with us… We have with us today the pleasure of having Debjani Ghosh, Ms. Debjani Ghosh. Distinguished Fellow and Chief Architect of NITI Frontier Tech Hub. And finally, it’s my pleasure to introduce Dr. Arun Pratihast, Senior Researcher at Wageningen University Environmental Research. So welcome to this session. And what we will see today is each of our speakers bringing a unique perspective on how AI can help build food systems that are resilient and inclusive, which is the topic of the session. And after the panel discussion, I will also be giving the floor to anyone in the room who might have questions.

So now let’s begin. I’ll hand the floor over to Dan, who will set the scene for the conversation. Dan, you have the floor.

Dejan Jakovljevic

Thank you very much. And I would like to welcome everyone on behalf of the Food and Agriculture Organization. I thank you to our hosts here. The summit from India, but also ECD and. government of the Netherlands ambassador thank you when we look at agri -food I heard in the interventions before about the agriculture and the food we look at agri -food systems from the FAO perspective why because the food itself as if we look at the agriculture food is one product but not only one so there is a whole ecosystem behind agriculture of products that are not necessarily food and they are equally important when we make considerations when we look at for example at the water use transport and many others so in from agri -food systems perspective AI brings us fantastic opportunities and if we look at our topic today in terms of inclusiveness and resilience and inclusion and inclusion and inclusion and resilience and inclusion and resilience and inclusion and inclusion and inclusion and inclusiveness and resilience and resilience and resilience inclusiveness is still a big issue if we just think back back maybe two, three years before the, let’s say, chat GPT came out, the inclusiveness and the digital divide was still strong and present.

And the key issue is that it used to be possible to exist outside of the digital ecosystem. We all know we could maybe go to the bank, but nowadays it’s not. So if a farmer or communities are outside of the digital ecosystem, they suddenly are outside of any ecosystem almost. And now with the AI, it makes it even worse. So this is something we need to continue to press on and jointly in making sure that everybody has equal opportunity within the digital ecosystems. And on the positive, let’s say, note, on the positive, let’s say, note, on the AI when it comes to inclusiveness. We see very encouraging opportunities with AI. What I mean by that is we can, in fact, lower the entry barrier to knowledge.

Just two days ago, I’ve seen here actually this opportunity at the event, great advancements, the new tool that was produced by government of India where farmers can, with a phone call, as not everybody has a smartphone, can get advisory in the area of agriculture, from shrimp cultivation to pest diseases and similar. So this is great. The service can be in many languages. So this is a fantastic opportunity example where AI can help us actually lower the entry point to the AI. In the same time, for governments, it’s even more so difficult. to have the capacity to build the AI infrastructure to provide such services. So this is, again, I think one area, and forums like this help us consider what it takes to build it.

When we look at the resilience specifically, I was very happy to hear in the previous openings you mentioned resilience in terms of, Jeff and from Ambassador, we heard on anticipation. So I would say this is the key word. The key word is anticipation. So anticipate the shocks to the agri -food systems that impacts the food security. We know we have natural disasters. We know we have also conflicts. We have many different factors that impact agri -food systems. So building the systems that are capable of absorbing the shocks of these situations and anticipating. Anticipatory actions to when the shocks happen, what can be done to kind of. go over these shocks. So this is where AI can be a great enabler, where we can then, with new capabilities, anticipate these shocks, and with the help of data and our joint work, really, put together decision -making tools, anticipatory tools, situation rooms, to be able to quickly not only anticipate, but when something happens, we don’t really improvise, but we have tools in hand to address these situations.

We still have about 700 million people without food on the table today. So from this perspective at FAO, and I’m sure we shared the same sense of urgency to actually do something. So I wanted to say from this perspective, we are very grateful to be part of this conversation and thank you for your time. And we can work together in finding the new solution. So I thank you for that. and I’m looking forward to our panel. Thank you.

Sara Rendtorff Smith

intelligence research group and are the co -inventor of the Knowledge Growing System, a cognitive artificial intelligence framework designed to enable adaptive and evolving decision making. So from Indonesia’s vantage point, we’d be interested to hear where you see the most significant AI capability gaps across the agricultural system and where you see the greatest opportunities at the same time for AI to make food supply chains more efficient and resilient, something we also heard as a priority. And we also know that Indonesia is one of the countries advancing an ambitious AI agenda. So if you could briefly outline also the key pillars of Indonesia’s AI roadmap, this is of interest and to explain how you are balancing horizontal AI governance with more sector -specific regulation in agriculture.

Over to you. Thank you.

Arwin Datumaya Wahyudi Sumari

Thank you, Sarah. First, I would like to deliver my appreciation and congratulations to the host, India, and also my chair, Ms. Goss, and also my dear colleagues from the land ambassador harry first letter for coaching our working group together and also other speakers and Sarah thank you and our audience regarding your question about the artificial intelligence for Indonesia as we already know together that Indonesia is not only the agriculture but also maritime nation we we were self -sufficient in in rice about 20 30 years ago and then it wasn’t a I for making our country had sufficient in in rice but nobody I is something that that can make our program to be to become a self -sufficient country in right can be achieved.

We are much aware that the ideology is developing very fast, not only in America or Europe, but also in Asia, especially in Indonesia. This rapid and democratic application across all agricultural potential areas presents significant challenges, especially given the potential location which are separated by ocean. And you already know that Indonesia has 17 ,000 islands separated by ocean. We only have 36 % of land, 64 % of water, and 100 % of air. And this is a challenge for us. If you don’t believe me, you can count the numbers of our islands. And this is a challenge for us. And we also have another challenge. We are living above the ring of fire. There are also other challenges for our people of Indonesia. And as I mentioned previously, this gap is further widened by lack of democratically supporting AI infrastructure, such as telecommunication.

We have three different times region, the west region, center region, and eastern region. And each one has different one hour, one to another. And also, there is a problem with unequal distribution of AI talent. I think the problem is not only in Indonesia, but also all over the world. In terms of the biggest opportunities for utilization. AI in the food supply chain, especially in agriculture country like Indonesia. efforts to do such as like we can use AI for prediction of soil condition and nutrition before opening new land for agriculture. Our president has a program to open almost 1 million hectares of new rice files. 1 ,000 hectares in some big island of Indonesia in order to get the safe efficiency in the next five years.

And then we also use the AI for prediction of the most appropriate food crops given the soil condition and nutrition of existing agricultural land. We have seven dozen islands and each island has different soil condition, different soil nutrition. And you can use AI to predict what kind of nutrition, what kind of soil condition, what kind of vitamin that belongs to that soil. So we can predict the proper crops, the proper plants that have to be planted in that area. The second one about optimizing the most optimal fertilizer content to produce the best harvest result as well as optimizing the volume of water required according to the type of fertilizer given. Some of my students, they did some experiment how to predict the percentage of fertilizer combined together to get the most optimum production of any kind of crops.

Even if it is corn, rice, or sweet potato. And then we also can use AI for intelligent farming. We don’t say smart farming. Smart is not really intelligent. Intelligent is different. There is knowledge that has to be grown in the system. So intelligent farming is just like a human. They grow their knowledge within their brain. By optimizing the seed planting in the land so that plants can grow and develop healthily to produce the best products to optimization of the harvest process until delivery to logistic warehouses. So it’s just like end -to -end mechanism. And then we also can predict the weather dynamics just as a short step of the flood and something like that. So we can predict the weather dynamics to obtain the right conditions.

So that’s the vision for planting seed and reducing the level of crop failures. The crop failures that… This often happens if the farmer, they fail to predict what kind of pest, what kind of, what type of the soil and everything. And then the last one, optimizing the logistic transportation route to reduce the operational and other unnecessary costs. You can count how much operational costs to deliver the crop production from one island to another island in Indonesia. The price in the eastern area can be double or triple times in eastern area. So if we buy rice in eastern area only $1, it can be $3. $5, $6 in. eastern area. So that’s why we need AI to optimize the transportation and logistic transportation routes.

Whether it is from water, from the ocean or sea, and also from the air. Regarding the policy and regulation, you asked about the air roadmap, right? And then about how to balance the horizontal AI government with sector -specific agriculture, right? Yeah, we are proud. UNESA is proud to be a leader in our region, exploring how AI policy and regulation can be powerful tools for promoting trustworthy AI, especially in critical verticals like the agricultural sector. This one. Agriculture is very important to UNESA because most of the people in Indonesia, they are farmers not only in Java Island but also in other big islands in Indonesia if you see, there are five big islands in Indonesia from western area like Sumatra and then Java and the southern area we have Borneo in the central, also Sulawesi, or Celebes and the biggest one in the eastern area is Papua Island still have so much area that can be explored to become a rice field our national AI roadmap is not merely a technological blueprint it is a strategic framework designed to create an ecosystem that harnesses AI for inclusive and resilient system, including food system, so there are two keywords in here inclusive and resilient inclusive means it must be transparent AI must be transparent, AI must be explainable.

We’ve been having problems with the neural network -based system that the black box cannot be explained in plain. And then the second was Sicilian. This is very important for agricultural -based nation. So the implementation of AI needs a strong and sustainable national ecosystem, like my dear colleague, Ambassador, first of all mentioned about ecosystem. The AI cannot be implemented, cannot be applied without a strong and sustainable ecosystem that collaborate all stakeholders, not only government, but also business, industries, communities, media, and also academia. so we have a concept of helix maybe you ever heard about quad helix, five helix, six helix that’s very important so when we are developing the ANS roadmap the government in this case Ministry of Digital Information and Communication and Digital Affairs is open a voluntary contribution from all stakeholders not only the government but also from industry, academia media and communities so our roadmap has seven pillars that include AI regulation AI ethics, that’s important the third one is investment like it was mentioned before about financing when I was working the attending the US forum in AI export they mentioned about financing financing is very important, without that there is no AI ecosystem financing and investment and then the third AI data, the fifth one AI innovation and then the next one AI talent development the last one is AI use case so because we embrace all stakeholders so we assure there is no one left behind.

Thank you.

Sara Rendtorff Smith

Thank you very much professor and we can come back to those in more detail later perhaps in the Q &A but I really want to thank you for sharing the promising use cases from Indonesia, very instructive I think for this discussion and now I would like to turn over you talked about the helix and how we work together to have the industry perspective from Ms. Ghosh India as we mentioned also co -chairs the summit working group and so I’d be interested to hear now that we’re seeing AI as quickly becoming foundational to agricultural productivity and food security but the big question now is whether as we mentioned it will deepen inequalities or indeed democratize the opportunity so from your vantage point Ms.

Ghosh what practical steps are needed to broaden access to AI capabilities so that emerging economies and smallholder farmers can also benefit and fully participate and as adoption accelerates hopefully broadly how should public -private partnerships evolve to scale responsible AI deployment and prevent the AI divide? Thank you.

Debjani Ghosh

It’s a very long answer question. I’ll try and keep my answer very short. But before I do that I have to acknowledge the presence of Yeah, okay. But before I do that I have to acknowledge the presence of I think one of the The biggest experts in this field of agriculture in this room, Professor Ramesh Chand, who’s also a very esteemed member of NITI Aayog. And I requested him not to come for this session. I’m going to be too nervous if you’re going to be sitting right in front of me. But yeah, let’s see if we live up to his expectations or not. You know, the biggest problem with AI today is that we throw AI at every problem that exists.

And we expect that something will happen out of it. Right. And as a result, we generalize the technology a bit too much. See, the thing with AI is if you really want to unlock the technology, you have to know what exactly are you solving for? What problems? And then you have to go deep because there are so much that has to come together for AI to work. For example, is the data in place? How good is the quality? Is the ecosystem in place? Are capabilities in place? So AI requires investments. And AI is a pretty deep investment overall. Right. So it’s very important to understand what problems do you want to solve with AI. And I think that’s one of the biggest issues today because we are not taking the time to think through it.

We keep saying AI is the magical world for everything. Right. So now let’s look at the food system. And I hope I’m correct. Professor Chan, I’ve learned this a bit from you also. But I think the biggest issue today is while the world is producing enough food to feed, I think, 8 billion people. But there are still millions and millions. Who are hungry. So there’s a paradox. And I think when you start breaking it down further to understand the exact problems as to why this exists. distribution? The entire access to food, do you have access to food so there is surplus and there is deficiency and then you don’t have a bridge to ensure that there is distribution happening at real time that is needed.

And what this results in is tremendous amount of food shortage, food wastage. And some of the culprits when we think of it, of course geopolitical wars are a big culprit, conflicts are a big culprit but climate is another big culprit. So this is how you sort of at least how I, because I look at everything from a tech lens, I’m by no means an expert in the domain but when I look at it from a technology lens and I say how do I best apply the technology to this problem, this is the domain that we have to play with. So now when you look at it if I have to say where do I want to go deep the problem to solve for at least when I look at all of this is the biggest problem to suffer in the food supply chain according to me right now just purely looking at it from a tech lens is the wastage.

How do I bring down food wastage? What role can AI play to bring down food wastage? So then you start looking at logistics, you start looking at supply, the cold chains that exist globally or not. You start looking at trade, you start looking at geopolitical agreements because all of that will come into mind. Now in terms of industry coming together to solve for AI again if you want the best out of industry you have to ensure that there is alignment on the problem statement you want to solve. Otherwise everyone will come and everyone will do the same pilot everywhere. That’s what’s happening today. When you look at AI executions around India and around the world, and because of the AI commons that we have built, every country is trying out the same thing, farmer advisory, right?

Every country is trying it out, but why is it not scaling? Why are we not solving for other problems? So again, it’s very important to identify what is the problem statement? How do you ensure that when industry gets involved, there is a route to market? And there is a route to commercialization because that becomes very important for industry. And one of the things that we advocate is coming up with maybe a center of excellence, a center of innovation that is identified to solve specific problems. I think one of the problems today we have with COEs are you have AI COEs, you have blockchain COEs. I really don’t understand what that means. But what if we had a COE to say that how do we ensure that the cold chain problem is solved across the country?

How do we have a COE that ensures that climate resilient crops in XYZ areas can be grown, right? And then bringing the industry together to say that how do we collaborate to create, I think gives you the right kind of outcomes. Thank you

Sara Rendtorff Smith

very much, Mishkosh. And this is a perfect segue, I think, to our next speaker, turning to the research community and how to really bridge research into advanced AI to more practical tools. So, Dr. Pratihast, I would like to turn to you now for, you know, some examples of, you know, how these advanced AI tools can really be made to good use in more low -tech farming environments. And maybe you can give us some concrete examples, what distinguishes those who succeed from those who don’t. Maybe speaking also to some of the points that Mishkosh raised. Thank you. Thank

Arun Pratihast

you. Thank you for invitation. It’s very timely discussion. And of course, always when we talk about AI, we often talk about the technology, how fast the model are, how big the data set they can handle, what are the parameters. That we always talk. But if you think about the food system, and of course, Terry mentioned that, you know, food system have different layers. And bottom of this layer is basically a smallholder farmers. And that farmer operate in a different environment. If you look at it last year, there’s billions of euro investment has been done in the tech industry to build more models. Is the same thing happen to the smallholder farmers? No. So there is often there is a problem that what we want to solve in the server room or computer, it doesn’t work in the field.

Right. So we. Really need to think how. the AI or model which we are really developing that is applicable to the grassroots level. And so within the Wakeningen and personally, I have been working in Asia, Africa and Latin America. And one of the problems, basically, there is three problems we are facing in this whole AI domain nowadays. First is really data scarcity. Still, there is not enough data. The data is not shared. As you mentioned, there is no ecosystem. There is no fair infrastructure where data can be shared. And that hinders the model. The model works on a global scale, but when you want to work on the local scale, it doesn’t work. It doesn’t provide the input that is expected for the smallholder farmers.

Second is the trust. Often, the farmers don’t own, and then, of course, the… the model and the farmer’s expectation is different and then there’s often not much trust how to apply this in the local level. That’s why most of this advisory is failed. Farmer doesn’t follow the advisory because it doesn’t make sense. And then third thing is scalability. Often we think that scale is not only the technical scale. Like you process something fast doesn’t mean that it can apply the same way. So we need to really think differently. And that’s why like we started I give a couple of three concrete examples. One example about food security. We need to understand what is the map.

Where are the crops? There is no global map that is accurate enough. So with the help of European Space Agency four years ago we started the World Cereal Project where we try to map the global crop length. So we started the World Cereal Project the World Cereal Project and we started still the maps are not perfect because India, China, many countries they don’t share their data so there is no data and if there is no data we have fantastic model we have built very nice geo -embedding with NASA harvest but applicability of this model is still very low in this country second thing is about high tech solution in low tech environment for example chocolate industry cocoa, agroforestry is really suffering from the climate change and we have established many advisory services but not from the researcher or tech perspective but engaging farmer perspective and that works we build basically chatbot with their language that really understand what they need and how we can translate their problem they know which disease are coming so we are using computer vision from their lens and then we are training and that works So there are a couple of things which we really see that if you really want to make these things working, you need to make sure that these solutions should work in low -tech environment.

Most of the things, connectivity has gone up. People are on social media, but still data is not there. Data infrastructure is not there. And always tech industry or like we as a modeler, whatever we call. So we see always data as the input and output. Data should be as the infrastructure. We should engage farmers in that infrastructure. And then only we can achieve the

Sara Rendtorff Smith

Thank you very much. And I think with this, unfortunately, we’re coming to a close on time. I think maybe the speakers can be kind enough to stay a little bit after if there are questions. We won’t have much time for Q &A. but just to thank you all for really providing a diverse set of perspectives for the timely discussions to the ambassador of the Netherlands for framing this important discussion and I think some of the key takeaways perhaps is that there is vast potential and we saw the Indonesian perspective of all these very concrete examples also Dejan talking about potential for anticipatory action and we heard about this global and even domestic paradox of food insecurity when there really is enough food but it may not be distributed enough or properly and also I think importantly that to have impact with AI we need to make sure that it is problem driven that it is driven by the local context and the farmers who need to use it and maybe lastly a very important point which is exactly core to the work we do at the OECD that to drive this adoption we also need to ensure that there is trust in what is produced.

And this requires, obviously, a number of factors, such as explainability and transparency and so on, and also responsible data collection. But just with that, let me thank the panelists for their rich inputs. Please do stick around a little bit for some questions, maybe in the margin. And thanks again to the Kingdom of the Netherlands for co -hosting this event with the OECD. Thank you.

Speaker 5

Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (17)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“Sara Rendtorff Smith opened the session, introducing a multi‑stakeholder panel on AI for transparent, responsible and inclusive food systems.”

The knowledge base lists Sara Rendtorff Smith as the session moderator representing the OECD, confirming her role in opening the panel [S3].

Additional Contextmedium

“The Netherlands has a strong ICT ecosystem combined with an innovative agricultural ecosystem, making it a global agro‑innovation hub anchored by firms such as ASML, NXP and Philips.”

Source S12 describes the Netherlands’ strong ICT and highly innovative agricultural ecosystems, supporting the claim of a Dutch agro-innovation hub, though it does not name specific firms [S12].

Additional Contextmedium

“AI‑enabled precision spraying can cut pesticide use by up to 30 % without yield loss.”

An autonomous spraying robot reported in S100 can reduce pesticide use by up to 95%, providing additional context that AI-driven spraying can achieve reductions even greater than the 30% cited [S100].

Additional Contextlow

“Smart irrigation can save up to 90 % of water.”

S31 discusses precision-agriculture techniques that optimise water use, confirming that AI-based irrigation can dramatically reduce water consumption, though it does not specify the 90% figure [S31].

Confirmedmedium

“AI‑driven tools such as remote sensing, drones and predictive analytics enhance precision agriculture practices.”

Source S22 lists remote sensing, drones, and predictive analytics as AI-powered tools that improve precision agriculture, confirming the claim [S22].

External Sources (101)
S1
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Debjani Ghosh- Distinguished Fellow at NITI Aayog, former role with NASCOM
S2
Panel Discussion: 01 — -Debjani Ghosh- Distinguished Fellow, Niti Aayog (role: moderating the ministerial conversation)
S3
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — -Arwin Datumaya Wahyudi Sumari: Indonesian Air Force officer and professor at the State Polytechnic of Malang, co-invent…
S4
Knowledge Café: Youth building the digital future – WSIS+20 Review and Beyond 2025 — – **Speaker 5** – Role/expertise not specified Speaker 5: Sure. So what we talked about as a group is we discussed this…
S6
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued) – session 5 — The Chair’s instrumental role in facilitating consensus-centric discussions has been recognised with gratitude by South …
S7
AI for food systems — – **Dejan Jakovljevic**: CIO and Director, Digitalization and Informatics Division, Food and Agriculture Organization of…
S8
WSIS prepares for Geneva as momentum builds for impactful digital governance — As preparations intensify for the World Summit on the Information Society (WSIS+20) high-level event, scheduled for 7–11…
S9
Open Forum #18 Digital Cooperation for Development Ungis in Action — Dejan Jakovljevic: Yes, of course. Before I mentioned the project, first of all, thank you for inviting me also to the s…
S10
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — – Harry Verweij- Arwin Datumaya Wahyudi Sumari- Sara Rendtorff Smith – Harry Verweij- Dejan Jakovljevic- Sara Rendtorff…
S11
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — -Arun Pratihast: Senior Researcher at Wageningen University Environmental Research -Speaker 5: Role/title not mentioned
S12
https://dig.watch/event/india-ai-impact-summit-2026/transforming-agriculture_-ai-for-resilient-and-inclusive-food-systems — He’s Chief Information Officer and Director of Digital FAO and Agroinformatics Division at FAO of the United Nations, ba…
S13
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — -Sara Rendtorff Smith: Session moderator, representing the OECD -Speaker 5: Role/title not mentioned
S14
AI for Good – food and agriculture — Dongyu Qu: Excellencies, ladies, gentlemen, good morning. A year ago, we all gathered for the Previous AI for Good Summi…
S15
Building Climate-Resilient Systems with AI — But here’s what we came up with. The first one, I mean, this is a kind of bottom line, but it’s important. AI does have …
S16
WS #279 AI: Guardian for Critical Infrastructure in Developing World — 2. Establish public-private partnerships for knowledge transfer and technology access. 5. Increase collaboration and kn…
S17
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — And that’s clearly something we try to do. And, of course, in addition, we need absolutely to have computer facility at …
S18
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — In conclusion, AI has the potential to transform the consumer landscape by empowering consumers and assisting regulators…
S19
Opening of the session/OEWG 2025 — Kazakhstan: Thank you for giving the floor. At the outset, Kazakhstan would like to express its sincere gratitude to y…
S20
Opening of the session — – Tailored capacity building initiatives Kazakhstan: Thank you, Chair, for giving the floor. Mr. Chair, distinguished d…
S21
WS #55 Future of Governance in Africa — Speaker 4: Moderator, excellencies, ladies and gentlemen, let me say that it is an honor today to address you. And I’…
S22
Sustainable development — AI-powered tools like remote sensing, drones, and predictive analytics can enhance precision agriculture practices. They…
S23
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — Development | Economic Sensors and drones collect real-time data, while machine learning models optimize irrigation, pe…
S24
Digital divides &amp; Inclusion — However, the cost of internet access remains a significant barrier in some parts of Africa, notably in The Gambia where …
S25
Open Forum #5 Bridging digital divide for Inclusive Growth Under the GDC — The Minister of Lesotho emphasized the need to eliminate duplication among various digital initiatives and called for st…
S26
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S27
Main Session 2: The governance of artificial intelligence — Kakkar stressed the importance of meaningful multi-stakeholder participation and strengthening mechanisms like the Inter…
S28
Open Forum #30 High Level Review of AI Governance Including the Discussion — Legal and regulatory | Development Moving from Principles to Practice The toolkit will be an online interactive tool a…
S29
WS #123 Responsible AI in Security Governance Risks and Innovation — Legal and regulatory | Human rights principles She emphasizes that UN-sponsored platforms like UNIDIR’s RAISE and IGF p…
S30
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatoryreleaseda beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S31
WS #49 Benefit everyone from digital tech equally &amp; inclusively — – Mobile apps that provide farmers with real-time weather data and crop management advice. Ricardo Robles Pelayo: So, …
S32
AI Meets Agriculture Building Food Security and Climate Resilien — Artificial intelligence | Environmental impacts | Social and economic development He highlights the use of century‑long…
S33
The State of Digital Fragmentation (Digital Policy Alert) — The representation of developing countries in data governance is a crucial concern. It argues that existing data governa…
S34
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Adamma Isamade stresses the need for a multistakeholder approach in policymaking. She argues that policies often lack in…
S35
WS #205 Contextualising Fairness: AI Governance in Asia — Milton Mueller: I turned the mic on. I just turned it on. That helps, right? So can we get too hyper-contextualized?…
S36
Ministerial Roundtable — Despite progress in digital transformation, there are critical gaps in AI governance, with only 21% of governments world…
S37
The Global Power Shift India’s Rise in AI &amp; Semiconductors — “So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resource…
S38
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — So I think the accountability on humans is what we have to focus on. And going back to your question, if you’re talking …
S39
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S40
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — **Judith Okonkwo** provided crucial insights into practical challenges of implementing AI technologies across different …
S41
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — And if they don’t, they’ll still make decisions, but they’re not going to be very good decisions. You know? So the secon…
S42
AI for agriculture Scaling Intelegence for food and climate resiliance — A lot of questions in the same question. So what I’ll do is I’ll just first take you through the initiatives. First of a…
S43
AI in Action: When technology serves humanity — Again, the farmers themselves remain decision-makers. They weigh the advice against their experience, their land, and th…
S44
High-Level Dialogue: The role of parliaments in shaping our digital future — AgriTech is something extremely important. We all suffer from food safety issue and this relates to everything else, so …
S45
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — The presentation demonstrates high internal coherence with a balanced perspective that acknowledges both the potential a…
S46
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — – Digital tools should be employed to improve outcomes for these at-risk groups, who often lack sufficient employment an…
S47
Pre 3: Exploring Frontier technologies for harnessing digital public good and advancing Digital Inclusion — AI systems reflect the quality and inclusiveness of their underlying data and decision-making processes. Currently, both…
S48
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Algorithms are not just applications of mathematical codes that support the digital world. They are part of a complex po…
S49
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — 1 ,000 hectares in some big island of Indonesia in order to get the safe efficiency in the next five years. And then we …
S50
Survival Tech Harnessing AI to Manage Global Climate Extremes — “It has to be a hybrid model which has to be connected with the physical systems of the various sensor fabric and the sa…
S51
AI Meets Agriculture Building Food Security and Climate Resilien — Thank you, sir, for your visionary address. You always continue to inspire us to aim higher and achieve better. And unde…
S52
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Low to moderate disagreement level. The speakers largely agree on the need for proper data foundations, leadership invol…
S53
AI 2.0 Reimagining Indian education system — High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers, representing d…
S54
The Global Power Shift India’s Rise in AI &amp; Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S55
What policy levers can bridge the AI divide? — ## Sector-Specific Applications **The Philippines** developed their strategy with strong presidential leadership and mu…
S56
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Abeer Alsumait: assistive technologies, but there are challenges like a very minor issue might also be a kind of we ca…
S57
WS #270 Understanding digital exclusion in AI era — – Florent: Professor of law at the University of Zurich – Mbongi Nimsimangasori: Postdoctoral researcher with the Johan…
S58
Digital divides &amp; Inclusion — Another important issue highlighted in the analysis is the lack of accessibility and inclusion for people with disabilit…
S59
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S60
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S61
WSIS Action Line C2 Information and communication infrastructure — Data quality and governance as fundamental requirements Legal and regulatory | Human rights Regulatory Frameworks and …
S62
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — High level of consensus on fundamental principles with constructive disagreement on implementation details. This suggest…
S63
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — Cloud strategy requiring European/French infrastructure but inadequate market supply to meet public administration needs…
S64
WS #98 Towards a global, risk-adaptive AI governance framework — Sector-specific and use case-specific governance may be needed rather than one-size-fits-all approaches
S65
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — Low level of fundamental disagreement with moderate differences in implementation strategies. The speakers largely agree…
S66
Scaling AI for Billions_ Building Digital Public Infrastructure — The conversation highlighted the critical importance of building proper foundations before implementing AI capabilities,…
S68
Press Conference: Closing the AI Access Gap — An important aspect of the alliance’s work is the creation of relevant international frameworks and public-private partn…
S69
Multistakeholder Partnerships for Thriving AI Ecosystems — The panel revealed sophisticated understanding of how different stakeholders must collaborate whilst maintaining distinc…
S71
AI for Good – food and agriculture — AI-powered advisory services have reduced costs from $30 to $3 per farm with potential to reach $0.30. Partnership with …
S72
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — 1 ,000 hectares in some big island of Indonesia in order to get the safe efficiency in the next five years. And then we …
S73
Digital divides &amp; Inclusion — In conclusion, the digital divide between the developed and developing world is a significant issue that requires attent…
S74
What is it about AI that we need to regulate? — Based on discussions across multiple IGF 2025 sessions, several fundamental assumptions about digital inclusion need cha…
S75
Open Forum #5 Bridging digital divide for Inclusive Growth Under the GDC — Key unresolved challenges include bridging funding implementation gaps, developing mechanisms for harmonizing existing n…
S76
Panel Discussion: 01 — When asked to rate global AI infrastructure progress on a scale of one to ten, Minister Patria gave it 6 out of 10, high…
S77
Regional Leaders Discuss AI-Ready Digital Infrastructure — Hamam Riza, co-chair of Indonesia’s National AI Roadmap 2030 and president of the Collaborative Research and Industrial …
S78
Huawei’s dominance in AI sparks national security debate in Indonesia — Indonesia is urgently working tosecure strategic autonomy in AIas Huawei rapidly expands its presence in the country’s c…
S79
The Global Power Shift India’s Rise in AI &amp; Semiconductors — “So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resource…
S80
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — Development | Economic Guo advocates for strengthened collaborative mechanisms that bring together multiple stakeholder…
S81
From India to the Global South_ Advancing Social Impact with AI — -Public-Private Partnership for Scale: Emphasis on collaboration between government, industry, and academia to create em…
S82
https://dig.watch/event/india-ai-impact-summit-2026/transforming-agriculture_-ai-for-resilient-and-inclusive-food-systems — Every country is trying it out, but why is it not scaling? Why are we not solving for other problems? So again, it’s ver…
S83
Exploring Emerging PE³Ts for Data Governance with Trust | IGF 2023 Open Forum #161 — Certain barriers, such as low budgets, less technical focus in decision-making teams, and low priority given to smaller …
S84
Challenges and solutions for broadband infrastructure deployment in developing countries, rural and remote areas — Innovations to overcome deployment barriers and labour scarcities were covered, including the use of pre-connectorized o…
S85
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — **Judith Okonkwo** provided crucial insights into practical challenges of implementing AI technologies across different …
S86
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Brandon Mello introduced a sobering statistic: 95% of AI pilots never reach production deployment. The primary barriers …
S87
Main Session on Artificial Intelligence | IGF 2023 — Finally, it was suggested that an independent multi-stakeholder panel should be implemented for important technologies t…
S88
WS #102 Harmonising approaches for data free flow with trust — This discussion, moderated by Timea Suto, brought together experts from various sectors to explore the challenges and po…
S89
DPI High-Level Session — The World Summit on the Information Society (WSIS) hosted a session that brought together a diverse group of stakeholder…
S90
Panel 2 – Anticipating and Mitigating Risks Along the Global Subsea Network  — So whoever’s happy to take my question. So last year, just piggybacking off of John’s question on the panel yesterday on…
S91
High Level Dialogue with the Secretary-General — He mentions the potential of artificial intelligence as a tool for development if used equitably.
S92
Global Digital Governance &amp; Multistakeholder Cooperation for WSIS+20 — Ernst Noorman: Good morning everyone. Very much welcome to this session. First of all, my name is Ernst Noorman. I’m the…
S93
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — The tone was consistently collaborative, optimistic, and forward-looking throughout the session. Delegates maintained a …
S94
https://dig.watch/event/india-ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-morning-session-part-2 — Minister Vaishnav, Excellencies, ladies and gentlemen, let me begin by giving our thanks and expressing our sincere appr…
S95
Parallel Session A5: Achieving Sustainable and Resilient Transport and Logistics including inSIDS — Nowadays, countries face multiple simultaneous crises, such as health, environmental, and geopolitical conflicts.
S96
Opening Ceremony | GSCF 2024 — Moreover, the contributions of international bodies like the International Maritime Organization (IMO) and the United Na…
S97
Emerging Markets: Resilience, Innovation, and the Future of Global Development — Egyptian Minister Al-Mashat reported Egypt’s achievement of 5.5% growth despite regional conflicts and reduced Suez Cana…
S98
UNSC meeting: Peace, climate change and food insecurity — Climate change amplifies existing environmental, economic, social and security vulnerabilities Climate change is increa…
S99
Strategy — – Forecasted Weather Data: AI is helping the farmer to stay updated with data related to weather forecasting. The foreca…
S100
Foreword — Through the Asterix project the enterprise has developed an autonomous spraying robot, AX-1. The robot uses deep learnin…
S101
National Strategy for Artificial Intelligence — The government will also initiate a new ‘Intelligent irrigation’ pilot project using artificial intelligence to develop …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
H
Harry Verweij
3 arguments143 words per minute989 words414 seconds
Argument 1
AI can boost yields, reduce inputs, and support climate adaptation
EXPLANATION
Harry highlights that digitalisation and AI in agriculture can dramatically increase productivity while lowering environmental impact. He stresses that AI tools already demonstrate higher yields, reduced food losses and help farmers meet sustainability and climate‑resilience goals.
EVIDENCE
He notes that AI offers “enormous opportunities to increase the productivity and sustainability of local food production” and to “improve nature conservation and to foster a sustainable climate resilience” [13-16]. He further states that AI solutions have “significantly increase[d] food productivity and reduce[d] food losses” and can support farmers with risk assessments, sustainable practices and trustworthy data sharing across the supply chain [23-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s role in reducing greenhouse-gas emissions and enhancing climate-resilient agriculture is highlighted in [S15]; precision-agriculture tools that optimise inputs and lower environmental impact are described in [S22] and [S23].
MAJOR DISCUSSION POINT
AI’s potential to improve agricultural productivity, sustainability, and resilience
AGREED WITH
Sara Rendtorff Smith, Arwin Datumaya Wahyudi Sumari, Dejan Jakovljevic, Arun Pratihast
DISAGREED WITH
Debjani Ghosh, Sara Rendtorff Smith
Argument 2
Collaborative partnerships, knowledge sharing, and bilateral cooperation to spread AI benefits
EXPLANATION
Harry emphasizes the importance of international cooperation, citing the Netherlands’ work with the OECD, FAO, Indonesia and India to accelerate AI adoption in agriculture. He calls for concrete partnerships that share knowledge, technology and measurable results for the benefit of all humanity.
EVIDENCE
He thanks the OECD as “the go-to organization when it comes to AI governance” and mentions cooperation with FAO, Wageningen University, India and Indonesia, highlighting bilateral and multilateral collaboration to spread AI benefits [44-48]. He also thanks India for hosting the summit and reaffirms the Netherlands’ readiness to contribute through partnerships and knowledge sharing [45-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public-private partnerships and international knowledge-sharing mechanisms are advocated in [S16]; multi-stakeholder platforms for AI governance are discussed in [S29]; the OECD AI Incidents Monitor illustrates collaborative oversight in [S30].
MAJOR DISCUSSION POINT
Governance, policy, and inclusive frameworks for responsible AI deployment
AGREED WITH
Sara Rendtorff Smith, Arwin Datumaya Wahyudi Sumari, Debjani Ghosh
Argument 3
Public‑private partnerships, capacity building, and co‑creation of tailored solutions
EXPLANATION
Harry argues that scaling AI in agriculture requires joint public‑private efforts, capacity‑building programmes and solutions customised to local challenges. He stresses that inclusive AI ecosystems and co‑working between governments, businesses and academia are essential.
EVIDENCE
He describes the Dutch ambition to enhance food security through AI, noting the need for “knowledge sharing, co-operation and collaboration, creation and capacity building so that AI solutions are locally relevant, inclusive and accessible to farmers” [30-38]. He adds that the Netherlands facilitates Dutch ICT agribusinesses to collaborate with startups in low- and middle-income countries and commits to work together on tailored solutions [39-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for PPPs and capacity-building programmes is emphasized in [S16] and [S20]; policy-toolkit support for co-creating solutions is provided by the OECD interactive toolkit described in [S28].
MAJOR DISCUSSION POINT
Practical steps and collaborative mechanisms to scale AI responsibly
AGREED WITH
Arwin Datumaya Wahyudi Sumari, Debjani Ghosh, Sara Rendtorff Smith
DISAGREED WITH
Sara Rendtorff Smith
S
Sara Rendtorff Smith
4 arguments94 words per minute2039 words1289 seconds
Argument 1
AI optimizes resource use, cuts pesticide/herbicide use, and enhances traceability
EXPLANATION
Sara outlines how AI‑enabled precision tools can reduce the amount of agro‑chemicals applied and improve supply‑chain transparency. She points to real‑world deployments that achieve substantial input savings while maintaining yields.
EVIDENCE
She cites AI-enabled precision spraying that “reduced pesticide use by up to 30 percent” without compromising yield, and computer-vision systems that “can cut herbicide used by up to half” by targeting weeds only [60]. She also notes that AI-enabled traceability, market transparency and smart logistics can “reduce losses, improve compliance, and strengthen food safety systems” [66-67].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven remote sensing, drones and predictive analytics that optimise water, fertilizer and pesticide applications are outlined in [S22]; similar optimisation of irrigation and pesticide use is noted in [S23]; mobile apps delivering traceability and real-time advice are cited in [S31].
MAJOR DISCUSSION POINT
AI’s potential to improve agricultural productivity, sustainability, and resilience
AGREED WITH
Harry Verweij, Arwin Datumaya Wahyudi Sumari, Dejan Jakovljevic, Arun Pratihast
Argument 2
Uneven adoption, digital divide, high costs, limited skills, and trust issues
EXPLANATION
Sara draws attention to the stark disparities in digital tool usage among farmers worldwide, highlighting structural barriers that hinder AI uptake. She stresses that without addressing cost, skills and trust, AI could widen existing inequalities.
EVIDENCE
She compares adoption rates, noting that “96 % of farmers are using digital tools” in Australia versus only “12 %” in Chile, illustrating a digital divide [70-71]. She then lists “high cost, limited digital skills, and lack of trust” as structural barriers that slow AI uptake [72-76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The unequal pace of digital transformation, with one-third of the world left behind, is reported in [S14]; high internet costs limiting access are documented in [S24] and [S25]; trust and skill gaps are implicit in the discussion of digital exclusion in [S14].
MAJOR DISCUSSION POINT
Barriers and challenges to AI adoption in agriculture
AGREED WITH
Dejan Jakovljevic, Harry Verweij
DISAGREED WITH
Harry Verweij
Argument 3
Need for transparent, explainable AI, interoperable data governance, and policy toolkits
EXPLANATION
Sara argues that trustworthy AI requires transparency, explainability and coherent data‑governance frameworks. She promotes the OECD’s AI policy toolkit as a practical resource for countries to develop responsible AI policies.
EVIDENCE
She points out that “farmers and regulators need transparency in how AI systems make their decisions” and that fragmented data-governance frameworks create complexity, calling for greater interoperability [73-78]. She then describes the OECD AI policy toolkit, which provides context-specific guidance and covers over 2,000 policies across 80 jurisdictions, accessible at osd.ai [80-87].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Transparency, explainability and responsible data stewardship for farmer confidence are stressed in [S3]; the UN Security Council’s call for explainable AI appears in [S26]; the OECD AI policy toolkit providing guidance is described in [S28] and the AI Incidents Monitor in [S30].
MAJOR DISCUSSION POINT
Governance, policy, and inclusive frameworks for responsible AI deployment
AGREED WITH
Harry Verweij, Arwin Datumaya Wahyudi Sumari, Debjani Ghosh
Argument 4
International knowledge‑sharing platforms and OECD AI policy toolkit to guide implementation
EXPLANATION
Sara highlights the role of global knowledge‑sharing mechanisms, such as the OECD’s AI policy toolkit and other platforms, in supporting countries to adopt AI responsibly. She stresses that these resources help align policies, share best practices and monitor impact.
EVIDENCE
She explains that the toolkit “will provide practical, context-specific guidance to countries” and that it builds on the OECD policy navigator covering more than 2,000 policies [80-87]. She also mentions the broader work on digital governance in agriculture within GPAY and the global AI impact comments that share concrete use cases with scaling potential [88-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The OECD AI policy toolkit and its interactive database are detailed in [S28]; the AI Incidents Monitor further supports knowledge-sharing in [S30]; UN-sponsored multi-stakeholder platforms for AI governance are highlighted in [S29] and [S16].
MAJOR DISCUSSION POINT
Practical steps and collaborative mechanisms to scale AI responsibly
AGREED WITH
Harry Verweij, Arwin Datumaya Wahyudi Sumari, Debjani Ghosh
D
Dejan Jakovljevic
3 arguments140 words per minute738 words316 seconds
Argument 1
AI enables anticipatory actions for shocks and disaster response
EXPLANATION
Dejan stresses that AI can help agricultural systems anticipate and prepare for shocks such as natural disasters or conflicts. By providing early‑warning tools and decision‑support platforms, AI enables proactive rather than reactive responses.
EVIDENCE
He defines “anticipation” as the key word, describing how AI can help “anticipate the shocks to the agri-food systems” and support “anticipatory actions” through data-driven decision-making tools, situation rooms and rapid response mechanisms [127-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of “anticipation” for agri-food shocks is foregrounded in [S3]; AI’s contribution to climate-resilient systems and early-warning is discussed in [S15]; FAO’s focus on better production and nutrition through data-driven tools is mentioned in [S9].
MAJOR DISCUSSION POINT
AI’s potential to improve agricultural productivity, sustainability, and resilience
AGREED WITH
Sara Rendtorff Smith, Harry Verweij, Arwin Datumaya Wahyudi Sumari
Argument 2
Digital exclusion of farmers; need for low‑tech access such as phone‑based advice
EXPLANATION
Dejan points out that many farmers lack digital connectivity, making them vulnerable to exclusion from AI‑driven services. He highlights phone‑based advisory tools as a low‑entry solution that can reach farmers without smartphones.
EVIDENCE
He notes that “it used to be possible to exist outside of the digital ecosystem” but now “if a farmer or communities are outside of the digital ecosystem, they suddenly are outside of any ecosystem” [112-117]. He then describes a recent Indian government tool that allows farmers to receive advice via a phone call in multiple languages, lowering the entry barrier to AI services [120-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A phone-call advisory service for farmers in India is described in [S7] and reiterated in [S12]; the broader digital divide affecting one-third of the global population is noted in [S14].
MAJOR DISCUSSION POINT
Barriers and challenges to AI adoption in agriculture
AGREED WITH
Sara Rendtorff Smith, Harry Verweij
DISAGREED WITH
Arwin Datumaya Wahyudi Sumari, Harry Verweij
Argument 3
Phone‑based advisory services as low‑entry AI tools for inclusive access
EXPLANATION
Dejan reiterates the value of phone‑based advisory services as an inclusive AI application that can reach farmers lacking smartphones or internet access. Such services can deliver multilingual guidance on crops, pests and other agronomic issues.
EVIDENCE
He cites the same Indian government initiative where “farmers can, with a phone call, … get advisory in the area of agriculture” covering topics from shrimp cultivation to pest diseases, and notes that the service works in many languages [120-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Indian phone-based multilingual advisory platform exemplifies low-entry AI and is referenced in [S7] and [S12]; the need for inclusive low-tech solutions is reinforced by the digital-exclusion discussion in [S14].
MAJOR DISCUSSION POINT
Practical steps and collaborative mechanisms to scale AI responsibly
A
Arwin Datumaya Wahyudi Sumari
3 arguments109 words per minute1277 words698 seconds
Argument 1
AI predicts soil conditions, optimal crops, fertilizer, water needs, weather, and logistics
EXPLANATION
Arwin outlines a suite of AI applications for Indonesia, ranging from soil‑nutrient prediction to crop‑selection, fertilizer optimisation, intelligent farming, weather forecasting and logistics routing. These tools aim to increase yields, reduce crop failures and cut transport costs across the archipelago.
EVIDENCE
He describes AI use for “prediction of soil condition and nutrition” to guide new rice fields, for “prediction of the most appropriate food crops” per island, for “optimising fertilizer content and water volume” [165-172], for “intelligent farming” that optimises seed planting and harvest processes [174-181], for “weather dynamics” prediction to avoid crop failures [182-184], and for “optimising logistic transportation routes” to reduce operational costs between islands [187-193].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled remote sensing and predictive analytics for soil, crop selection and water optimisation are covered in [S22]; sensor-driven optimisation of irrigation, pesticide and planting schedules is detailed in [S23]; weather forecasting and logistics routing tools are mentioned in [S31].
MAJOR DISCUSSION POINT
AI’s potential to improve agricultural productivity, sustainability, and resilience
AGREED WITH
Harry Verweij, Sara Rendtorff Smith, Dejan Jakovljevic, Arun Pratihast
DISAGREED WITH
Dejan Jakovljevic, Harry Verweij
Argument 2
Infrastructure gaps, uneven AI talent distribution, and data scarcity across regions
EXPLANATION
Arwin highlights Indonesia’s geographic fragmentation and uneven digital infrastructure, which limit AI deployment. He points to disparities in telecom coverage, regional time‑zone differences and a shortage of AI talent as major constraints.
EVIDENCE
He notes that Indonesia consists of “17,000 islands” with only “36 % of land, 64 % of water” and that each region has different time zones, creating challenges for coordination [150-162]. He also mentions the “problem with unequal distribution of AI talent” and the lack of democratic AI infrastructure such as telecommunications [159-164].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
High data-cost barriers and digital-infrastructure gaps in low-income regions are highlighted in [S24] and [S25]; the unequal pace of digital transformation worldwide is reported in [S14].
MAJOR DISCUSSION POINT
Barriers and challenges to AI adoption in agriculture
Argument 3
Indonesia’s AI roadmap with seven pillars (regulation, ethics, investment, data, innovation, talent, use cases) and a multi‑stakeholder “helix” approach
EXPLANATION
Arwin presents Indonesia’s comprehensive AI strategy, structured around seven pillars and a collaborative “helix” model that brings together government, industry, academia, media and civil society. The roadmap seeks to create a trustworthy, inclusive AI ecosystem for agriculture and other sectors.
EVIDENCE
He explains that the roadmap includes pillars for “AI regulation, AI ethics, investment, AI data, AI innovation, AI talent development, and AI use case” and that it follows a “helix” approach involving multiple stakeholders, with the Ministry of Digital Information and Communication coordinating voluntary contributions [196-203].
MAJOR DISCUSSION POINT
Governance, policy, and inclusive frameworks for responsible AI deployment
AGREED WITH
Sara Rendtorff Smith, Harry Verweij, Debjani Ghosh
A
Arun Pratihast
2 arguments152 words per minute690 words271 seconds
Argument 1
AI‑driven global crop mapping and farmer‑friendly chatbots improve advisory services
EXPLANATION
Arun describes two initiatives: a global crop‑mapping effort using satellite data, and multilingual chatbots that deliver agronomic advice to smallholders. Both aim to overcome data gaps and provide actionable information in low‑tech settings.
EVIDENCE
He recounts the “World Cereal Project” launched with the European Space Agency to map global crop areas, noting challenges due to countries not sharing data [299-300]. He also details a chatbot built in local languages that uses computer vision to diagnose diseases and give advice to cocoa farmers, demonstrating a farmer-centric AI service [300-304].
MAJOR DISCUSSION POINT
AI’s potential to improve agricultural productivity, sustainability, and resilience
AGREED WITH
Harry Verweij, Sara Rendtorff Smith, Arwin Datumaya Wahyudi Sumari, Dejan Jakovljevic
Argument 2
Data scarcity, lack of trust, and scalability problems hinder impact
EXPLANATION
Arun identifies three core barriers to effective AI in agriculture: insufficient and non‑shared data, low trust from farmers in AI recommendations, and difficulties scaling solutions from pilot to widespread use.
EVIDENCE
He lists “data scarcity” and the lack of shared data as a major issue, followed by “trust” problems where farmers do not follow AI advice, and finally “scalability” challenges where technical speed does not translate into broader impact [278-291].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of trustworthy, transparent AI and interoperable data governance for farmer confidence is discussed in [S3]; trust and explainability concerns are reiterated in [S26] and [S27]; digital-divide challenges that affect scalability are noted in [S24].
MAJOR DISCUSSION POINT
Barriers and challenges to AI adoption in agriculture
D
Debjani Ghosh
2 arguments156 words per minute887 words339 seconds
Argument 1
Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization
EXPLANATION
Debjani stresses that AI projects must start with a well‑defined problem and coordinated industry involvement to move beyond pilots. She proposes sector‑specific Centers of Excellence (CoEs) to focus resources on high‑impact challenges such as cold‑chain waste.
EVIDENCE
She argues that “the biggest problem today is we are not taking the time to think through it” and that industry needs a clear problem statement and a route to market [213-258]. She specifically suggests a CoE for the cold-chain problem to ensure coordinated solutions across the country [252-258].
MAJOR DISCUSSION POINT
Governance, policy, and inclusive frameworks for responsible AI deployment
AGREED WITH
Harry Verweij, Arwin Datumaya Wahyudi Sumari, Sara Rendtorff Smith
DISAGREED WITH
Arwin Datumaya Wahyudi Sumari
Argument 2
Establishing sector‑specific centers of excellence to tackle challenges like cold‑chain waste
EXPLANATION
Debjani proposes creating dedicated CoEs that target particular agricultural bottlenecks, using the cold‑chain waste issue as an example. Such centres would bring together stakeholders to develop, test and commercialise AI solutions at scale.
EVIDENCE
She outlines the concept of a COE focused on solving the cold-chain problem, asking how to ensure that “climate resilient crops” and “cold chain” issues are addressed through coordinated industry collaboration [252-258].
MAJOR DISCUSSION POINT
Practical steps and collaborative mechanisms to scale AI responsibly
S
Speaker 5
1 argument9 words per minute4 words26 seconds
Argument 1
Closing gratitude and acknowledgment of participants
EXPLANATION
The final speaker thanks the audience and participants for their contributions, signalling the end of the session.
EVIDENCE
He simply says “Thank you. Thank you.” at the close of the meeting [318-319].
MAJOR DISCUSSION POINT
Concluding remarks
Agreements
Agreement Points
AI can significantly increase agricultural productivity, reduce inputs, and support climate adaptation and resilience
Speakers: Harry Verweij, Sara Rendtorff Smith, Arwin Datumaya Wahyudi Sumari, Dejan Jakovljevic, Arun Pratihast
AI can boost yields, reduce inputs, and support climate adaptation AI optimizes resource use, cuts pesticide/herbicide use, and enhances traceability AI predicts soil conditions, optimal crops, fertilizer, water needs, weather, and logistics AI enables anticipatory actions for shocks and disaster response AI‑driven global crop mapping and farmer‑friendly chatbots improve advisory services
All speakers highlighted that artificial intelligence offers concrete tools-such as precision spraying, soil and weather prediction, early-warning systems, and global crop mapping-that can raise yields, lower the use of water, fertilizer and pesticides, and make food systems more climate-resilient [13-16][60-61][165-172][127-136][299-304].
POLICY CONTEXT (KNOWLEDGE BASE)
National initiatives such as Maharashtra’s AI-driven agriculture program illustrate policy support for productivity and climate resilience, while projects in Indonesia and broader AI for resilient food systems underscore the strategic emphasis on climate adaptation [S42][S49][S50][S51].
Inclusive AI and the need to bridge the digital divide so smallholders and disadvantaged groups can benefit
Speakers: Sara Rendtorff Smith, Dejan Jakovljevic, Harry Verweij
Uneven adoption, digital divide, high costs, limited skills, and trust issues Digital exclusion of farmers; need for low‑tech access such as phone‑based advice Collaborative partnerships, knowledge sharing, and bilateral cooperation to spread AI benefits
Speakers agreed that current adoption is highly uneven, with high costs, skill gaps and trust barriers limiting uptake, and that low-tech solutions (e.g., phone-based advisory) are needed to reach farmers outside digital ecosystems; international partnerships are seen as a way to close these gaps [70-71][72-76][112-117][120-124][30-38].
POLICY CONTEXT (KNOWLEDGE BASE)
Frameworks emphasizing farmer decision-making and extension worker support highlight inclusive design, and policy discussions on digital inclusion stress the need for equitable access for at-risk groups and marginalized communities [S43][S46][S47][S55][S57].
Strong governance, transparency, and ethical frameworks are essential for trustworthy AI deployment in agriculture
Speakers: Sara Rendtorff Smith, Harry Verweij, Arwin Datumaya Wahyudi Sumari, Debjani Ghosh
Need for transparent, explainable AI, interoperable data governance, and policy toolkits Collaborative partnerships, knowledge sharing, and bilateral cooperation to spread AI benefits Indonesia’s AI roadmap with seven pillars (regulation, ethics, investment, data, innovation, talent, use cases) and a multi‑stakeholder “helix” approach Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization
All highlighted the necessity of clear governance structures, explainability and ethical standards, with toolkits and policy frameworks (OECD toolkit, Dutch ecosystem, Indonesian AI roadmap) and coordinated industry efforts (centers of excellence) to ensure AI is trustworthy and inclusive [73-78][80-87][44-48][196-203][252-258].
POLICY CONTEXT (KNOWLEDGE BASE)
Parliamentary engagement, ethical AI guidelines, and WSIS data-governance recommendations call for transparent, accountable AI governance structures in agri-tech [S44][S45][S61][S64][S66].
Public‑private partnerships and multi‑stakeholder collaboration are critical to scale AI solutions and build capacity
Speakers: Harry Verweij, Arwin Datumaya Wahyudi Sumari, Debjani Ghosh, Sara Rendtorff Smith
Public‑private partnerships, capacity building, and co‑creation of tailored solutions Indonesia’s AI roadmap with a multi‑stakeholder “helix” approach Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization International knowledge‑sharing platforms and OECD AI policy toolkit to guide implementation
Speakers stressed that joint public-private efforts, multi-stakeholder helix models, sector-specific centers of excellence and global knowledge-sharing platforms are needed to develop, finance and scale AI tools that are locally relevant [30-38][196-203][252-258][80-87].
AI can enable anticipatory actions and early‑warning systems to mitigate shocks to agri‑food systems
Speakers: Dejan Jakovljevic, Sara Rendtorff Smith, Harry Verweij, Arwin Datumaya Wahyudi Sumari
AI enables anticipatory actions for shocks and disaster response AI is also revolutionizing agricultural innovation itself and supporting more efficient plant breeding … early detection of climatic and biological threats AI solutions can enhance the efficiency and resilience of food systems by supporting farmers to respond to sustainability requirements, make risk assessments AI predicts weather dynamics to obtain the right conditions
All noted that AI-driven early-warning, risk-assessment and weather-forecasting tools can help anticipate natural disasters, conflicts or pest outbreaks, allowing proactive responses and reducing crop failures [127-136][61][24][182-184].
POLICY CONTEXT (KNOWLEDGE BASE)
Hybrid sensor-satellite models and AI-driven crop-prediction pilots are highlighted as core components of early-warning and climate-resilience strategies in agriculture [S49][S50][S51].
Similar Viewpoints
Both emphasize that international cooperation and clear governance tools (e.g., OECD AI policy toolkit) are essential to ensure AI benefits are broadly shared and trustworthy [44-48][73-78][80-87].
Speakers: Harry Verweij, Sara Rendtorff Smith
Collaborative partnerships, knowledge sharing, and bilateral cooperation to spread AI benefits Need for transparent, explainable AI, interoperable data governance, and policy toolkits
Both point to Indonesia’s fragmented geography and limited digital infrastructure as major barriers, calling for low‑tech, inclusive AI solutions that can work across islands with scarce talent and data [112-117][120-124][150-164].
Speakers: Dejan Jakovljevic, Arwin Datumaya Wahyudi Sumari
Digital exclusion of farmers; need for low‑tech access such as phone‑based advice Infrastructure gaps, uneven AI talent distribution, and data scarcity across regions
Both stress that without a well‑defined problem, reliable data and trust, AI pilots cannot scale; they advocate structured mechanisms (CoEs, data sharing platforms) to overcome these barriers [213-218][252-258][278-291].
Speakers: Debjani Ghosh, Arun Pratihast
Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization Data scarcity, lack of trust, and scalability problems hinder impact
Unexpected Consensus
Both industry‑focused and research‑focused speakers converge on the need for sector‑specific centers of excellence to translate AI pilots into scalable solutions
Speakers: Debjani Ghosh, Arun Pratihast
Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization Data scarcity, lack of trust, and scalability problems hinder impact
While Debjani proposes new CoEs to coordinate industry efforts, Arun, a researcher, also calls for structured platforms to address data, trust and scaling issues, indicating an unexpected alignment between industry and research perspectives on institutional mechanisms needed for impact [252-258][278-291].
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions on scaling AI beyond pilots and national strategies that establish specialized centers illustrate consensus on sector-specific CoEs as implementation hubs [S52][S55][S64].
Overall Assessment

There is strong consensus that AI holds great promise for improving productivity, sustainability and resilience in agriculture, but its benefits will only be realized if inclusive governance, transparent data practices, public‑private partnerships and capacity‑building are put in place. All speakers agree on the urgency of addressing the digital divide and on the need for coordinated, multi‑stakeholder frameworks.

High consensus across technical, governance and partnership dimensions, suggesting that future policy work can build on these shared foundations to design inclusive, trustworthy AI initiatives for food systems.

Differences
Different Viewpoints
Primary focus of AI interventions in agriculture – boosting productivity and climate resilience versus reducing food waste
Speakers: Harry Verweij, Debjani Ghosh, Sara Rendtorff Smith
AI can boost yields, reduce inputs, and support climate adaptation Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization Uneven adoption, digital divide, high costs, limited skills, and trust issues
Harry stresses AI’s role in increasing productivity, lowering environmental impact and supporting climate adaptation [13-16][23-24]. Debjani argues that the biggest problem is food waste and that AI projects should start with a clear problem definition and sector-specific centers of excellence to address issues like cold-chain waste [238-242][252-258]. Sara highlights the risk that AI could deepen existing inequalities if adoption is uneven, pointing to the digital divide and structural barriers such as high cost and limited skills [70-76]. The speakers agree AI is valuable but disagree on whether the priority should be productivity/climate benefits or waste reduction and how to structure interventions.
Preferred level of technological sophistication for delivering AI services to farmers – high‑tech data‑driven platforms versus low‑tech phone‑based advisory services
Speakers: Dejan Jakovljevic, Arwin Datumaya Wahyudi Sumari, Harry Verweij
Digital exclusion of farmers; need for low‑tech access such as phone‑based advice AI predicts soil conditions, optimal crops, fertilizer, water needs, weather, and logistics Public‑private partnerships, capacity building, and co‑creation of tailored solutions
Dejan stresses that many farmers lack digital connectivity and proposes phone-call advisory services in multiple languages as a low-entry AI solution [120-124]. Arwin describes a suite of sophisticated AI applications for soil-nutrient prediction, crop selection, fertilizer and water optimisation, weather forecasting and logistics routing [165-172][174-181]. Harry promotes partnerships to develop locally relevant AI solutions but focuses on more advanced ICT agribusiness collaborations [29-30][30-38]. The disagreement lies in the appropriate technological approach for reaching smallholders.
Governance strategy for AI deployment – a comprehensive national roadmap with multi‑pillar “helix” approach versus sector‑specific Centers of Excellence
Speakers: Arwin Datumaya Wahyudi Sumari, Debjani Ghosh
Indonesia’s AI roadmap with seven pillars (regulation, ethics, investment, data, innovation, talent, use cases) and a multi‑stakeholder helix approach Emphasis on clear problem definition, industry alignment, and centers of excellence for commercialization
Arwin outlines Indonesia’s AI strategy built around seven pillars and a collaborative helix model involving government, industry, academia, media and civil society [196-203]. Debjani proposes creating sector-specific Centers of Excellence, such as a COE for cold-chain waste, to align industry and ensure commercialization pathways [252-258]. Both aim for responsible AI but differ on whether a broad national framework or focused sectoral hubs are more effective.
Impact of AI on inequality – whether AI will deepen the digital divide or can be deployed inclusively through partnerships
Speakers: Sara Rendtorff Smith, Harry Verweij
Uneven adoption, digital divide, high costs, limited skills, and trust issues Public‑private partnerships, capacity building, and co‑creation of tailored solutions
Sara warns that AI could exacerbate existing inequalities if structural barriers like high cost, limited digital skills and lack of trust are not addressed, citing the stark contrast in digital tool usage between Australia (96 %) and Chile (12 %) [70-71][72-76]. Harry counters that inclusive AI can be achieved through strong public-private partnerships, knowledge sharing and capacity building to ensure AI benefits are broadly shared [30-38][39-43]. The speakers disagree on the net effect of AI on inequality.
Unexpected Differences
Whether AI exacerbates digital exclusion or can be a tool for inclusion
Speakers: Dejan Jakovljevic, Harry Verweij
Digital exclusion of farmers; need for low‑tech access such as phone‑based advice Public‑private partnerships, capacity building, and co‑creation of tailored solutions
Dejan argues that AI can worsen digital exclusion, stating that farmers outside the digital ecosystem are left out and that AI makes this worse [112-117]. Harry, however, expresses confidence that inclusive AI can be achieved through partnerships and capacity building, suggesting AI will bridge rather than widen gaps [30-38][39-43]. This contrast was not anticipated given the overall consensus on AI’s benefits.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on digital exclusion highlight risks of deepening divides alongside evidence that targeted policies and inclusive design can harness AI for broader societal benefit [S46][S47][S48][S57].
Overall Assessment

The participants share a common belief in AI’s potential to improve agricultural productivity, resilience and sustainability, but they diverge on priorities (productivity vs waste reduction), technological approaches (high‑tech platforms vs low‑tech phone services), governance models (national roadmap vs sector‑specific centers of excellence), and the net impact on inequality. These disagreements are moderate and revolve around implementation strategies rather than the value of AI itself.

Moderate disagreement focused on pathways and governance; implications include the need for coordinated policy frameworks that accommodate both high‑tech and low‑tech solutions, ensure inclusive governance structures, and address data, trust and capacity gaps to prevent widening digital divides.

Partial Agreements
All speakers agree that AI has a role in building more resilient and inclusive food systems, but differ on the primary pathways—whether through high‑tech precision tools, low‑tech advisory services, or comprehensive governance frameworks [13-16][23-24][55-57][60-66][70-71][112-117][196-203].
Speakers: Harry Verweij, Sara Rendtorff Smith, Dejan Jakovljevic, Arwin Datumaya Wahyudi Sumari
AI can boost yields, reduce inputs, and support climate adaptation AI optimizes resource use, cuts pesticide/herbicide use, and enhances traceability Digital exclusion of farmers; need for low‑tech access such as phone‑based advice Indonesia’s AI roadmap with seven pillars (regulation, ethics, investment, data, innovation, talent, use cases) and a multi‑stakeholder helix approach
All three emphasize the critical importance of data and trust for AI adoption. Sara promotes an OECD policy toolkit for transparent AI and data governance [80-87]; Arun highlights data scarcity and trust as core barriers [278-283]; Dejan points out that lack of digital access excludes farmers from AI services [112-117]. They concur on the need for better data governance but propose different solutions.
Speakers: Sara Rendtorff Smith, Arun Pratihast, Dejan Jakovljevic
Need for transparent, explainable AI, interoperable data governance, and policy toolkits Data scarcity, lack of trust, and scalability problems hinder impact Digital exclusion of farmers; need for low‑tech access such as phone‑based advice
Takeaways
Key takeaways
AI can significantly increase agricultural productivity, reduce inputs (water, fertilizer, pesticides), and enhance climate resilience and food‑system traceability. Real‑world AI pilots (precision spraying, smart irrigation, early‑warning services) have demonstrated measurable gains such as up to 90% water savings and 30% reduction in pesticide use. Anticipatory AI tools can help predict shocks (weather, pests, conflicts) and support rapid, pre‑emptive responses in agri‑food systems. Adoption of AI is highly uneven across regions; digital divide, high costs, limited skills, and trust deficits hinder scaling, especially for smallholders and remote communities. Data scarcity, lack of interoperable governance, and opaque “black‑box” models undermine trust and limit the usefulness of AI solutions for farmers. Inclusive, multi‑stakeholder governance (the “helix” model) and clear, sector‑specific regulation are essential to ensure AI is transparent, explainable, and equitable. Public‑private partnerships, capacity‑building, and knowledge‑sharing platforms (e.g., OECD AI policy toolkit, international working groups) are critical to scale responsible AI deployment. Tailored, low‑tech entry points such as phone‑based advisory services can broaden access for farmers lacking smartphones or internet connectivity.
Resolutions and action items
Commitment by the Netherlands to forge concrete partnerships, share knowledge and technology, and support capacity‑building for AI in low‑ and middle‑income countries. OECD will continue to develop and promote its AI policy toolkit and digital‑governance guidance for agriculture, encouraging countries to contribute their policies. Agreement to pursue sector‑specific Centers of Excellence (e.g., for cold‑chain waste reduction) to align industry efforts with clearly defined problem statements. Indonesia will advance its national AI roadmap (seven‑pillar framework) and promote a multi‑stakeholder helix approach to ensure inclusive and resilient AI deployment.
Unresolved issues
How to effectively close the digital divide so that smallholder farmers in remote or low‑income regions can reliably access AI tools. Mechanisms for ensuring trustworthy data sharing while protecting farmer ownership and privacy. Specific financing models and incentives needed to make AI solutions affordable for SMEs and small farms. Standardized methods for evaluating and scaling AI pilots across diverse agro‑ecological contexts. Details on how to operationalize interoperable data governance frameworks across borders and sectors.
Suggested compromises
Balancing horizontal AI governance (overall principles, ethics, transparency) with sector‑specific regulations tailored to agriculture’s unique needs. Adopting a multi‑helix (government, industry, academia, media, civil society) collaboration model to distribute responsibilities and avoid any single stakeholder dominating AI development. Combining high‑tech AI innovations with low‑tech delivery channels (e.g., phone‑based advisory) to ensure inclusivity while leveraging advanced capabilities.
Thought Provoking Comments
AI can be a powerful tool to increase productivity, reduce environmental impact, and strengthen the resilience of food systems, while also supporting farmers to meet sustainability requirements and provide trustworthy data across the supply chain.
Sets a broad, optimistic framing for AI in agriculture, linking technology directly to food security, climate resilience and inclusive growth, and introduces the idea of AI‑enabled data sharing as a public good.
Established the thematic baseline for the panel, prompting other speakers to position their national or organisational experiences against this vision and to discuss concrete ways to translate the promise into practice.
Speaker: Harry Verweij (Ambassador, Kingdom of the Netherlands)
AI‑enabled precision spraying has reduced pesticide use by up to 30 % without compromising yield, and computer‑vision‑based weed detection can cut herbicide use by half. Yet adoption is highly uneven – 96 % of Australian farmers use digital tools versus just 12 % in Chile – highlighting a digital divide that could deepen existing inequalities.
Combines hard evidence of AI benefits with a stark illustration of the global digital gap, moving the conversation from possibilities to urgent equity concerns.
Shifted the tone from optimism to caution, prompting panelists to address how to bridge the divide (e.g., Dejan’s anticipatory tools, Debjani’s call for problem‑driven pilots) and to consider policy mechanisms such as the OECD AI policy toolkit.
Speaker: Sara Rendtorff Smith (OECD)
The key word for resilience is *anticipation* – we need AI‑driven anticipatory tools, decision‑making rooms and early‑warning services (like the phone‑call advisory system launched by the Indian government) so that we can act before shocks hit the agri‑food system.
Introduces ‘anticipation’ as a strategic lens, reframing resilience from reactive to proactive and highlighting a concrete, low‑tech AI service that reaches farmers without smartphones.
Created a turning point toward discussing pre‑emptive governance and service design, influencing subsequent speakers (e.g., Sumari’s focus on early‑warning and predictive models, Ghosh’s emphasis on targeting specific problems such as waste).
Speaker: Dejan Jakovljevic (FAO)
Indonesia’s AI roadmap is built on seven pillars – regulation, ethics, financing, data, innovation, talent development and use‑cases – and follows a ‘quad‑helix’ model that brings government, industry, academia, media and communities together to ensure no one is left behind.
Provides a concrete, multi‑dimensional governance framework that ties horizontal AI policy to sector‑specific needs, and stresses inclusivity, transparency and ecosystem building in a highly fragmented archipelagic context.
Expanded the discussion from high‑level benefits to the practical architecture needed for implementation, prompting other panelists to reference similar multi‑stakeholder approaches (e.g., OECD’s policy toolkit, Ghosh’s COE idea).
Speaker: Arwin Datumaya Wahyudi Sumari (Professor, Indonesia)
We often ‘throw AI at every problem’ without first defining the exact problem, leading to duplicated pilots (e.g., farmer advisory apps) that don’t scale. A more effective approach is to focus on the biggest leverage point – today I see food waste – and create sector‑specific Centres of Excellence that align industry, data, and commercialization pathways.
Challenges the prevailing hype‑driven mindset, redirects attention to problem‑driven AI, and proposes a concrete institutional mechanism (COE) to avoid fragmentation and ensure impact.
Served as a pivotal critique that reframed the conversation around prioritisation and coordination, influencing later remarks about trust, scalability, and the need for focused pilots (e.g., Arun’s discussion of data scarcity and trust).
Speaker: Debjani Ghosh (NITI Frontier Tech Hub, India)
Three persistent barriers prevent AI from reaching smallholders: data scarcity and poor sharing, lack of trust in AI recommendations, and scalability that ignores low‑tech realities. Successful projects (World Cereal mapping, language‑specific chatbots for cocoa farmers) show that solutions must be built for the grassroots environment.
Synthesises systemic challenges into three clear categories and backs them with concrete examples, highlighting the gap between high‑tech models and field‑level applicability.
Deepened the analysis by linking technical obstacles to the earlier themes of equity and anticipatory action, reinforcing the need for data ecosystems and trust mechanisms discussed by the OECD and Indonesia’s roadmap.
Speaker: Arun Pratihast (Senior Researcher, Wageningen University)
Overall Assessment

The discussion began with a broad, optimistic framing of AI’s potential, but key interventions – especially Dejan’s emphasis on ‘anticipation’, Debjani’s critique of indiscriminate AI deployment, and Arun’s articulation of data, trust and scalability barriers – redirected the conversation toward concrete, problem‑driven strategies and the governance structures needed to make AI inclusive. These turning points introduced new analytical lenses (anticipatory governance, sector‑specific COEs, multi‑helix roadmaps) and prompted participants to move from abstract benefits to actionable pathways, highlighting both the promise and the systemic challenges of deploying AI in global food systems.

Follow-up Questions
How can anticipatory AI tools and decision‑support systems be developed to predict and respond to shocks (natural disasters, conflicts, etc.) in agri‑food systems?
He emphasized the need for AI‑enabled anticipatory actions, decision‑making tools, and situation rooms to handle shocks, indicating a gap in current capabilities.
Speaker: Dejan Jakovljevic
What strategies can effectively bridge the digital divide and ensure inclusive access to AI for farmers in regions with low adoption rates (e.g., Chile vs. Australia)?
Sara highlighted uneven AI adoption across countries; Dejan stressed inclusion, pointing to a risk of deepening inequalities without targeted interventions.
Speaker: Sara Rendtorff Smith; Dejan Jakovljevic
How can fragmented data‑governance frameworks be harmonized to achieve greater interoperability for AI applications across agricultural supply chains?
She noted that fragmented governance creates complexity, suggesting a need for research on interoperable standards and policies.
Speaker: Sara Rendtorff Smith
What public‑private partnership models best scale responsible AI deployment while preventing an AI divide among emerging economies and smallholder farmers?
She discussed the importance of alignment, commercialization routes, and industry collaboration, indicating a need to define effective partnership structures.
Speaker: Debjani Ghosh
Should sector‑specific Centers of Excellence (e.g., for cold‑chain logistics or climate‑resilient crops) be established, and how would they operate to coordinate industry and research efforts?
She proposed COEs focused on concrete problems, highlighting a gap in coordinated innovation hubs.
Speaker: Debjani Ghosh
What mechanisms can address data scarcity and improve data sharing infrastructure that is farmer‑centric and supports AI model development?
He identified data scarcity and lack of shared infrastructure as major barriers to effective AI solutions for smallholders.
Speaker: Dr. Arun Pratihast
How can trust be built among smallholder farmers regarding AI advisory services, including issues of model explainability and data ownership?
He pointed out mistrust and mismatched expectations as reasons advisory tools fail, indicating a need for trust‑building research.
Speaker: Dr. Arun Pratihast
What approaches ensure that AI solutions developed at scale are truly scalable and adaptable to low‑tech, grassroots farming environments?
He highlighted scalability as a challenge, noting that technical scale does not guarantee field‑level applicability.
Speaker: Dr. Arun Pratihast
How can global, high‑resolution crop‑mapping initiatives (e.g., the World Cereal Project) be improved through better data contributions from major producing countries?
He explained that missing data from countries like India and China limits map accuracy, suggesting a need for research on data‑sharing incentives and protocols.
Speaker: Dr. Arun Pratihast
What design principles enable AI‑enabled low‑tech advisory tools (e.g., multilingual chatbots) that work effectively for smallholder farmers?
He gave examples of successful chatbot solutions for cocoa farmers, indicating a research gap in replicating such tools across crops and regions.
Speaker: Dr. Arun Pratihast
What data‑protection frameworks are needed to safeguard smallholder farmers’ data while allowing its use in AI‑driven supply‑chain platforms?
He mentioned protecting farmer data as a priority for AI solutions, highlighting a need for privacy‑focused policy research.
Speaker: Harry Verweij
What financing and investment models can sustainably support AI ecosystems in agriculture, especially for low‑ and middle‑income countries?
Both referenced the importance of financing for AI ecosystems, indicating a need to explore viable funding mechanisms.
Speaker: Arwin Sumari; Harry Verweij
How can horizontal AI governance be balanced with sector‑specific regulations to promote trustworthy AI in agriculture?
He described Indonesia’s approach of combining broad AI policies with sectoral rules, suggesting a need to study best‑practice frameworks.
Speaker: Arwin Sumari
What robust impact‑measurement methodologies can assess AI interventions (e.g., pesticide reduction, yield gains) across diverse agricultural contexts?
She cited promising evidence but implied the need for systematic evaluation metrics to validate AI benefits.
Speaker: Sara Rendtorff Smith

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

The Global Power Shift India’s Rise in AI & Semiconductors

The Global Power Shift India’s Rise in AI & Semiconductors

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel examined how AI has shifted from a niche technology to a catalyst for economic transformation, emphasizing that genuine AI leadership demands the integration of silicon, software, systems and policy [21-34]. Jaya highlighted India’s strong engineering talent, silicon-design capabilities and rapidly growing ecosystem of system and infrastructure partners, while stressing that collaboration across nations and organisations is essential [37-42]. She framed the discussion around three strategic questions: building the intellectual foundation, deepening manufacturing and supply-chain resilience, and establishing a credible sovereign AI capability [45-48].


Vivek noted that India’s AI Mission, backed by over ₹10,000 crore, tax holidays for data-centers and platforms like AI Coach, is creating credibility through large-scale deployments and the gradual development of domestic IP in AI and semiconductors [50-57]. He added that the country’s robust VLSI design ecosystem must evolve from relying on foreign IP to owning its own, a key step toward a trustworthy deep-tech sector [56-57]. Rahul observed that domestic demand for AI-enabled products is surging and that both government programmes (e.g., a ₹1 lakh crore AI fund) and private capital exceeding $100 billion are beginning to flow into data-center and manufacturing projects, though the breadth of investment remains uneven [66-79].


Thomas argued that India should move from merely seeking compute capacity to building sovereign capability, leveraging its unique data-residency needs and a large pool of startups to develop home-grown IP and niche supply-chain components such as co-packaged optics [89-104]. He suggested that India does not need to produce leading-edge 2 nm chips but can add value in adjacent technologies and AI-infrastructure deployment, positioning itself as a resilient partner in global supply chains [101-108]. On policy, Thomas advocated public-private partnerships like the U.S. “Genesis” model, where government de-risks large-scale research while avoiding direct subsidies, to align funding with grand-challenge problems and accelerate innovation [116-128][216-226].


Vivek stressed the need for strategic autonomy-clearly defining which technologies to indigenize and which to keep open-to balance national security with global collaboration [142-148]. He also pointed to expanding skilling programmes such as NASCOM’s Future Skill Prime and a shift from rote learning to creative problem-solving, arguing that massive reskilling is required to prepare the next generation for AI-driven jobs [178-188]. Rahul described India’s manufacturing path as a “vertical-stack” model where firms integrate design, fabrication and system integration, encouraging experimentation across many domains despite limited resources [153-164]. Thomas concluded that sustainability must be embedded in product design, noting AMD’s commitment to flattening the energy curve while acknowledging the need for humility and continuous correction [278-285].


The moderator wrapped up by stating that momentum alone is insufficient; coordinated sequencing, disciplined capital, institutional alignment and infrastructure depth are essential for India to realise its AI and semiconductor ambitions [198-202][255-262]. Overall, the discussion underscored that India’s AI and semiconductor future hinges on collaborative public-private effort, strategic focus on sovereign capabilities, robust talent development and sustainable execution [21-34].


Keypoints

Major discussion points


AI leadership requires a holistic, cross-disciplinary approach – true AI dominance can only be achieved when silicon, software, systems and policy are aligned; no single element is sufficient and broad collaboration is essential. [21-34]


Credibility in deep-tech hinges on large-scale, systematic investment and a balanced policy framework – India’s AI mission, data-center tax holidays, and semiconductor design strengths must be scaled up, while policy must protect strategic autonomy yet remain open to global collaboration. [50-59][131-148]


Building manufacturing depth and supply-chain resilience calls for sustained capital and focused niche capabilities – rather than trying to match the most advanced fabs, India should target areas such as optics, co-package interconnects and packaging, leveraging public-private risk-sharing to grow a robust ecosystem. [66-80][92-110][216-236]


Talent development and skilling are critical for the next-generation AI/semiconductor workforce – education must move from rote memorisation to AI-augmented, creative problem-solving, supported by programmes like Future Skill Prime and extensive startup incubators. [178-188][255-259]


Public-private partnership models (e.g., the U.S. “Genesis” project) offer a template for India – government can de-risk strategic initiatives, fund grand-challenge research, and align academia, national labs and industry without directly subsidising private ventures. [116-128][225-230]


Overall purpose / goal of the discussion


The panel was convened to assess India’s current position and future roadmap in artificial intelligence and semiconductor technologies, identify the strategic gaps (intellectual foundation, manufacturing depth, sovereign capability), and propose coordinated actions across policy, industry, academia and capital markets that will enable India to become a credible, self-reliant AI power by the 2030 horizon.


Tone of the discussion


The conversation began with an enthusiastic, forward-looking tone, emphasizing the transformative potential of AI and India’s “poised” status ([21-34]). As the dialogue progressed, speakers adopted a more analytical and realistic tone, acknowledging existing shortcomings (limited IP, capital constraints, supply-chain fragility) and the need for disciplined execution ([50-59], [66-80], [92-110]). Toward the end, the tone shifted to a constructive, solution-oriented stance, highlighting concrete programmes, public-private partnership models, and a call to action for talent and policy makers, ending on an optimistic, motivational note about the nation’s collective journey ([178-188], [255-259]).


Speakers

Rahul Garg


Role/Title: Founder and CEO of Moglix (Mr.)


Areas of Expertise: Industrial supply-chain platforms, manufacturing, industrial finance, AI infrastructure scaling


Jaya Jagadish


Role/Title: Session Moderator; veteran semiconductor industry executive with three decades of design-engineering experience (Jaya Jagadish)


Areas of Expertise: Semiconductors, AI leadership, technology strategy


Thomas Zacharia


Role/Title: Senior Vice President for Strategic Technical Partnerships and Public Policy, AMD, Inc.; former director at Oak Ridge National Laboratory (Dr. Thomas Zakaria)


Areas of Expertise: Exascale supercomputing, AI systems, semiconductor policy, public-private partnerships


Vivek Kumar Singh


Role/Title: Professor and Senior Advisor on Science and Technology, NITI Aayog (Professor Vivek Kumar Singh)


Areas of Expertise: National science & technology policy, AI strategy, semiconductor ecosystem, biomanufacturing, innovation governance


Moderator


Role/Title: Session Moderator (Moderator)


Areas of Expertise: Session facilitation


Additional speakers:


Pooja – mentioned briefly as someone who could also join the closing remarks; no specific role or expertise identified.


Subhash Suresh – referenced as former president of the U.S. National Academy of Engineering; expertise in engineering leadership and grand challenges.


Ray Kurzweil – quoted regarding longevity and AI; known as futurist and inventor, expertise in AI, futurism, and health technologies.


Medi CEO – referenced in discussion about Indian startups; name not provided, role is Chief Executive Officer of “Medi”.


Vivek Murthy – appears as a transcription error; likely refers to Vivek Kumar Singh already listed.


External source citations:


Rahul Garg – [S1]


Jaya Jagadish – [S3]


Thomas Zacharia – [S4][S5]


Vivek Kumar Singh – [S6][S7]


Moderator – [S8][S9][S10]


Full session reportComprehensive analysis and detailed insights

The moderator opened the session with a brief overview of today’s computing stack-CPUs, GPUs, SoCs and AI engines-that underpins modern systems, and introduced the three panelists: Dr Thomas Zakaria, AMD; Prof. Vivek Kumar Singh, Senior Advisor, NITI Aayog; and Mr Rahul Garg, founder-CEO of Moglix [1-15]. After welcoming the audience, the moderator announced the start of the discussion [16-17].


Jaya Jagadish set the thematic tone, observing that artificial intelligence has moved from a niche technology to a catalyst reshaping entire economies [21-25]. She argued that genuine AI leadership requires synchronising silicon, software, systems and policy-“no one aspect can really get us there” [32-34]. Emphasising India’s readiness, she highlighted the country’s engineering talent, strong silicon-design base and a rapidly expanding ecosystem of system-level partners and manufacturers [37-39]. She then framed the panel’s inquiry around three strategic pillars: building the intellectual foundation, deepening manufacturing and supply-chain resilience, and establishing a credible sovereign AI capability [45-48].


Talent development – Prof. Vivek Kumar Singh highlighted the shift from rote, memory-based learning to AI-augmented, creative problem-solving. He cited the availability of free generative-AI tools, the NASCOM Future Skill Prime platform and widespread university incubators that make this “the best time to be a student” [178-188].


When asked how the next generation should be prepared for the AI-driven future, Jaya directed the question to Prof. Singh [170-172].


Manufacturing and capital – Mr Garg shifted the focus to post-COVID supply-chain shocks that have heightened political will to localise production, noting the government’s ₹1 lakh crore AI fund and private-sector commitments exceeding $100 billion for data-centres and related infrastructure [66-73]. He acknowledged that capital remains unevenly distributed but affirmed that investment is already flowing and that demand for AI-enabled products is rising sharply across the country [70-78]. He cautioned that the ability to execute at the required speed and scale still needs to be proven [79-80].


Strategic focus for manufacturing – When asked where India should concentrate its manufacturing efforts, Dr Zakaria argued that the country should move from merely seeking compute capacity to building sovereign capability [89-92]. He distinguished “sovereignty” (keeping data and applications within India) from “resilience” (developing indigenous IP and participating in the global supply chain without necessarily mastering the most advanced 2 nm nodes) [93-103]. Zakaria identified niche, high-value segments such as co-packaged optics and AI-infrastructure interconnects as realistic entry points, noting that these components are not widely available globally and that India could “stab the jib” in these areas [104-108].


He advocated public-private partnerships (PPPs) modelled on the U.S. “Genesis” program, explaining that the public sector should provide policy direction and demand signals while de-risking large-scale research through collaborative frameworks that fund compute infrastructure, software stacks and “lighthouse” problems, without directly subsidising private ventures [116-124][216-226]. He cited U.S. programs such as Genesis and highlighted China’s 20-year HPC-to-AI trajectory as examples of how coordinated national compute initiatives can seed long-term AI leadership [119-124][123-125].


Zakaria also pointed to AMD’s Helios project, an open-standard platform that could enable Indian firms to become leading providers of specific components, illustrating how open-standard ecosystems can create a competitive edge [210-212].


Policy perspective – Prof. Singh added a complementary view, urging a “strategic autonomy” approach: clearly delineating which technologies must be indigenised for national security and which can remain open to global collaboration [141-148]. He stressed that such a framework would protect critical components while still benefiting from international knowledge exchange.


Private-sector model – Mr Garg described the Indian private-sector approach as a “vertical-stack” model, where firms integrate design, fabrication and system integration while simultaneously developing ancillary ecosystems such as clean-room facilities, chemical suppliers and packaging verification [153-164]. He argued that experimenting across many domains-“throwing darts at hundreds of problems”-will eventually reveal the few areas where India can achieve first-mover advantage, even if the nation is currently “late to the party” in semiconductor technology [153-154][165-166].


Closing remarks – The moderator stressed that momentum alone will not secure India’s AI and semiconductor ambitions; disciplined sequencing, capital allocation, institutional alignment and deep infrastructure are essential [198-202]. He also raised the question of embedding sustainability as a core design choice rather than a trade-off [274-277].


Dr Zakaria responded that AMD designs its products with an explicit goal of flattening the energy curve, acknowledging that sustainability requires humility and continuous course-correction [278-285].


When asked what single move India must execute flawlessly, Rahul Garg emphasized that success will not hinge on a single action but on a fast-follower capability combined with a global-scale ambition. He argued that India must rapidly scale capital (≈ $10-20 bn) through coordinated public-private effort to compete with larger global pools [240-247].


Key agreements (each supported by transcript citations):


– AI leadership demands a holistic ecosystem linking silicon, software, systems and policy [32-34][116-119].


– Substantial public and private financing, preferably through PPP de-risking mechanisms, is critical [56][70-78][216-221][255-262].


– India’s large talent pool provides a fast-follower advantage that must be leveraged via coordinated action [38-39][214-215][239-240].


– Developing indigenous IP and nurturing local startups are essential for sovereign capability [84-86][98-100][56-57].


Points of divergence (with supporting citations):


Scope of manufacturing: Zakaria advocates focusing on niche supply-chain components such as co-packaged optics [104-108]; Garg argues for building mid-range fabs and a vertically integrated ecosystem that includes clean-room and packaging capabilities [153-164].


Capital mobilisation: Zakaria suggests government de-risk projects without direct subsidies [216-221]; Garg highlights the need for massive pooled capital (potentially $10-20 bn) that may require more direct state involvement [70-78][214-218].


Openness vs. strategic autonomy: Singh calls for clear rules on strategic autonomy, delineating indigenisation priorities [141-148]; Zakaria’s Genesis model favours an open, collaborative research environment [225-229].


Strategic posture: Garg’s fast-follower narrative contrasts with Zakaria’s forward-looking supercomputing mission that seeds long-term capability rather than merely chasing existing technologies [119-124][214-218].


The panel concluded with consensus that India’s AI and semiconductor future hinges on coordinated public-private effort, strategic focus on high-value niche technologies, aggressive talent skilling, and embedding sustainability into design. Unresolved issues include defining the exact roadmap for private-capital mobilisation, specifying timelines for moving from niche participation to more advanced fab capabilities, establishing mechanisms for IP transfer from academia to industry, and creating metrics to monitor sustainability outcomes. Together, these insights outline a roadmap that combines ambitious policy, targeted investment and a skilled workforce to realise India’s AI sovereignty by the 2030 horizon.


Session transcriptComplete transcript of the session
Moderator

Thank you. Thank you. across CPUs, GPUs, SoCs, and AI engines that power cutting -edge compute systems worldwide. She brings a rare combination of deep silicon expertise, global product leadership, and national ecosystem engagement. She is deeply committed to talent development in the ecosystem as well. Please join me in welcoming Jaya, who will be moderating our session. Our first panelist is Dr. Thomas Zakaria, Senior Vice President for Strategic Technical Partnerships and Public Policy at AMD, Inc. Dr. Zakaria previously led Oak Ridge National Laboratory, where he oversaw the deployment of multiple world -leading supercomputing systems, including Frontier, the first exascale supercomputer. His career spans scientific discovery, national compute infrastructure, public policy, and global partnerships. Please welcome Dr. Thomas Zakaria.

Joining us is Professor Vivek Kumar Singh, Senior… advisor on science and technology at NITI IO. Professor Singh plays a central role in shaping India’s science, technology and innovation architecture. From R &D governance to university industry collaboration and state level innovation ecosystems. With a background in computer science, data analytics and experience in academic leadership at leading institutions, he bridges research depth with national policy execution. Please welcome Professor Vivek Kumar Singh. My apologies. And finally, we have Mr. Rahul Garg, founder and CEO of Moglix. Rahul has built one of India’s leading industrial supply chain platforms and has expanded into manufacturing and industrial finance, navigating the realities of scale, capital and execution in India’s industrial ecosystem. Please welcome Mr.

Rahul Garg. We will now be beginning the discussion. Thank you so much for joining us.

Jaya Jagadish

All right. Good afternoon, everyone. And I would like to extend a very warm welcome to each one of you for this session. And thank you for taking time to be here with us. So we are meeting at a moment when AI is no longer a niche technology. And these conversations have become foundational. And there is a shift in shaping the entire economies. And that’s the global impact that this technology can have. And having spent about three decades in semiconductor industry doing design engineering, I have seen compute evolve. From a single threaded processor to massively parallel AI systems. And that’s. stupendous growth that we have seen and a transformation of technology. And honestly, AI is a technology that is probably the most transformational that we will be able to see in our lifetimes.

And true AI leadership is something globally there’s a contest. Every country wants to achieve self -reliance and, you know, leadership in AI. And that’s the importance of this technology that we are talking. But true AI leadership itself happens when silicon, software, systems, and policy, all of these aspects have to come together to achieve that leadership. No one aspect can really get us there. And that’s what truly excites me for today’s session. We have experts who have the knowledge. In each of these, many of these aspects, and we will be, you know, asking questions and they’ll be sharing their perspectives on this, which I’m sure all of us will enjoy listening to. So coming to India, from what I see, India is truly well poised for this technology shift.

And we bring together engineering talent, silicon design strength, and a growing ecosystem of system and infrastructure partners, including manufacturing. But what truly defines and makes this moment different is the scale and the speed at which we are moving. So we do see a strong commitment, but what is also important is collaboration. No one country or one organization can truly achieve the results or be successful at this, but we all need to collaborate. We all need to become very aware because this is not a simple thing. It has the potential to touch human lives and humanity. At a time when we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation where we are in a situation through this panel, today I want to look at three perspectives.

First, how do we continue to build the intellectual foundation? Second, how do we build manufacturing depth and supply chain resilience through a sustained investment model? And third, how do we build a credible, sovereign AI capability? I will get to Vivek. I’ll

Vivek Kumar Singh

Thank you, Jaya. This is a very important thing, very important question. I think India has already taken a call to go in a big way in the whole deep tech domain. And a lot of changes that we see happening in terms of AI compute, then AI data centers and so on. Recently, we all heard about the tax holidays for data centers that are going to be created in India. Also, platforms like AI Coach, because that’s very, very important. If you want to create AI applications for India, you need AI data, which is centered in India, which is for the context of India. So what I believe, when you talk about credibility and how credible we are into this deep tech domain, comprising AI, semiconductor, biomanufacturing, even other, areas what is very very important is that credibility doesn’t come only from announcements so what we what we really need to know and what we really need to do is to go at a scale and fortunately a lot of positive changes are already happening we have india ai mission we all know about that 10 000 plus crores for five years and it’s a very systematic effort where we have almost all so all seven pillars address you know all kind of needs that we need for ai and similarly if you look at semiconductors we we all know about what is happening in fabs also you know we know that india has a very strong ecosystem of uh you know vlsi design semiconductor design and so on unfortunately most of that ip is not with india but you know there’s a time when is also going to happen that india would also be owning a lot of ip so credibility i think uh for india would be very very important and this is coming not only as part of announcement but it’s it’s coming you know it’s coming it’s coming it’s coming it’s coming it’s coming it’s coming it’s coming you know as part of commitment for scaled deployments, for scaled growth, accelerated growth.

And what we see now is something, you know, which nobody could have thought of 10 years back or 5 years back. So we, I believe we are on the track and we are very much into, you know, into the whole realm of AI and semiconductors. And a lot of push is there and the whole ecosystem is evolving and we all, as we move further, we all are going to work towards, you know, creating a very, very credible ecosystem for overall growth of the sector.

Jaya Jagadish

Now, great insights. Thank you, Vivek. Now, moving to Rahul. There’s clearly a growing momentum to strengthen manufacturing in India. Given your journey, you have expanded Moglex from digital marketplaces into manufacturing and industrial financing. Do you believe the Indian private sector is truly ready financially and have the mind? set to take on long term investments that are needed?

Rahul Garg

So firstly, thank you for having me. I think the question is very pertinent because again, pre -COVID there was a very different environment, both from a geopolitics perspective, supply chain perspective. And I think the supply chain as a word started to become popular in COVID times. So and I think I take some pride in the fact that at least as Moglex, we have been part of seeing the supply chain journey in the country as well as continuing to now see the manufacturing journey. On the specific point that you raised on both from a will perspective, capital perspective, demand perspective, if you look at these three aspects of it. So I think the demand in the country clearly is growing rapidly.

And one of the changes that has happened, obviously India becoming the larger in terms of GDP size, consumer demand, people expecting faster and faster products, people want variety. products so on so forth so i think demand discretionary spend is increasing the one significant change that has happened post covid we see is that while the demand is growing there is also an increasing uh appetite for people to start building more and more manufacturing to also start to look at many of those being localized rather than just depend on global supply chains because obviously we have gone through moments where like we may not have enough mask capacity in the country we may not have enough oxygen concentrated capacity in the country and we some of those shocks kind of uh got both the private and the public sector realizing that there is a bare minimum manufacturing that needs to happen in the country for it to be truly self reliant at the population scale that we are in so i think that will has gotten generated the capital is starting to flow in i think the question on whether the capital is large enough and long term enough i think we are seeing increase increasing trend that there are clearly government will whether it is in terms of the fund that we have seen of 1 lakh crore, now 1 .2 billion dollar for specific AI deep tech, things like that.

But also private capital, which within this week, the numbers that I’m hearing is more than 100 billion dollar plus commitment from the private capital companies saying that they are going to invest into data centers, localizing, so on, so forth. So I think the capital is happening today. Is it happening broad -based? The answer may be no. No. But has it started to happen? And has it started to go from like maybe few hundred crores to few billions of dollars? So that is happening. Can we execute at the same speed and scale? Only time will tell.

Jaya Jagadish

Sure. No, there’s definitely an increased momentum. But along with manufacturing, I mean, I’m also biased more towards the design front based on my experience. I do definitely want to see lot more local startups. And Vivek just mentioned, we don’t have the IPOs. I mean, having our own IPs is one of the key steps. we need to take. So moving on, question for Thomas. If advanced fabs remain limited globally, where should India focus on in the near future? Where can we realistically create value in the next three to seven years?

Thomas Zacharia

Thank you, Jaya, and I just want to echo the sentiments that my colleagues here on the panel have mentioned, so I’ll build on that. So I think the opportunity for India is to move from compute to capability, right? I mean, that’s really where we need to be. And I’ll pick a couple of areas. So sovereignty and resilience gets intermingled. So I’m going to sort of keep those two things separate. Sovereignty is one where you are really trying to make sure that your data and your application or use cases are resident in country and it’s relevant to country. And that’s an area that is uniquely India to lead because no one else is going to do that.

It has to be done and you already mentioned the opportunity, we were with the CEO of Medi today talking about 50 ,000 startups. I don’t know how to get my head wrapped around 50 ,000 startups so I asked him, can you tell me who the top 50 are so that perhaps a company like AMD can partner with them and try to help them to mature. So that is on the sovereignty side. On the resiliency side the reality is that clearly India needs any sovereign country expects to have resiliency create their own IP and India should have the same aspiration given the scale of ambition and scale of population and here I think while we certainly should have an ambition to go up the development cycle to the leading edge of chip design.

I think there is an opportunity to also look at being part of the supply chain for leading edge deployment. So you don’t necessarily have to be at the two nanometer scale for GPUs or CPUs. There are critical technology in the deployment at scale of AI infrastructure where India can play a role. For instance, we know that the entire ecosystem is going to be driven to optics as interconnect technology, co -package optics. And there are clear supply chain that is not available globally. That is something that is being considered today. And leading candidates obviously today I would say are U .S., Japan, Malaysia. But those are the kind of niche areas where India can stab the jib.

And that is the journey where you are. really contributing to the first -of -a -kind or the nth -of -a -kind leading -edge technology. So that’s the way I would approach it.

Jaya Jagadish

Great insights. Thank you, Thomas. Now, continuing, today AI leadership is ultimately limited not by ambition, but by access to secure scalable computing resources. So, Thomas, continuing with you, you have led exascale class systems and now you’re working on sovereign AI partnerships globally. In the U .S., programs such as Genesys and broader national compute initiatives have attempted to systematically align infrastructure, research, and industrial capacity. So what lessons from these models are actually applicable to India?

Thomas Zacharia

So I think this is a great area for public -private partnership, in my view. The public part of it is a uniquely government function. Government brings both policy as well as the demand signal, particularly in the area of science and innovation, critical infrastructure, whether it is energy sector or national security, as well as uniquely government missions. And the opportunity here is to, I mean, India has supercomputing mission. I think there is an opportunity, and I think India is already thinking about deploying this national supercomputing mission and national scientific infrastructure that is on a trajectory to be at global leading scale. So today, countries like U .S. China. China is a particularly interesting example. China developed the intellectual ecosystem around HPC, which then translated to AI, over a period of 20 to 25 years.

It was intentional. And if you look at where the AI penetration, AI adoption, AI infrastructure resides globally, you can directly trace that to investments in sort of supercomputing mission that built the underlying infrastructure. So I think that is a great opportunity. Already plans are there. But it’s not a static view. So one of the things that I would encourage as we plan for the future is not plan based on where things are, but plan on where things will be by the time we deploy this kind of infrastructure.

Jaya Jagadish

That’s great. A future -looking planning is what we… Thank you, Thomas. Vivek, moving to you, from a policy standpoint, how do we balance national security concerns with openness and global collaboration?

Vivek Kumar Singh

Vivek Murthy Well, it’s a very tricky question, I would say. You know, for a country like India, we all know, you know, I mean, the kind of culture that we have in India is we have always believed in the fact that knowledge is a common good. And that is how, you know, our whole innovation ecosystem has been operating. Our universities have been creating a lot of knowledge and we all, you know, researchers, R &D persons, they have been trained with the fact that whatever you create should be for a, you know, for a common good. There were never efforts to productize them, to convert them into socioeconomic goods, to, you know, protect them with, you know, excess rights and so on.

So that was the common thing that we have been doing earlier. But what is happening now is that we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in a way that is, you know, we are not doing it in is a completely different word.

And that is where our academia, our R &D institutions are also being asked to, you know, change the complete quotes. So it’s not only that researchers, you know, faculty members in universities, they should end up with research publication, that’s all. So it’s very, very important that you productize also. Now what is happening, see, if you talk about the culture of innovation and how you see in terms of the global world that we are in, particularly for sectors like AI and semiconductor, I think what we need to do is we need to go for a, you know, a strategic decision -making in the sense that what is it that we want to do? So, for example, there are certain sectors where, you know, the setup that we are using has certain components which may be used in some critical deployment.

So in those cases, what we need is a set of clarity of rules. What is it that we would like to indigenize? What is it? that we would like to have built on our own? And what is it that we can keep open for rest of the world, for collaboration and so on? So I think what we need is, I would say two words would be important is strategic autonomy. Autonomy in the sense that autonomy where it is needed, but at all other places where we can collaborate with the world, where we can contribute in terms of collective knowledge creation, India can always play a role and India is playing a role

Jaya Jagadish

Great. Rahul, question to you. As AI infrastructure scales, demand patterns for chips and hardware will shift. How should Indian manufacturers position themselves early? And secondly, where are the first mover advantages?

Rahul Garg

I think. we are kind of late to the party in some sense in the semiconductor and chips some say it’s two decades three decades late to the party right so and then there are couple of countries which have a disproportionate advantage not just in terms of what is more popularly known as the 2nm and two three companies dominating that but also in terms of the entire ecosystem that is required around all of those factories and chipsets and systems so on so forth so I think for us I think the India journey will be its own unique path so that’s one thing that I’ve always at least over the last 20 years I’ve seen that if you were to wait for landline to become 10 % of the population we would have not had the mobile revolution if you had to wait for credit cards then I mean like it would not have happened right so I think the in this new era that we are living I think the manufacturers will have to find few spots which may not be as obvious which given the conventional way the countries and ecosystems are built and I think one of the good advantages of events like this is you start to have a very large population and smart and talented people throw darts at hundreds of problems simultaneously and maybe five years later we will say like okay we knew that these are the three things which will work or all of that kind of thing right but I don’t think there is like a today unique path I mean definitely does seem that we need to start building capabilities and capabilities need to be built design we have capabilities we don’t have the productize capability so that is one capability which needs to be built the manufacturing capability while we are starting with some of the fabs which are in the mid zone but the entire ecosystem of chemical suppliers like clean room suppliers utility suppliers.

How do you make sure that. there is enough packaging verification and many of that ecosystem getting developed. So all of those are going to happen simultaneously. So I think the opportunity remains in all of the areas. And I think therefore, at least my encouragement to even my management and the way we are looking at Moglex and so on and so forth is, I mean, you try 10 things. Do not be scared to try one thing or two things and then you fail. And conventionally also while in the Western and so world, there have been horizontal capabilities companies have built and scaled. In India, historically over the last 15 years, every startup, every large company has built vertical stacks of companies.

So they are doing an integrated. They may be chip designed to manufacture, to systems, to product. I mean, like that’s how just the model has evolved so far. So. I think that’s what vertical stack manufacturing all. parts of the ecosystem will have to give a shot and maybe over time will become horizontal.

Jaya Jagadish

That’s great. Thank you. So, you know, I do see quite a few students in the audience. So one thing that we are now facing is kind of with this technology. What is knowledge? How do we acquire knowledge? I mean, traditionally, we go to schools, universities for that. But today it’s at your fingertips. And with that advancement of AI, it’s just going to get better. You want to learn about something, you always have it on your fingertips. So what really do we need? How do we prepare the next generations to solve the problems of the future is the question. I mean, we cannot just stick around with our traditional ways of learning. And we have to scale and adapt to the newer ways.

So question for you, Vivek, how can we prepare ourselves and equip ourselves for this next phase that’s coming?

Vivek Kumar Singh

well I would say efforts have already started so that’s the best thing and as you rightly said this is the best time to be a student you know if you take yourselves 20 years back you will always be constrained with resources the best that you have is lot of books you will have to go to a library there are books that you can’t afford and so on and books are also not on time so you have later editions and so on so what is happening now is you know with lot and lot of information information which is which is you know can be customized for you specifically then you have lot of recommended systems you have retrieval augmented generation systems you know all of this with generative and what is happening so the best part is that you have plenty of information you want to learn anything you want to acquire a skill you always have resources and most of the time these resources you really don’t have to pay for that because this lot of material that you can access for for free.

The programs, particularly for India, we have something called NASCOM’s Future Skill Prime, where you can, you know, which is an aggregator for a lot of online courses. Similarly, there are platforms across the world that you can use. Now, what is happening is that essentially what we have been doing in our universities and generic colleges and, you know, other institutions earlier is that it was largely a kind of memory -based learning where we were acquiring knowledge, we were memorizing things. But now, over a period of time, it’s a more synthetic perspective which is being, you know, percolated across institutions. So, students now are going more into that creative aspect where they’re able to create solutions for certain problems.

And with the whole ecosystem around startups, we all know India is the third largest startup ecosystem of the world. With a lot of support system that we have, most of our universities have incubators. You know, other support systems. So, this is… best time and that is why I said this is the best time to be a student if you want to do anything if you have a creative idea you always find support and there are lot of skilling programs from government of India from many other organizations many you know philanthropic supports are there so even lot of organizations which have their own products they are you know offering free of cost training to students so this is very good but what is important is largely also due to the fact that you know we keep on hearing that AI is going to cause a number of you know disruption in terms of jobs that are there because lot of jobs which were there in areas like software testing and customer support and all of that is gone but at the same time these technologies are also creating new jobs and for that you need to prepare yourself and fortunately the best part is that we have enough material enough resources enough support system that we can use to create a new job and for that you need to prepare yourself for the new kind of jobs that are going to come so this The whole revolution that we see in front of us will require massive skilling, a bit of reskilling also.

So many of my batchmates, 25 years into the careers, and they now feel that they have to reskill themselves with many, many new things. Life was very good somewhere in Silicon Valley, 25 years, a lot of money, but now they feel threatened. And that is the beauty of startups and all these new ideas that are there. So I would simply end by saying that this is the best time to be

Jaya Jagadish

Absolutely, totally agree. You know, I have to share this thing. I was actually conducting a panel discussion within AMD with the senior execs. And one of the fun questions was, if there is a machine or an equipment that you want to invent, what would it be? And the unanimous answer is, I would love to have a machine that can make me 20 years old. 20 years younger, right? So, you know, you guys are extremely lucky, make use of this opportunity to the maximum. All right. So as we come to the final leg of this discussion, India’s opportunity in AI and semiconductors is very real. But it’s also time bound. Momentum alone will not be enough. Sequencing, capital discipline, institutional alignment and infrastructure depth truly matters.

And all these areas have to work in complete alignment with each other. So, you know, let me close the session by asking each of you one more question. First one is for Rahul.

Moderator

was if there is a machine or an equipment that you want to invent, what would it be? And the unanimous answer is I would love to have a machine that can make me 20 years younger, right? So, you know, you guys are extremely lucky, make use of this opportunity to the maximum. All right. So as we come to the final leg of this discussion, India’s opportunity in AI and semiconductors is very real, but it’s also time bound. Momentum alone will not be enough. Sequencing, capital discipline, institutional alignment and infrastructure depth truly matters. And all these areas have to work in complete alignment with each other. So, you know, let me close the session by asking each of you one more question.

First. First one is for Rahul. In the global race where others are moving fast, what is the one move India must execute flawlessly to stay competitive?

Rahul Garg

I think like many other things I think it’s not one move so maybe we do everything as a Bollywood dance move right so they’re like 10 moves to everything but I think the one of the things which has happened at least from my vantage in the startup ecosystem is over the last 15 years we have become extremely good at being fast followers like maybe 15 years back if there was a product or a service in US or in Europe it would take three to five year lag to come to India and now maybe that lag is like one month 15 days I mean like so probably chat GPT within the first one month the maximum number of users are coming from India right so we have become extreme fast thanks to technology that we are fast followers the number of apps that might be built in India might be higher than most countries combined together maybe US China might be the only ones but otherwise I think India would be in the top three in terms of building all the apps in the world I think the move that needs to happen is the scale of ambition beyond India into the global platform because most of this effort that has happened in last 15 years are around kind of dominating the Indian consumer businesses applications so on so forth I think we need to up the game on global and we would require a significant amount of public private upping the game because most of the countries capital pools that we are fighting cannot be only attracted by the private players so I mean if someone is raising 100 billion dollar 200 billion dollar we need to at least start the race with 10 billion 15 billion 20 billion right which is not possible today completely in the private so I think how do we bring this and push the you capital bar, global bar together as a government and as private players I think that’s one thing I would love to see

Moderator

That’s a very valid statement Right Next question to Thomas Thomas, if we had to place one strategic bet that defines India’s position in AI and semiconductors by 2030 what should it be?

Thomas Zacharia

So I’m going to repeat what Rahul said there is, you know, the one I don’t know much about Bollywood dance moves but I would say one move is certainly ambitious I’m going to sort of regress back to a few previous questions since we have a few minutes, I thought I will sort of start with public -private alignment. Rahul mentioned that it is very, very hard for private sector to to to raise the kind of capital that’s being raised elsewhere in India. And that’s part of it is, you know, so one of the important things that government can do is to de -risk that enterprise. Now, I don’t believe that government should de -risk a private sector’s business venture by investing in that effort.

But there are unique places where government can de -risk through public -private partnerships that would enable this ecosystem to develop so that additional ventures can be taken up by private sector on their own. Because I don’t think that my taxpayer money should be used to subsidize. I mean, look, there is a role. So you mentioned Genesis. I did not describe Genesis. I don’t know whether… of you in the audience know what genesis is so I’ll take a couple of minutes to just discuss that as an opportunity to think about how to frame public private partnership so today United States spends a trillion dollars a year in R &D expenditure and roughly about 20 to 25 percent of that is government the rest is private sector now if you look at the R &D spend in the United States it’s been steadily growing at about keeping up with inflation maybe slightly above inflation 2 to 3 percent year over year but if you look at innovation output it’s been flat lined part of it is because the problems are getting more and more complicated discovering new materials cure for cancer all All those things are increasingly, significantly impactful for society, but also significantly challenging.

So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resources to bring academia, national laboratories, and private sector to identify what they call lighthouse problems, so you can call it grand challenge problems, that are relevant, that is likely to move the needle across these areas. And the government is then investing substantial resources for compute infrastructure, software stack, partnering with private sector in these important problems. Because it is being done in a… open, collaborative framework, private… This work is, in my view, appropriate for government to invest because the government is not investing directly in any particular business, but the business is able to take the fruits of this collaboration to drive innovation in their own sector.

So I think that is a really good model. I think it was already alluded to. If you are a fast follower or if you follow anybody, the danger is that it may be appropriate for a business, but as a nation, anytime you follow somebody, and if that is your ambition, you are destined at best to be number two, at best, because there is always somebody ahead of you. So I think for a country with the history of India, the ambition of India, the talent of India, and now the will of India, there is nothing wrong with aspiring to be, strategically deciding where India can go. can be world leading in part of this, I mean, no country is going to dominate every aspect of this ecosystem, identifying strategically where one can be that leader globally.

And I would say there are, at least if I can speak to AMD, just as an example, we were discussing about Helios and how it is based on open standards. There are many components. It may not be GPU that you start, but there are many components there where a private sector in India can aspire to be a leading provider based on open standards so that a business like AMD or a public private sector would say, well, I can get a better product, better total cost of ownership if I can plug into that. And one last thing, I cannot let you get away with, you know, just this time. I’m being great for the youngsters. Ray Kurzweil said that today, for each of us in this room, we age only eight months for every chronological year because of advances in medical care.

And that is true because longevity, people are living longer because of better drugs, better health, living, etc. So AI has the added advantage of providing greater solutions. So it’s not just the youngsters, there is hope for us if

Moderator

Absolutely. So we are all lucky to be here at this age of AI. We are truly lucky to be in this. No, that was very insightful. Thank you, Thomas. So Vivek, a question for you. I’m going to say what is the one bold decision, but I’m going to change that to what are some of the bold decisions. We must take to ensure we don’t look back and regret five years from today.

Vivek Kumar Singh

well i think uh what the the biggest advantage that india has is of course you know a huge pool of talent so that is something that we all need to rely on that’s that’s uh that’s the most important thing for a country like india and this essentially see india has an inherent culture of innovation so it’s not that you know we’re always following or we’re looking at technologies and so on the fact is the ecosystems where we have been living in they were not you know uh geared up they were not situated in the context that we were creating products so the culture of transforming that innovation to products has not been there unfortunately for a long period of time things are changing now and probably what we need to do is to invest more in our youth to invest more in skilling to invest uh more in how do we convert the knowledge that we generate generate in our universities, in our R &D labs into actual usable products which have, you know, socio -economic impact.

So that’s the most important thing that I believe we should be looking at. Of course, given the fact that we also have an advantage that we have the advantage of scale also. So, you know, a lot of things that we have done and we have proved it in terms of the digital public infra that we have created at a population scale of size India, it matters a lot. If you go to any part of the world, particularly anywhere in Europe, if you identify yourselves from India, you know, and you are in some discussion related to IT and so on, so you will always be regarded with, you know, a lot of depth in the sense that everybody believes that India is an IT superpower largely because of the talent that we have.

So this is something that we should leverage on and we should do it. And something that we really need to invest in heavily to see what is going to come for the next generation and to provide an environment and our prime minister keeps on saying ease of doing business so that is something that we really need to look into to to enable and to create an environment where we are able to transform the knowledge that we create into usable products

Moderator

no absolutely right i mean talent skilling and ease of doing business i mean all of these are coming together for india and in fact i led the committee for future skills and i worked i got the opportunity to work with 13 other eminent leaders from industry academia across the board and one thing that stood out was if we can actually get our skilling right we can actually supply talent not just for india but globally you know that’s something that’s going to be very effective you if we can get our skilling actions right so thank you again Today’s conversation was truly insightful and inspiring. We touched upon many aspects of semiconductors and AI and the ecosystem and India’s potential as such.

And again, AI leadership will not really happen by accident. It will require a deliberate alignment across policy, industry, research, and infrastructure. And we have many strengths that we need to work on. We need to work on strengthening and leveraging for the growth that we are ambitious about. And truly what matters now is decisive execution, moving with clarity and with urgency. So it’s going to be a great journey. And I once again want to iterate that we are truly lucky to be here in this phase. And what a fantastic journey we have ahead of us. And let’s be committed to that journey of learning and advancement. Thank you also. much for attending this session. I appreciate your time.

Thank you. Do we have time for audience questions? We can take one question, one or two. Out of 500 out of 500 sessions here, this is one on semiconductor. I’m very glad that you guys organized it. Very, very insightful. A few amazing questions and a good response. Quickly to my question. I teach AI and sustainability at IIM and I cover the entire supply chain starting from the rear, going through the chip design, manufacturing, the semiconductor supply chain, essentially, and all the way to data centers and electronic use. So, sustainability is at the core of all design decisions in my class. And that’s what we are trying to teach the new management human resources in India. Your thoughts on having sustainability not as a trade -off but as a core design choice for every decision that is made either in India or some country.

Thomas Zacharia

So it’s a great question and I think every one of us, certainly I can speak for the company that I represent here and I must say I’m in India so I am going to give a shout out to the 10 ,000 AMDers in this country. AMD would not exist without you. We would not be able to do what we are doing without the contributions that they make every day. So India is already very much part of a global supply chain. Sustainability is very key. We design our products with a goal, explicit goal of flattening the energy curve because it’s easy to say we’re going to build megawatts and gigawatts, which we may because it is going to be a fundamental infrastructure in which society is going to progress.

But it’s incumbent on us to ensure that we are very, very thoughtful and committed to sustainability. I also would like to say that we have to be humble enough to know that we are not going to get everything right. I was at a U .S. National Academy meeting where Subhash Suresh, he was the president at the time, we had just rolled out the grand challenges for the 21st century. And he said, you know, if you look at it, the grand challenges of the 21st century. are attempting to solve the problems created by the solutions to the grand challenges of the 20th century. So the reality is that we don’t know what we don’t know. But yet, as long as we use sustainability as a core goal and be humble enough to know that we are not going to get it all right, then I think we cannot stop progress.

We need to continue to move forward. But know that we are not going to get everything right and course correct as we go along.

Moderator

Okay, I’m told we are out of time. Yes, actually we are running out of time. And I really appreciate for joining us for this session. And I’m very much heartfelt thankful to our distinguished guests. So as a token from Mighty’s side, I would like to give a short, I mean cute memento. Pooja, you can also join. Thank you. Thank you. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (14)
Factual NotesClaims verified against the Diplo knowledge base (2)
Confirmedhigh

“Dr Thomas Zakaria is Senior Vice President for Strategic Technical Partnerships and Public Policy at AMD.”

The knowledge base lists Dr Thomas Zakaria as Senior Vice President for Strategic Technical Partnerships and Public Policy at AMD, confirming his role [S1].

Confirmedmedium

“Prof. Vivek Kumar Singh said AI‑augmented learning shifts education from rote memorisation to creative problem‑solving.”

Future-Ready Education material describes a shift from regurgitation-based learning to critical thinking and creativity, supporting Singh’s description of the transition to AI-augmented, creative problem-solving [S79].

External Sources (80)
S1
The Global Power Shift India’s Rise in AI &amp; Semiconductors — First. First one is for Rahul. In the global race where others are moving fast, what is the one move India must execute …
S2
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — First. First one is for Rahul. In the global race where others are moving fast, what is the one move India must execute …
S3
The Global Power Shift India’s Rise in AI &amp; Semiconductors — The discussion was framed around India’s opportunity in AI and semiconductors, with the moderator establishing that AI r…
S4
Building the AI-Ready Future From Infrastructure to Skills — – Timothy Robson- Thomas Zacharia
S5
The Global Power Shift India’s Rise in AI &amp; Semiconductors — – Thomas Zacharia- Rahul Garg – Vivek Kumar Singh- Thomas Zacharia
S6
The Global Power Shift India’s Rise in AI &amp; Semiconductors — -Vivek Kumar Singh(Professor): Senior advisor on science and technology at NITI Aayog; plays central role in shaping Ind…
S7
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — Joining us is Professor Vivek Kumar Singh, Senior… advisor on science and technology at NITI IO. Professor Singh plays…
S8
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S9
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S10
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S11
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S12
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biolog…
S13
Panel Discussion Data Sovereignty India AI Impact Summit — Okay, I’m quickly coming to the third question. I think you had so many things. Supply chain trust, absolutely. Today, i…
S14
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S15
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — And thank you. And maybe I will introduce a few of them. Agri -Co is transforming agriculture through digital tools that…
S16
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S17
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — So my point was that, for example, geo tagging of all the assets of your, you know, right from the power generation to t…
S18
Diplomatic policy analysis — Policy analysis serves as the backbone of diplomacy’s decision-making. It equips leaders and negotiators with the eviden…
S19
Judiciary engagement — AI implementation in judicial systems has wide-ranging effects on various stakeholders including lawyers, litigants, and…
S20
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Lastly, the analysis emphasises the importance of a cross-disciplinary approach. It highlights the necessity for collabo…
S21
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S22
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — Emphasis is placed on boosting supply chain resilience and embedding sustainability as fundamental to the private sector…
S23
The Battle for Chips — In conclusion, India’s strategic approach to developing a comprehensive semiconductor ecosystem demonstrates a commitmen…
S24
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Talent development, education and future skills
S25
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Massive Workforce Development Challenge: The industry faces a critical shortage of approximately 1 million skilled work…
S26
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The government’s response through the India AI Mission has established a shared compute framework providing access to 38…
S27
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — The participant explains that India is following the same successful approach used for DPI development, where basic buil…
S28
The Global Power Shift India’s Rise in AI &amp; Semiconductors — And again, AI leadership will not really happen by accident. It will require a deliberate alignment across policy, indus…
S29
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Jovan Kurbalija: Thank you. She’s quiet. Okay, okay. Good. Great. We heard from our excellent speakers at the very begin…
S30
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — A strategic ecosystem approach requires early use cases in areas where private sector can lead, areas where public secto…
S31
AI/Gen AI for the Global Goals — Boa-Gue mentions the African Startup Policy Framework as an example of an initiative to enable member states to develop …
S32
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S33
WS #462 Bridging the Compute Divide a Global Alliance for AI — However, other panelists emphasized the importance of local infrastructure for enabling indigenous innovation and ensuri…
S34
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “When it comes to discovery, we need to develop foundation models for proteins, RNA, cellular circuits and systems biolo…
S35
Trade Deals or Disputes? / DAVOS 2025 — 4. Investment De-risking: Ensuring stable fiscal arrangements and rule of law to encourage long-term investments. Vandi…
S36
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — -Policy and Regulatory Frameworks: Multiple panelists emphasized the critical role of government policy in reducing inve…
S37
Biology as Consumer Technology — Notably, the analysis highlights the importance of investors taking more risks, as venture funds often shy away from ris…
S38
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Moderate disagreement with significant implications. The disagreements are not fundamental conflicts but represent diffe…
S39
Day 0 Event #270 Everything in the Cloud How to Remain Digital Autonomous — This balanced approach influenced how other speakers framed their arguments, moving away from binary thinking toward mor…
S40
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — Crucially, the address underscored the importance of incorporating developing countries into the global supply chain, ad…
S41
Parallel Session D3: Supply Chain Disruptions – The Role and Response of NTFCs — In summary, the analysis accentuated TFAs as catalysts for managing and enhancing supply chain efficiency. It also under…
S42
How Investment Promotion Agencies (IPAs) and trade institutions could leverage digital tools to create sustainable supply chain partnerships’ — Cambodia has implemented the Pentagon Strategy, a new social and economic policy agenda, to combat climate change and pr…
S43
Keynote-Alexandr Wang — “That’s transformative, perhaps most especially in countries like India, where so many languages are spoken.”[11]. “That…
S44
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — Partnerships can help address the toughest challenges within a country by utilizing data-centric or artificial intellige…
S45
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — The value of cross-sector partnerships, especially during the pandemic, is emphasised. Collaborations between the public…
S46
WS #460 Building Digital Policy for Sustainable E Waste Management — Sustainability must be designed into products from the beginning rather than treated as an afterthought
S47
Empowering the Ethical Supply Chain: steps to responsible sourcing and circular economy (Lenovo) — However, there are challenges that hinder progress towards sustainability. The analysis identifies knowledge gaps in sus…
S48
Creating Eco-friendly Policy System for Emerging Technology — Decision making should be based on evidence. Her argument conveyed a positive stance towards the central role of higher…
S49
Multistakeholder Partnerships for Thriving AI Ecosystems — Not only the big players. So all those things need framework and need governance. And we have to make sure that the outc…
S50
Indias Roadmap to an AGI-Enabled Future — Researchers, founders and policy makers. At Chariot, we are proud to be one of the companies mandated to build frontier …
S51
The Global Power Shift India’s Rise in AI &amp; Semiconductors — The panelists emphasized that true AI leadership requires alignment across four key pillars: silicon, software, systems,…
S52
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S53
WS #270 Understanding digital exclusion in AI era — The speaker stresses the need for collaboration among multiple stakeholders to address AI challenges. No single stakehol…
S54
WS #462 Bridging the Compute Divide a Global Alliance for AI — Successful collaboration requires openness, compromise, and recognition of diverse community needs rather than imposing …
S55
From KW to GW Scaling the Infrastructure of the Global AI Economy — Specific timeline and investment details for India’s semiconductor manufacturing capabilities (Semicon mission) remain u…
S56
The Battle for Chips — Additionally, India advocates for providing more opportunities, investments, and technology to countries with greater po…
S57
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — Irakli Beridze (UNICRI) This comment introduced the governance perspective into the scientific discussion, emphasizing …
S58
High-Level session: Building and Financing Resilient and Sustainable Global Supply chains and the Role of the Private Sector — Businesses are encouraged to look outside the finite Caribbean market Effective collaboration, as demonstrated by the C…
S59
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Talent development, education and future skills
S60
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Massive Workforce Development Challenge: The industry faces a critical shortage of approximately 1 million skilled work…
S61
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — The panel reached consensus on the need for fundamental educational reform to prepare students for an AI-integrated futu…
S62
AI: The Great Equaliser? — While the introduction of AI technology may result in job losses in certain sectors, it also creates new job opportuniti…
S63
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The government’s response through the India AI Mission has established a shared compute framework providing access to 38…
S64
Cyber Resilience Playbook for PublicPrivate Collaboration — – Governments can completely exit the zero-day market and avoid research dedicated to finding software vulnerabilities….
S65
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — He introduces a panel of experts from different fields
S66
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Deepali Khanna from the Rockefeller Foundation opened by framing the central challenge: the digital divide is evolving i…
S67
Keynotes — At the European Dialogue on Internet Governance (EuroDIG) 2024, the imperative of multistakeholder collaboration in shap…
S68
Opening of the session — Support expressed for paragraphs 15 and 16
S69
IGF 2019 – Dynamic coalition on blockchain technologies — After Diedrich’s presentation, the moderator opened the discussion to questions from the audience. The first question wa…
S70
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — I heard from Jingdong JD. That’s the goose named with smart has doubled the last year and the the the fourth one is to i…
S71
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S72
Closing Ceremony — This argument positions artificial intelligence as a transformative force rather than merely a technological tool. It su…
S73
Closing remarks – Charting the path forward — Bouverot argues for comprehensive inclusion in AI governance discussions, extending beyond just governmental participati…
S74
Global Enterprises Show How to Scale Responsible AI — The implementation challenge extends beyond organisational commitment to practical tooling and automation. Gurnani empha…
S75
Keynote Adresses at India AI Impact Summit 2026 — Gore reinforced this assessment, noting that “India’s entry into Pax Silica isn’t just symbolic, it’s strategic, it’s es…
S76
We are the AI Generation — In her conclusion, Martin articulated that the fundamental question should not be “who can build the most powerful model…
S77
Open Forum #30 High Level Review of AI Governance Including the Discussion — Lucia Russo from the OECD emphasized three strategic pillars: moving from principles to practice, providing evidence-bas…
S78
Opening keynote — Bogdan-Martin framed the AI revolution as a pivotal moment for the current generation, calling it an opportunity to take…
S79
Future-Ready Education: Enhancing Accessibility &amp; Building | IGF 2023 — In the analysis, the speakers highlight the importance of future education being skills-oriented to prepare students for…
S80
AI (and) education: Convergences between Chinese and European pedagogical practices — **Norman Sze** (former Chair of Deloitte China) provided industry perspective on AI’s impact on professional work, notin…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
J
Jaya Jagadish
2 arguments141 words per minute1141 words484 seconds
Argument 1
Integrated approach: AI leadership requires coordinated silicon, software, systems, and policy (Jaya Jagadish)
EXPLANATION
Jaya stresses that true AI leadership cannot be achieved by focusing on a single element; it demands the simultaneous development of silicon hardware, software ecosystems, system integration, and supportive policy frameworks. This holistic coordination is essential for a nation to become a leader in AI.
EVIDENCE
She explained that “true AI leadership itself happens when silicon, software, systems, and policy, all of these aspects have to come together to achieve that leadership. No one aspect can really get us there” [32-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Global Power Shift report stresses that AI leadership demands deliberate alignment across policy, industry, research and infrastructure, confirming the need for a coordinated silicon-software-systems-policy ecosystem [S1].
MAJOR DISCUSSION POINT
Holistic AI ecosystem coordination
AGREED WITH
Thomas Zacharia, Moderator
Argument 2
Development of local startups and indigenous IP is essential for a sovereign AI ecosystem (Jaya Jagadish)
EXPLANATION
Jaya argues that building a sovereign AI capability depends on fostering homegrown startups and creating indigenous intellectual property rather than relying on external IP. Indigenous IP is a cornerstone for self‑reliance and credibility in the AI domain.
EVIDENCE
She noted, “I do definitely want to see lot more local startups. … having our own IPs is one of the key steps we need to take” [84-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Credibility in deep-tech is linked to owning semiconductor IP and fostering domestic capabilities, supporting the emphasis on indigenous IP and local startups