Responsible AI in India: Leadership, Ethics & Global Impact

20 Feb 2026 18:00h - 19:00h


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session examined how Indian corporations can move responsible AI from abstract principles to provable practice, emphasizing that trust, transparency and accountability are now foundational ( [1][34] ). Andy Parsons argued that responsible AI must become an operational discipline rather than a mere compliance slide, noting a shift toward “provable practice” ( [33][34] ). He warned of a trust crisis caused by the massive scale of generative AI and said enterprises need to prove how content was created, by which models and tools ( [38-44] ). Parsons introduced the Content Authenticity Initiative and the C2PA open standard, which embeds provenance metadata directly into media files and is backed by a cross-industry coalition including Adobe, Microsoft, BBC and others ( [55-62][66-68] ). He stressed that open, interoperable, non-proprietary standards must be implemented in working code, a point especially relevant for India’s huge digital population ( [70-74] ).


Prativa Mohapatra explained Adobe’s “ART” (accountability, responsibility, transparency) philosophy, describing how provenance checks are baked into products such as Firefly and Acrobat Assistant so that inputs are licensed and outputs can be audited ( [196-204][208-220][224-228] ). She added that coordinated legal, compliance and ethical teams are essential, and that neglecting any pillar threatens future readiness ( [235-239] ). Amol Deshpande highlighted that responsible AI must be orchestrated across all five AI layers, involve people, processes and guardrails, and cannot be a one-size-fits-all solution; instead organisations should offer a “bring-your-own-AI” framework ( [162-170][176-181][187-190] ). Vishal Anand Kanwati described NPCI’s transparent transaction-decline explanations via a language model and affirmed that governance principles such as transparency are non-negotiable for trust in payment systems ( [287-293][295-298] ). Satya Ramaswamy shared Air India’s generative-AI virtual assistant that handles millions of queries, with safety “knobs” and continuous human-in-the-loop monitoring to satisfy global aviation regulations ( [258-262][261-264] ). He argued that complying with diverse international regulations does not hinder innovation, citing the airline’s ability to launch the industry’s first AI assistant while remaining within regulatory bounds ( [341-345][350-354] ).


The panel debated whether industry-led governance can replace regulation; Amol and Vishal stressed the need for standards, awareness and industry partnerships, while both agreed that regulatory frameworks are essential to prevent AI misuse at scale ( [322-329][360-366] ). Sarika Guliani concluded that responsible AI is a commitment beyond compliance, requiring shared human values, cross-sector collaboration and alignment with the “people, planet, progress” agenda, and announced that FICCI will continue to drive the dialogue into action ( [370-376][382-383] ). The discussion underscored that responsible AI must be embedded in products, governed by open standards, and supported by both industry initiatives and regulatory oversight to realize its potential in India’s digital future.


Keypoints


Major discussion points


From principles to provable practice – the need for concrete standards and transparency


Andy emphasized that responsible AI must move beyond slide-deck principles to “provable practice” and that “you need standards, not just principles” [34-35][109-112]. He presented the C2PA open standard as a concrete example of an interoperable, non-proprietary framework that can embed provenance information directly into content [62-70].


Content provenance (C2PA) as a concrete case study for responsible AI


The Coalition for Content Provenance and Authenticity (C2PA) provides “content credentials” that travel with media, enabling users to see the full genealogy of an asset: what model created it, which tools were used, and so on [55-66]. The initiative rests on three pillars – transparency, accountability and inclusivity – likened to “nutrition labels” for digital content [75-86].


Enterprise-level implementation challenges and the “ART” governance model


Amol described responsible AI as an “orchestration of all layers” – technology, people, process and governance – and warned against a one-size-fits-all approach, stressing the need for guardrails and scalable templates [162-166][170-181]. Prativa echoed this with Adobe’s “ART” (Accountability, Responsibility, Transparency) framework, citing product-level examples such as Firefly’s built-in provenance and Acrobat Assistant’s safe-by-design workflow [196-210][221-228].


Regulation as both catalyst and requirement, balanced with industry-led standards


Andy framed regulation (EU AI Act, US state laws, India’s IT rules) as a “catalyst for good practices” [107-108], while Vishal highlighted the necessity of transparency in transaction decisions and referenced the RBI’s responsible-AI guidelines [286-293]. Satya explained how Air India complies with multiple global aviation regulators while still innovating with a generative-AI virtual assistant [341-354].


Ecosystem collaboration to bridge large enterprises and MSMEs


The panel repeatedly stressed that industry bodies (FICCI, C2PA, etc.) must disseminate frameworks so smaller players can adopt them. Amol called for “awareness → action → demonstration” and for industry partnerships to cascade guardrails downstream [322-336]. Prativa warned that without such shared standards, a stark divide will emerge between “big guys” and “MSMEs” [291-300].


Overall purpose / goal of the discussion


The session aimed to move the conversation on responsible AI in India from abstract principles to actionable, enterprise-level practices. By showcasing Adobe’s C2PA model, sharing governance approaches from Air India, RPG Group, and NPCI, and debating the interplay of regulation and industry standards, the participants sought to equip Indian corporates with concrete tools, frameworks, and collaborative pathways for deploying trustworthy, inclusive AI at scale.


Overall tone and its evolution


– The opening remarks were formal and aspirational, stressing the urgency of responsible AI [4-6].


– Andy’s presentation adopted an optimistic, solution-focused tone, highlighting a successful open-standard initiative [58-66].


– The panel discussion shifted to a pragmatic and candid tone, acknowledging real-world challenges (uneven adoption, cost, governance complexity) [90-101][162-181].


– As the conversation progressed, the tone became collaborative and constructive, with participants emphasizing shared responsibility, ecosystem support, and the need for balanced regulation [322-336][341-354].


– The closing remarks returned to a hopeful, call-to-action tone, urging continued dialogue and industry commitment [370-384].


Overall, the tone remained constructive throughout, moving from high-level inspiration to grounded, actionable discussion and ending with a collective commitment to advance responsible AI in India.


Speakers

Announcer – Event announcer/moderator


Vishal Anand Kanwati – Chief Technology Officer, National Payments Corporation of India (NPCI) – expertise in payments infrastructure and AI-driven fraud detection [S4][S5]


Sarika Guliani – Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI – expertise in AI policy and industry collaboration [S6][S7]


Dr. Satya Ramaswamy – Chief Digital and Technology Officer, Air India Limited – expertise in aviation AI applications and safety-critical systems [S8][S9][S10]


Shantari Malaya – Editor, Economic Times – expertise in technology journalism and AI policy coverage [S11][S12]


Prativa Mohapatra – Vice President and Managing Director, Adobe India – expertise in responsible AI product development and content authenticity [S13][S14]


Andy Parsons – Global Head for Content Authenticity, Adobe (runs the Content Authenticity Initiative) – expertise in content provenance and AI transparency [S15][S16]


Amol Deshpande – Chief Digital Officer and Head of Innovation, RPG Group – expertise in enterprise AI strategy and governance [S18][S19]


Additional speakers:


– None


Full session report: Comprehensive analysis and detailed insights

Opening & Context – Adobe, in partnership with FICCI, opened the session on “Responsible AI from Principles to Practice in Corporate India” [1-2]. The moderator emphasized that India’s current digital moment demands not only rapid AI adoption but responsible deployment, with trust, transparency and accountability described as “foundational” rather than optional [4-6].


Andy Parsons – From Principles to Proven Practice


Parsons framed the central challenge: responsible AI must move from a slide-deck concept to an auditable discipline [33-35]. He warned that 2026 will be the year responsible AI becomes both a duty and an innovation opportunity [21-22] and that organisations will soon be asked not whether they are responsible, but whether they can prove it [31-32]. He highlighted the need to consider implementation cost and day-to-day operational overhead when adopting responsible-AI practices [384-386].


The regulatory backdrop he outlined included the EU AI Act, California law, and India’s new IT rules on Synthetically Generated Information (SGI) [387-389]. He positioned regulation as a catalyst for good practice rather than a barrier [107-108].


Parsons described the trust crisis created by the massive scale of generative AI, noting that enterprises now produce or consume AI-generated content at “extraordinary” volumes [44-45] and that the crisis is “real … happening every day to our children” [390-392]. In India’s “world’s largest digital population” [47-50], synthetic media and misinformation are operational risks for businesses [51-52]. Without the ability to demonstrate what was made, how, and by which models, companies cannot meet corporate responsibility obligations [53-55].


To illustrate a concrete solution, Parsons introduced the Coalition for Content Provenance and Authenticity (C2PA). This cross-industry body – including Adobe, Microsoft, BBC, Sony, Qualcomm and others – has created an open, free, non-proprietary standard that embeds “content credentials” directly into media files [396-398]. The C2PA badge is already visible on LinkedIn posts, signalling provenance to viewers [393-395]. Its three pillars – transparency, accountability and inclusivity – are likened to “nutrition labels” for digital content, providing provenance information such as the generating model, tools used and camera metadata [75-86][70-74].
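The provenance mechanics described above can be pictured with a small, self-contained sketch. This is not the real C2PA implementation (actual content credentials are cryptographically signed manifests embedded in the media file and anchored to a trust list); the `make_credential` and `verify_credential` helpers, their field names, and the bare hash check below are simplified illustrations of how a claim can be bound to an asset and chained into a genealogy.

```python
import hashlib

def make_credential(asset_bytes: bytes, generator: str, actions: list[str],
                    ingredients: list[dict]) -> dict:
    """Build a simplified, C2PA-style provenance claim for an asset.

    Real C2PA manifests are signed and embedded in the file; here we only
    bind the claim to the asset's content with a SHA-256 hash.
    """
    return {
        "generator": generator,      # which model, tool, or camera produced it
        "actions": actions,          # edit history, e.g. ["captured", "cropped"]
        "ingredients": ingredients,  # parent claims, forming a genealogy tree
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
    }

def verify_credential(asset_bytes: bytes, claim: dict) -> bool:
    """Check that the asset still matches the hash recorded in its claim."""
    return hashlib.sha256(asset_bytes).hexdigest() == claim["asset_hash"]

# A "photo" later edited by a generative tool, carrying its parent's claim.
photo = b"raw-camera-bytes"
parent = make_credential(photo, generator="CameraModelX",
                         actions=["captured"], ingredients=[])
edited = b"raw-camera-bytes+generative-fill"
child = make_credential(edited, generator="GenerativeEditorY",
                        actions=["opened", "generative_fill"],
                        ingredients=[parent])

assert verify_credential(edited, child)             # unmodified since the claim
assert not verify_credential(edited + b"!", child)  # tampering is detectable
```

A viewer that walks the `ingredients` chain recovers the full genealogy tree the panel describes; in real C2PA, certificate-based signatures replace the bare hash, which is why platforms stripping the embedded metadata (discussed below in the report) breaks verification.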


Parsons acknowledged practical challenges: many social-media platforms strip metadata, undermining provenance [90-98]; consumer awareness of the C2PA symbol remains low [92-95]; and the business case for provenance is challenging because it does not directly generate revenue [100-101].


Panel Introduction – Shantari Malaya – The moderator introduced the panelists (Andy Parsons, Amol Deshpande, Prativa Mohapatra, Satya Ramaswamy, Vishal Anand Kanwati).


Amol Deshpande – Orchestrating Responsible AI


Deshpande argued that responsible AI must be orchestrated across the five layers of the AI lifecycle as understood by the panel and cannot be reduced to a single checklist [162-166]. He stressed the importance of people, processes and guardrails, describing a “bring-your-own-AI” model in which each function can adopt suitable templates while the enterprise provides common guardrails [176-183][187-190]. He warned against a “one-size-fits-all” solution, insisting that scalable, sector-specific templates are needed for enterprises ranging from manufacturing to services [180-183][186-188].
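One way to read the “common guardrails, function-specific templates” idea is as a policy check every AI use case must pass before deployment. The guardrail names, risk scale, and sector ceilings below are invented for illustration; they are not RPG Group’s actual framework.

```python
from dataclasses import dataclass

# Enterprise-wide guardrails: every function must satisfy these (illustrative).
COMMON_GUARDRAILS = {
    "human_review_above_risk": 3,  # risk scored 1 (low) to 5 (high)
    "pii_allowed": False,
}

# Sector-specific templates layered on top ("bring your own AI").
SECTOR_TEMPLATES = {
    "manufacturing": {"max_risk": 4},
    "services":      {"max_risk": 3},
}

@dataclass
class AIUseCase:
    name: str
    sector: str
    risk_score: int   # 1..5
    handles_pii: bool
    human_in_loop: bool

def approve(use_case: AIUseCase) -> tuple[bool, list[str]]:
    """Apply common guardrails plus the sector template; return (ok, reasons)."""
    reasons = []
    if use_case.handles_pii and not COMMON_GUARDRAILS["pii_allowed"]:
        reasons.append("PII use not permitted under common guardrails")
    if (use_case.risk_score > COMMON_GUARDRAILS["human_review_above_risk"]
            and not use_case.human_in_loop):
        reasons.append("high-risk use case needs a human in the loop")
    template = SECTOR_TEMPLATES.get(use_case.sector, {"max_risk": 2})
    if use_case.risk_score > template["max_risk"]:
        reasons.append(f"risk exceeds the {use_case.sector} template ceiling")
    return (not reasons, reasons)

ok, why = approve(AIUseCase("defect-detection", "manufacturing",
                            risk_score=4, handles_pii=False, human_in_loop=True))
assert ok and why == []
```

The design point mirrors the panel’s framing: the enterprise owns the non-negotiable rules, while each business unit brings its own template, avoiding both a centralized bottleneck and a fragmented checklist.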


Prativa Mohapatra – Adobe’s ART Framework & Product Embedding


Mohapatra explained that the first pillar of Adobe’s AI governance is its internal ART (Accountability, Responsibility, Transparency) model [196-198]. Every new Adobe product follows a rigorous, multi-step methodology that embeds provenance at the core. For example, Firefly, Adobe’s generative-AI tool, automatically attaches a “nutrition-label” style provenance tag to every output, guaranteeing that inputs are licensed and that the resulting content can be audited for compliance [208-212][214-220]. Similarly, the Acrobat Assistant inherits the trusted PDF workflow, allowing users to trace the origin of any generated document and ensuring that high-stakes outputs are traceable and legally sound [224-228]. She emphasized that legal and compliance teams must be integrated into AI governance, otherwise an organisation may fall short of future regulatory and risk requirements [235-239].


Satya Ramaswamy – Air India’s Generative-AI Virtual Assistant


Ramaswamy shared Air India’s experience with a generative-AI virtual assistant launched in May 2023, which has handled over 13.5 million queries with a 97% autonomous resolution rate [258-262]. The system employs “safety knobs” that can be dialled to balance user convenience against the risk of inappropriate responses, and customers are prompted to rate each answer’s appropriateness [260-262]. Air India also uses generative-AI models to monitor the performance of its own virtual assistant, and works with partners such as Adobe to obtain indemnity against failures [263-264]. The airline complies with multiple international aviation regulators (EU, US FAA, Indian DGCA) without letting compliance constrain Indian innovation [341-345][350-354].
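The “safety knob” idea can be pictured as an adjustable confidence threshold that trades autonomous resolution against escalation to a human agent. The threshold values and routing logic below are hypothetical stand-ins, not Air India’s actual system.

```python
def route_query(model_confidence: float, safety_knob: float) -> str:
    """Route a virtual-assistant answer based on an adjustable safety knob.

    safety_knob in [0, 1]: higher values demand more model confidence before
    the assistant answers autonomously, sending more traffic to human agents.
    """
    if model_confidence >= safety_knob:
        return "answer_autonomously"
    return "escalate_to_human"

# Tightening the knob (e.g. for safety-critical topics) increases escalation.
confidences = [0.99, 0.92, 0.70, 0.55]
relaxed = [route_query(c, safety_knob=0.6) for c in confidences]
strict = [route_query(c, safety_knob=0.9) for c in confidences]
assert relaxed.count("escalate_to_human") < strict.count("escalate_to_human")
```

In a human-in-the-loop deployment of this kind, the escalation rate and the customer appropriateness ratings become the monitored signals that tell operators whether the knob is set correctly.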


Vishal Anand Kanwati – NPCI’s Transparent Fraud-Detection


Kanwati illustrated how transparency can be operationalised in payments. NPCI has built a small language model that explains, in real time, why a transaction was declined, giving consumers clear, understandable reasons for fraud-related decisions [287-291]. He linked this practice to the RBI’s responsible-AI guidelines, stating that “the principles have to be adopted – there is absolutely no choice for us” [295-298]. For him, such transparency is essential to maintain trust in the nation’s digital payments ecosystem [286-293].
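For illustration only, the transparency pattern NPCI describes can be approximated by a lookup that turns machine decline codes into consumer-readable reasons; the codes and wordings below are invented stand-ins for the actual small language model.

```python
# Hypothetical decline codes mapped to plain-language explanations.
DECLINE_EXPLANATIONS = {
    "RISK_VELOCITY": ("This payment was declined because an unusually high "
                      "number of transactions was attempted in a short time."),
    "LIMIT_EXCEEDED": "This payment exceeds your daily transaction limit.",
    "DEVICE_MISMATCH": ("This payment was declined because it came from a "
                        "device not previously linked to your account."),
}

def explain_decline(code: str) -> str:
    """Return a transparent, consumer-facing reason for a declined transaction."""
    return DECLINE_EXPLANATIONS.get(
        code, "This payment could not be completed. Please contact your bank.")

assert "daily transaction limit" in explain_decline("LIMIT_EXCEEDED")
```

A language model, as in NPCI’s system, generalises this table to unseen combinations of risk signals, but the governance requirement is the same: every automated decline must map to an explanation a consumer can understand.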


Discussion on MSMEs & Ecosystem – Both Amol and Prativa stressed that the first step for MSMEs is awareness of responsibility, followed by actionable frameworks disseminated through bodies such as FICCI [322-326][328-336]. Amol warned that large enterprises must create reusable compliance frameworks because smaller firms lack dedicated legal or AI-ethics teams [304-307]; Prativa echoed that without shared standards a stark divide will emerge between “big guys” and “MSMEs” [291-300]. The panel agreed that industry consortia should cascade templates and best-practice guidance to lower-resource organisations [328-336][316-321].


Global vs Indian Regulatory Alignment – Satya noted that complying with EU, US and Indian DGCA regulations does not stifle Indian innovation [341-345]. Vishal argued that mandatory safeguards are essential to prevent AI from “going berserk” in critical financial systems [322-327][360-366].


Regulation vs Self-Governance – A broad consensus emerged that regulation is inevitable and can act as a catalyst for good practice, but “principles alone are insufficient” – concrete, interoperable standards are required [107-108][109-112][328-336][360-366]. Tension remained between Andy’s advocacy for a universal open standard (C2PA) [62-66][70-71] and Amol’s view that industry-specific templates are necessary [180-183][328-336].


Closing – Sarika Guliani – Guliani framed responsible AI as a value-driven commitment that goes beyond a compliance checkbox, linking it to the broader “people, planet, progress” agenda [370-376]. She thanked the panelists and announced that FICCI will continue to facilitate dialogue and drive collaborative actions to translate the discussed principles into concrete industry initiatives [382-383].


Key take-aways


– Shift from abstract AI principles to provable, operational practice across people, process, technology and governance [31-35][162-170].


– Importance of open, interoperable, non-proprietary standards such as C2PA content credentials for building trust in AI-generated media [75-86][70-74][396-398][393-395].


– Adobe’s ART framework shows how accountability, responsibility and transparency can be baked into product lifecycles (Firefly, Acrobat Assistant) [196-210][221-228].


– Continuous human-in-the-loop monitoring and adjustable safety guardrails are critical for high-risk deployments (Air India, NPCI) [260-264][287-291].


– Regulation is viewed as a catalyst, not a constraint, and must be complemented by industry standards to avoid fragmented compliance [107-108][328-336][360-366].


– Ongoing challenges include metadata preservation, consumer awareness of provenance symbols, and the resource gap for MSMEs [90-98][304-307].


– Sector-specific implementations provide practical road-maps for responsible AI at scale.


Unresolved issues – Raising widespread consumer awareness of provenance symbols; providing affordable, reusable compliance toolkits for MSMEs; balancing “light-touch” regulation with mandatory safeguards; and designing detailed human-in-the-loop processes for safety-critical AI systems. The panel suggested a combined approach: baseline regulatory safeguards, open-standard adoption (e.g., C2PA), and industry-led dissemination of sector-specific templates to ensure both interoperability and flexibility [328-336][360-366][322-329].


Thought-provoking remarks – Andy’s 2026 prediction; his challenge to prove responsible AI; the description of C2PA credentials as an open, free, cross-industry standard; Amol’s “one size doesn’t fit all” reminder; Prativa’s “nutrition-label” analogy; Satya’s use of generative AI to monitor its own assistant; Vishal’s language model that explains transaction declines; and the consensus that regulation can be a catalyst rather than a hindrance [21-22][31-32][58-59][62-66][70-71][180-183][186-188][78-82][260-262][287-291][107-108][341-345].


The panel left with a shared commitment to embed open standards, sector-specific guardrails, and regulatory compliance into AI products, ensuring that responsible AI becomes a practical, measurable capability across India’s corporate ecosystem.


Session transcript: Complete transcript of the session
Announcer

Welcome to this session titled, Responsible AI from Principles to Practice in Corporate India, presented by Adobe in association with FICCI. India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and productivity. But the real differentiator, is it about how quickly do we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite a guest. Andy Parsons, our Global Head for Content Authenticity at Adobe.

Andy, over to you.

Andy Parsons

Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.

I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for innovation for all of you in this room. This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.

And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week and certainly will continue to change this year, from should we be responsible with AI? I think that debate is well settled at this point, but can your systems actually prove that you have been responsible with AI, and how do you go about doing that? And for all of us, what does it cost in terms of implementation and day-to-day usage? So we want to position responsible AI today as a leadership and operating discipline rather than a regulatory obligation, although in 2026, I think it will be both.

So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across a cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Prativa in a moment.

But it’s made the trust problem absolutely impossible to ignore. So I think every enterprise in this room now produces or consumes AI-generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that.

Hundreds of millions of people consuming digital content every day. In this environment, synthetic content and AI-generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to provide some leadership in the C2PA, which hopefully many of you have heard about this week, which is the Coalition for Content Provenance and Authenticity.

And our sort of piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing totally free. So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, digital experience products, but had to be baked into the tools kind of at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others.

And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have encountered a content credential, and it will provide transparent context about a piece of media, whether it’s a video, audio, or image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross-industry coalition that includes all the companies I mentioned, Meta, camera manufacturers like Sony and Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others.

And the goal is for an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company. It should be standards-based. It should not be proprietary, but available to everyone. And this shared philosophy, I think, is especially important here in India. And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website. So what are the principles that the Content Authenticity Initiative and the C2PA reflect? Transparency: provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used.

Simple ideas like knowing that a photograph is actually a photograph and not generated. These are simple ideas. I’d say we’re well overdue having this ability, but we need it now more than ever. Accountability: you can trace those AI models, understand what was used, how it was used, and thereby understand the provenance if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children. No one’s going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.

Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India can apply the same kind of provenance at the same zero cost as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI transparency and responsible AI will present you, and I’m sure you’ll hear more about that with our esteemed panel. But adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here. We’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it.

Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are phones and cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I’ve often said that doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money. But it is critically important. And now we’re seeing that change as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.

What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. Responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference that we’re here to talk about today, the difference between principle and practice. I think you need cross-industry infrastructure. Techniques you use for responsible AI should be interoperable, open, and standardized. And I think you have a long track record here in India of mobilizing things like UPI payment infrastructure that not a single bank could do or a single government agency, but required massive scale cooperation for openness, standards, and most importantly, interoperability.

And last, enterprises, like many of yours in this room, that are willing and excited to go first, that really look at transparency and AI responsibility as an opportunity, not a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again, and in 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things, and that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries because we have representatives from all of them on our excellent panel today.

So we’re fortunate to have leaders from Air India, NPCI, RPG Group, and Adobe, each of whom is navigating and translating the sort of remarks that I’ve made in different ways for different outcomes and providing, I would say, exemplary ways to find their pathway through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantari Malaya, editor at the Economic Times, who covers these infrastructure and societal sort of breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me. And let’s get on to our panel. Shantari.

Shantari Malaya

Thank you so much, Andy. That was fantastic. It set the context very rightly for the next discussion coming up. So a warm welcome once again from my end. My name is Shantari Malaya. I’m editor at Economic Times. Welcoming you all to the panel. Right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI. very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI, industry, infrastructure, policy perspectives, it really becomes important to know what some premium leaders in the country are thinking about this. So this dialogue will examine some parts of responsible AI and how it’s really going to shape, reshape enterprise strategy.

So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanwati, Chief Technology Officer, National Payments Corporation of India, NPCI. We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe at India. This is a fantastic lineup and we shall get some very sharp insights from our panelists over the next half hour or so. So at the very outset, if you really see building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether it’s fairness, accountability, transparency, privacy and inclusivity, is really going to actually realistically be translated into enterprise strategy frameworks and how we are going to go about it.

Right. So Amol, let me call you into the discussion. Warm welcome. I’m keeping a tight watch on the timer right behind us. As we know, this is a summit of scale and we really need to help the organizers to clock good time. So Amol, very quickly, as I invite you into this discussion, you represent enterprises at, you know, while you are participating. As part of the RPG group, you also represent enterprises that are deploying AI at scale. and many more that are just finding their footing as well. So there’s a huge spectrum and diversity in the kind of organizations that you are representing. Two things for you very quickly. One is in large multiple, you know, businesses such as the RPG group, how are you really preventing responsible AI from really becoming something like a just mere centralized compliance exercise or something that’s on the other flip side becoming a fragmented business unit wise, you know, checklist.

So there are two risks that can happen, right? In a group, in a conglomerate, large scale or decentralized. So how are you really looking at the balance here? And how do you really see your role in an industry body as well? So all yours.

Amol Deshpande

Thank you, Shantiri. I’m very happy to be here. Thank you for having me with the, you know, esteemed panelists here. You ask a very pertinent question. You know, it’s to be or not to be, kind of a scenario when it comes to AI, but that not to be is not really a choice. and it did mention about the responsible AI and I would take a little stab at peeling and looking at where the responsible AI comes from when it comes to industries. It comes across all those five layers of the AI when we are looking at it. When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using for, the responsibility needs to be there at every layer.

It’s not one or the other. It has to be an orchestration of all the things. So far, AI in its very nascent forms had been a thing of center of excellences, trying use cases and seeing what it is happening, but now it has come to a scale. And when it comes to enterprises and manufacturing enterprises, which we have significantly higher share in terms of as a consumer of AI technologies, there is a very clear -cut view on how it is to be done. So you need to provide the playground for the enterprise. It has to operate function with agility. The other part is about people. The people is a very, very important stakeholder in the whole thing.

We are moving from generative AI to AI ML, more complex scenarios and agentic AI. So people is a very, very important aspect of it. The choice still remains with human sense. The awareness is important and enterprises like us spend a significant amount of time and effort in building that skill sets amongst the value chain of all the people who are doing it. Last but not least is the process and governance part which comes with it. It’s more of a guiding principles which need to be given so that it gives an opportunity. It’s more about, you know, if one can say it will be more of a bring your own AI kind of a scenario in every function.

You cannot provide one solution. One size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment with protected with guardrails is a key thing for us. Orchestration and getting a scale. That’s something which is there. But those templates are being exercised, practiced within the enterprises, practice it in a very diverse group like RPC at ourselves. And then that can be deployed at multiple levels.

Shantari Malaya

Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me quickly bring in Pratibha here. Welcome, Pratibha. So Adobe is consistently really positioned responsible as a product philosophy and a trust commitment. You may just have to switch that on. So Adobe has constantly spoken about responsible AI as a very fairly large commitment. So how do you really look at all these principles manifesting or panning out in terms of operationalizing it among all your product teams? And sending it out. as a strong positioning internally and at the same time as someone who’s led a lot of industry conversations, where do you really see enterprises struggling to get these things right?

So all yours here.

Prativa Mohapatra

Okay, thank you. So I think Andy set the context and since we are here not to learn about the principles but the practices, I think everybody should go back with certain practices. So the first practice of AI governance which we practice is art, which is accountability, responsibility and transparency. So if every person goes back to their organizations and talks about art, which is our philosophy, that’s practicing philosophy number one. And we actually have been doing this for our own products for a while now. And of course we are in the business of content for a very, very long time and now the same content is becoming available. It’s becoming the currency which everybody’s debating. So our principles have been there for a while, but how it is actualized.

And let me say how it’s translated into our products. And by the way, it’s in our products. It’s in our methodologies. Every new product that we have goes through a very strong, secure methodology with hundreds of steps inside it. So there’s principles embedded into how we create stuff. But a couple of examples. Firefly, which is our Gen AI tool, actually embeds what Andy said, those content traditions. Nutritions. It is anything that you have being generated out of this product will have that nutrition level. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law. You will not be getting into any liability issues. Because how you do it is by feeding.

Because AI is all about the input and output. So the input has to be something which will not land you in the trouble. You cannot take somebody else’s data. So here it is everything licensed. So it goes into the models, and what you create, the output which comes, then you have to test that output. With that output, will we be accountable? Will we be responsible in showing the transparency of how this was created? So I think that loop has to be created in using any AI. Firefly is an example. Let me talk about Acrobat, which everybody has. I’m sure 100 % of you have PDF files on your phones or on your machines. So Acrobat has this new feature called Acrobat Assistant.

It is agentic, but we have so many chatbots in the market. But when you come to an assistant like Acrobat Assistant, it is following the same principles that PDF was used to be created. So everybody is confident when you’re using PDF. So today, you would have read in the papers recently, the Supreme Court was very worried. that there were certain lawyers who had the petitions which had reference to cases which do not exist or it had certain laws stated which are fictitious. So imagine somebody’s created certain content using some sources which were not authentic. Now, if you use acrobat kind of products for that, you feed the data or you feed files from your own machine.

So you’re confident that what comes out of it, you can go back. So wherever there is this usage of high -stake output, enterprise -grade, you have to look at this input -output process and follow the philosophies within it. And I think for every enterprise who is doing that today, they really have to… Already Amal talked about the people process technology. I’m sure every organization today has a legal team, has a compliance team. But these teams have to re -opt to talk about AI compliance. Enterprises… they do business strategy, they have ethical strategies and then they have regulatory compliance. All the three anything that you do in AI ensure that you tick all the three. If you miss any one you might not be ready for the future.

So that’s how I see it.

Shantari Malaya

Absolutely. So I guess the thread of most of what AI now entails is about are we not moving fast enough or are we moving so fast that we’re not really able to own the operational consequences of what we’re trying to do as well, setting out to do as well. So great point on that Prithiva. I’ll circle back to you time permitting. Let’s see how best we can get back. Dr. Satya calling you in here. So aviation volumes, landscape, scale, I mean you name it, it’s all there. So how are we really looking at balancing AI driven innovation really? Where you’re looking at regulation, you’re looking at accountability, you’re looking at operational efficiency. At the same time, you cannot really compromise on user and customer experience.

How do these things really fall in place in terms of vision and metrics?

Dr. Satya Ramaswamy

Thank you, Shantiri. Since the audience is international, real quick introduction about Air India. Air India is India’s national flag carrier. We operate about 300 aircraft, and we carry about more than 100 ,000 customers a day. And we have a few hundred airplanes on order. So once they are delivered, we will be one of the biggest airlines in the world on the size of one of the large three American carriers. So we are building it up to an airline of scale, and it brings about very interesting challenges that we talked about. So let me illustrate the way we handle it with one of our own examples in generative AI. So in May of 2023, we launched the global airline industry’s very first generative AI virtual assistant out of India.

So it was a global first in the whole airline industry. Today it handles. It has handled about 13 .5 million queries so far from customers. about 40 ,000 queries a day and it operates at a cost which is 100 per query of a contact center and if you look at the customer preferences over a period of last two and a half years we have been operating this facing all the challenges you mentioned so from a customer preference perspective 50 % of the contact volume goes to the contact center they want to talk to a human agent the remaining 50 % of the contact volume comes to A .G out of which it handles 97 % of the queries autonomously only 3 % are escalated further to the agent so pretty high success rate and we faced this challenge from day one we started working on it in November 2022 when we were the whole Azure OpenA services are available we started working on it and the whole approach to responsibility safety has evolved over a period of time so if you dial the safety knob too much then it is an inconvenience to the customer we practically cannot answer any question because the customers are always changing the way they ask certain thing and we have to be very flexible and clearly Generative AI takes us a large step towards that.

At the same time we don’t want any jailbreak to happen we don’t want problem injection to happen we don’t want any inappropriate thing to happen so we are watching the whole performance of the Virtual Assistant A .G as we call it all the time. So we use in fact Generative AI to watch the performance of the Generative AI chatbot and also we have given the voice to the customer so at the end of the day when we send a response we also ask the customer did it answer your question and also allow them to give their reactions is it appropriate, inappropriate and thankfully over the last 20 years it has not answered one single question in an inappropriate fashion because we have embedded all the safety procedures all deep into the way that we handle it but now we are as the technologies are maturing for example we now have interesting technologies in terms of the prompt firewalls where we can centralize all these controls and obviously work with great partners like Adobe who do their diligence and the way that they have deployed some of these technologies, giving full indemnity to us in the event of a problem.

That gives a lot of confidence in the way that we manage the risk. So it’s about managing the risk of something that is not within the bounds happening versus the convenience of the customer and we handle it in a variety of ways like I talked about just now.

Shantari Malaya

Excellent. And given the kind of scale you’re operating at, I think every day is a new day.

Dr. Satya Ramaswamy

Yes, it is. We face challenges. There is new, brand new every day.

Shantari Malaya

Absolutely well stated. Thank you, Dr. Satya. Vishal, may I kind of call you into the discussion? We’re waiting to hear about you. Again, what do I say? NPCI, you know, largest, you know, digital payments, infrastructure platforms. you kind of call the shots in terms of taking some of those calls, you know, for want of a better coinage, you know, in terms of how the payments systems in this country move. So two quick questions here or rather sort of, you know, phrase it into one so that, you know, we can get a comprehensive view from you. How are you really looking at AI in terms of really, you know, being inclusive, you know, in trying to ensure fairness when it comes to two parts?

One is when you’re looking at how India can play an important part in creating a responsible AI by design for a digital, national digital infrastructure platform such as yours. And B, how are we really looking at, given the volume, scale, size, fraud also becomes an unfortunate part of the entire, you know, discussion. So how are we really looking at AI to be? Fair at the same time proactive and detective when it comes to looking at fraud. So how what are the aspects that you look at? keenly here

Vishal Anand Kanwati

I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high. I think that was the first principles on which we started. But over a period of time, once we had more data, once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So I think this was the fundamental principles when, you know, I mean, and then once we started with this success, we were able to understand the customers better, you know, their patterns better. And that gave us a lot of insights into, you know, fine tuning the models and taking it forward.

Absolutely. So coming to the first question, you know, that you asked, I think, you know, I think there are obviously the governance principles are core to it. But I think I would like to call out two of the things that is there. One is a transparency. If a customer has a transaction that is failed, I think you should know why it has been failed. and today we build a small language model where you can go and actually chat and say what happened to this transaction why is it declined and you know even if it is declined due to a fraudulent transactions uh you know due to a suspicious activity we can actually tell him today saying that you know this is where we feel you know you normally do you know don’t send this transaction or you don’t scan a qr ever this is the first time you’re doing so this is the reason why we have sort of declined so this level of transparency and ensuring there’s a person you know obviously we can’t you know have the army of people sitting and answering these questions but building systems and answering those questions is very very important and i think we have a beautiful framework rba has also given the framework from a responsible ai meaty document is fairly comprehensive so i think all the principles have to be adopted there is absolutely no choice for us um and and i think i i don’t see it as a challenge at all because in our experience it’s been very very helpful to obviously ensure the trust in the payment system is not compromised

Shantari Malaya

absolutely and also the fact that you know as you said given the scale that you’re operating in I’m also itching to ask you something but maybe I’ll pick your brains offline in terms of the human in the loop there but yeah that’s a discussion for another time so Pratibha curious to know here so responsible AI while you know in letter and spirit remains there do you think it kind of remains or rather getting relegated to the risk of becoming something of an enterprise large enterprise luxury is it able to cut across and come down the line to you know aspiring businesses growing businesses MSMEs is it able to cut through the noise what’s the responsibility of industry leaders and the larger enterprises in kind of harmonizing the framework that can define responsible AI How would you look at this?

Is it getting risked?

Prativa Mohapatra

Absolutely, yes. I think we stand in that time when on the creators of the AI technology, the big guys versus the small guys, that divide might just become very stark. Coming down to the users of the AI, which is the big enterprises and the MSMEs who are in a big rush to make profit, do something, so that divide can happen. Hence, it is responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It’s a tongue twister, but I think the responsibility is very, very big right now. Again, that Adobe’s example, while the entire AI big bang started happening after November 2022, so early 2023 -2024, but I think our models were there, or our content authentication initiative was from 2019.

So… I think that large enterprises who create technologies absolutely are responsible. And those frameworks now being taken up by many more is again an absolutely act of responsibility to back to the business. And so the creators of these technologies have to come together and keep on creating this method and methodology for others to adopt. Now the users of these enterprise AI -grade technologies. It’s very hard. I mean, I think 10 years back we had digital transformation. Now we are having AI transformation. So the big companies have to quickly create a new org structure, have to create the legal teams, which, by the way, had to just mull over the digital guidelines of various continents, countries, now have to go through the AI guidelines of countries.

So you have to infuse more people into that. Legal teams. So small organizations cannot do that. So the people, process, technology changes that is required to adopt this, big guys can maneuver, shift people, oh, we will take out people from here, put there. The MSMEs don’t have that luxury. So I guess it is creators have to create frameworks so the right technology is created. The users, the big guys, have to quickly tell the methodology. And then the other stakeholders, like the service providers, also quickly have to go from, like I come from this industry where we had custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?

Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since AI, as we have been hearing words like it’s a civilization change, it is a… similar to electricity and steam, it will change everything. So because of the impact… at society level to each one of us, the governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem.

Shantari Malaya

Absolutely. Very rightly stated about the fact that there is a larger collective responsibility from the bigger players in trying to define the standards. I think it’s very critical. Amol, if I may ask you, like Pratibha said that there is an accountability. MSMEs are growing and there is also policy that supports the growth at this point of time, right? So in their hurry to really scale and innovate, they are often forgetting what guardrails and consequences they have to face, you know, when it comes to their AI policies, strategies, implementations. So what’s the role of the ecosystem, the industry, industry bodies and the entire ecosystem at large in helping responsible AI moving in letter and spirit?

Amol Deshpande

Shantari, I think the first step towards being responsible towards anything is awareness, right? So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for that. That’s the first thing. Second is comes the action part of it, awareness, action, and then you demonstrate it through your product services or whatever you are trying to create and generate that kind of impact. How does that allocate? I echo the sentiment which Pravita has mentioned here is in terms of big players have to come up with those frameworks. Those frameworks need to get translated and industry partnership is a very key thing here through the industry bodies, right?

Where the learnings have to be disseminated into this. Second is it’s more of a demand and supply kind of a thing. So if the supply is there with the kind of right guardrails and responsible aspects which are there as a part of framework, then naturally the supplier starts aligning to it. For a business like us where we deal from, infrastructure to healthcare. and IT to agriculture, tires. See, it is a very diverse element and there is a different kind of templates which we need to do so. Organizations like us have the responsibilities of creating a framework which will be fair to us as well as to our customers and partners and those learnings can be shared across the value chain where MSMEs may not have access to that kind of an information and that to domain specific.

Mind you, this would change. It’s not like this one guardrails construct will work for everybody, but it would vary from industry to industry, function to function, and that kind of a cascading through the industry bodies like FICCI and others is, I think, very, very critical.

Shantari Malaya

Than you for that, Amol. Dr. Satya, if, you know, as we know now that there are enough global regulations or rules or recommendations that have come in. You have the EU AI Act, you have UNESCO’s recommendations, you have the OECD rules, I mean, the principles, so on and so forth. And India is also kind of inching towards developing its own strategies, policies, and approaches at this point of time. so the real leadership question at this point of time that remains is how are we looking at marrying global best practices with the diversity the scale, the fire in the belly that India has at this point of time we are really gearing to go but how are we really looking at it and besides of course we have a lot of domestic regulations industry wise as well, we have regulators we have even the DPDP Act, we have so many things that have come in how are we going to marry all of this and create harmony

Dr. Satya Ramaswamy

Absolutely, I think taking Air India, we are an international airline so we operate in many countries for example we go to North America US, where the federal aviation regulation is the key regulator, then we go to Europe all places in Europe, then obviously we operate in India where the DGCA is the regulator and they are doing a great job overseeing this industry and likewise in other parts of the world so by nature we are geared to looking at the regulation in all parts of the world in all parts of the world and we are looking at the regulation and we are looking at the regulation and we are looking at the regulation and being in compliance.

And our customers are international, and when we operate in this international geographies, we have to comply with the appropriate regulations. And again, you know, aviation is a very safety -critical industry, right? So what we do has a direct impact on the safety of the customers that we carry. And the notions are well embedded in the industry because of this, because it’s highly regulated. For example, even, you know, many of these planes can practically land themselves, right? I was in a simulator last week for an Airbus A320 and landing that plane in San Francisco airport. And as we were coming in, the plane was set at seven miles from touchdown, and, you know, my trainer pilot gave me the control.

And, you know, so I could run the plane practically on autopilot all the way. At the same time, you know, there is a red button in the joystick, so at any moment I feel that the airplane is not doing the right thing. the autopilot control is not in the right thing and quickly cancel and take our control, right? So that is, this concept is well embedded in the airline industry. So we know that we need to obey the regulations and do the right thing that is safe for the customer, but we also help the human in the loop take control if at any moment we feel the safety is at risk. So bottom line, you know, we comply with all the regulations, and it doesn’t in any way constrain Indian innovation.

For example, again, like I mentioned, we launched the global airline industry’s first -generation virtual agent out of India, and we have not had any challenge with any of the regulations around because we comply with all the regulations and we work with partners who approach it

Shantari Malaya

So, Vishal, taking a thread from what Dr. Satya said, I’ll close with you here. Is industry -led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in some other related industries where they said, you know, it’s very difficult to answer this. But, yeah, self -regulation may be a way forward given the scale that we are operating on. I’d like to know your thoughts here.

Vishal Anand Kanwati

Yeah, I think definitely the regulations are required, especially because AI can go berserk. And, you know, there are, I think, like I gave you the example on the transactions, you know. You know, today all the UPI transactions can get declined, you know. And that’s where we have a check where we say, you know, this is the only percentage that I can decline, even if I have to let go of the other transactions, right. So those safeguards are very much, very much required. And when this has to be across the ecosystem, I think, you know, the regulations are mandatory. And obviously it has to be consulted and we have to work with everyone. But it’s important.

While. All of us realize. it’s a great opportunity the innovation can really scale up I think regulation is one thing that I think we have to really take it as part of the initiatives, embed into our systems and then take it forward otherwise the chances of this having a challenge to us is really high

Shantari Malaya

Fair enough, I think in an economy that is maturing regulatory intervention is an inevitability that we must welcome at some level so great discussion, I think this was fantastic, itching to ask you more but I think we’ll have to call to a close this discussion, thank you so much let’s put our hands together to our esteemed panelists this was really nice may I now invite Ms. Sarika Guliani who is the Senior Director, Head AI Technology in Industry 4 .0, Communications, Mobile Manufacturing and Language Technologies at FICCI Thank you so much Sarika, please

Sarika Guliani

first of all what an insightful session I would say that if I start capturing the thoughts I think 2 minutes and 36 seconds would not justify that one but overall if I talk about it starting with Andy you mentioning about the initiatives what Eduvi was able to take in through and how you are responsibly developing the content. Pratibha talking about the art which is really interesting whether you talk about accountability responsibility and transparency and of course Amol mentioning about all the five layers and how the of course the responsible development of AI and the permutation needs to be done. Dr. Satya no second thoughts to it and that again goes to the NPCI also the kind of work which the national carrier of India is doing it or NPCI is handling it it has to be a balance of the responsible AI and the efficiency and actually the act which can be taken thank you so we left it on the note that what regulation is required while that’s a sentence would require another session on this because they would have a people who will talk about a light touch regulation versus a balanced regulation to something I’ll say that as part of Vicky and as part of the discussions what we were hearing it here we feel it that responsibility is not anymore a compliance check which is supposed to be there it’s a commitment of the technology that we should develop it which has a shared human values that is something what decisions we take now not in terms of the words what we are discussing here is not going to define our future it’s not something which is what we create we define what we choose to create is something what will get defined that is something is very important so the choice comes out from the whole thing you have heard the panelist talking about whether it was from the input side to the output side give a very good example by taking it through the whole process it needs to be taken so we simply feel it whether it’s any layer it has to be developed in a way keeping 
people in mind and the theme of the summit people planet and progress should be kept in mind while doing any technological innovation which is keeping the principles of responsible AI into the mind.

That is something which we highly feel it and support it and I think with that I like to thank our esteemed panelists. Thank you Andy of course for joining us today. Chantri for moderating it and well capturing in time. And of course our panelists Dr. Satya Ramaswamy, Chief Digital and Technology Officer Air India. Mr. Amol Deshpande, Chief Digital Officer Head of Innovation RPG Group. Mr. Vishal Kanwati, CTO, NPCI Ms. Pratibha Mohapatra, Managing Director Adobe and the of course my lovely audience through which we could sail through this and as part of FICCI we are thankful that we could have this joint session with Adobe, the team of Adobe and Nita and Nanya who had worked with my team and the people at the background who delivered it.

So thank you all for joining us. We don’t end the discussion here. We end the session here. FICCI is committed to get this dialogue further into action with the support of the players. And we look forward to your joining in that time. Thank you. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (41)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“2026 will be the year responsible AI becomes both a duty and an innovation opportunity.”

The knowledge base notes that by 2026 questions of AI responsibility and trust will move from after-thoughts to central concerns, and AI is expected to reshape management and organisational design that year, confirming the report’s view of 2026 as a pivotal moment [S104] and [S105].

Confirmedhigh

“India’s new IT rules on Self‑Generated‑Content (SGI) require transparency in AI‑generated content.”

India’s Synthetic and Generated Intelligence (SGI) regulations have been announced, mandating transparency so users can distinguish AI-generated content, matching the report’s description of the SGI rules [S108].

Confirmed (medium confidence)

“The regulatory backdrop includes the EU AI Act.”

The EU AI Act is identified in the knowledge base as a key piece of AI regulation, confirming its presence in the regulatory landscape referenced by the speaker [S109].

Additional Context (medium confidence)

“Trust, transparency and accountability are foundational for responsible AI deployment in India.”

Other sources stress that trust infrastructure is as critical as technical infrastructure and that accountability, transparency, rule of law and explainability are essential for AI governance, providing additional context to the claim [S59] and [S102].

Additional Context (low confidence)

“Responsible AI must move from a slide‑deck concept to an auditable discipline.”

Discussion of AI governance in 2026 highlights the need for clear accountability mechanisms and auditable practices, adding nuance to the report’s framing of responsible AI as an auditable discipline [S104].

External Sources (113)
S1
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S2
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S4
Responsible AI in India Leadership Ethics & Global Impact part1_2 — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S5
Responsible AI in India Leadership Ethics & Global Impact — -Vishal Anand Kanwati- Chief Technology Officer, National Payments Corporation of India (NPCI)
S6
Responsible AI in India Leadership Ethics & Global Impact part1_2 — The discussion concluded with Sarika Guliani from FICCI emphasising that “responsibility is not anymore a compliance che…
S7
Responsible AI in India Leadership Ethics & Global Impact — The session concluded with FICCI’s commitment to continue translating discussions into actionable frameworks. Sarika Gul…
S8
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact-part1_2 — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S9
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Dr. Satya Ramaswamy- Vishal Anand Kanvaty – Vishal Anand Kanvaty- Dr. Satya Ramaswamy Dr. Satya focuses on balancing…
S10
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S11
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact-part1_2 — So we’re fortunate to have leaders from Air India and PCI, RPCI, and the U.S. Department of Defense, and Adobe. each of …
S12
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — So we’re fortunate to have leaders from. Air India, PCI, RPG Group, and Adobe. each of whom is navigating and translatin…
S13
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me qu…
S14
Responsible AI in India Leadership Ethics & Global Impact — -Prativa Mohapatra- Vice President and Managing Director of Adobe India
S15
Driving U.S. Innovation in Artificial Intelligence — 2. Amy Cohen – Executive Director, National Association of State Election Directors 3. Andy Parsons – Senior Director of…
S16
Responsible AI in India Leadership Ethics & Global Impact — -Andy Parsons- Global Head for Content Authenticity at Adobe, runs the Content Authenticity Initiative at Adobe
S17
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — And last, enterprises. Like many of yours in this room, I’m sure you’ve all heard the phrase, that are willing and excit…
S18
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S19
Responsible AI in India Leadership Ethics & Global Impact — – Andy Parsons- Amol Deshpande – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S20
Opening of the session — This position was supported by multiple delegations (Switzerland, Australia, Canada) and created a clear divide with cou…
S21
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges – **Charlie Hal…
S22
Certifying humanity: Labeling content amid AI flood — These debates are no longer theoretical. Provenance-based initiatives such as theContent Authenticity Initiative (C2PA),…
S23
High-level AI Standards panel — Need for Enhanced Collaboration Among Standards Organizations The UK government advocates for an open, inclusive, multi…
S24
Closing the Governance Gaps: New Paradigms for a Safer DNS — Although regulation in the DNS industry is inevitable, it should aim to avoid fragmented jurisdictional approaches. If t…
S25
https://dig.watch/event/india-ai-impact-summit-2026/why-science-metters-in-global-ai-governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S26
Global Enterprises Show How to Scale Responsible AI — High level of consensus on core principles with nuanced differences in implementation approaches. This suggests a maturi…
S27
Closing remarks – Charting the path forward — ### From Principles to Practice A central theme was the need to move beyond abstract principles toward concrete impleme…
S28
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S29
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Clara Neppel:Thank you for having me here, it’s a pleasure. So as it comes to the polls, the question is of course what …
S30
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — These key comments shaped the discussion by highlighting the complexity of implementing ethical AI across different regu…
S31
AI That Empowers Safety Growth and Social Inclusion in Action — Despite sophisticated frameworks and governance structures, significant implementation challenges remain. The gap betwee…
S32
Laying the foundations for AI governance — Low to moderate disagreement level. The speakers largely agreed on problem identification but differed on solutions and …
S33
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Auke Pals: Thank you very much, Lisa. So I hope at this point, the EU AI Act could steer us in the right direction a…
S34
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — Boosting standardization process can establish a strong lay of requirements By focusing on education, industry collabor…
S35
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — – Moe Ba- Ke Wang- Li Tian- John OMO Bocar Ba emphasized the necessity of creating unified policy frameworks that work …
S36
Unlocking potential: Addressing inclusivity barriers in e-commerce trade to deliver sustainable impact in communities everywhere (United Kingdom) — To effectively support MSMEs, GSMA emphasizes the need for greater coordination between the private and public sectors. …
S37
Enabling trade inclusion for MSMEs, women and underrepresented communities through the postal network (UPU)- UPU TradePost Forum — However, women’s representation and empowerment in MSMEs are still limited. Currently, women are sole owners of only aro…
S38
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S39
Building Indias Digital and Industrial Future with AI — This comment shifted the discussion from abstract policy concepts to concrete technical and operational realities. It pr…
S40
WS #123 Responsible AI in Security Governance Risks and Innovation — These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, a…
S41
Responsible AI in India Leadership Ethics & Global Impact part1_2 — This set the foundational tone for the entire panel discussion, moving away from abstract principles to practical implem…
S42
Main Topic 3 –  Identification of AI generated content — A pervasive sentiment of distrust could potentially undermine democratic integrity by challenging its intrinsic structur…
S43
Certifying humanity: Labeling content amid AI flood — As a result, trust is no longer formed through close inspection. Few readers have the time, expertise, or tools to verif…
S44
Skilling and Education in AI — “Five second response, I think the one action that we need to take is improve the trust infrastructure and make sure tha…
S45
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — This comment shifted the discussion from high-level policy frameworks to concrete, relatable concerns. It established a …
S46
Responsible AI in India Leadership Ethics & Global Impact — Regulation should be viewed as a catalyst for good practices rather than just reactive compliance
S47
Keynote by Uday Shankar Vice Chairman_JioStar India — Policy frameworks should reflect India’s unique ambitions and avoid wholesale adoption of Western regulatory constructs,…
S48
Can we test for trust? The verification challenge in AI — Moderate to high disagreement with significant implications. The fundamental disagreement between Yampolskiy’s pessimist…
S49
Artificial intelligence (AI) – UN Security Council — Moreover, the lack of transparency can erode public trust. If people cannot see or understand how decisions affecting th…
S50
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Jibu Elias: Thank you very much, Yoichi-san. It’s an honor to be here to share my experience building a responsible AI e…
S51
Host Country Open Stage — Context-specific solutions are essential rather than one-size-fits-all approaches
S52
Building the Next Wave of AI_ Responsible Frameworks & Standards — Moderate disagreement level with significant implications for AI deployment strategies. While all speakers agreed on the…
S53
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S54
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — – Ioanna Ntinou- Mark Gachara Example of energy efficiency passes for houses in Germany and EU that are obligatory, mak…
S55
[Parliamentary session 1] Digital deceit: The societal impact of online mis- and disinformation — As users increasingly access information through AI, it’s essential to help them critically assess these tools and under…
S56
Laying the foundations for AI governance — – Industry attitudes toward regulation: whether companies genuinely want regulation or resist it due to power concentrat…
S57
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — The discussion revealed surprisingly few direct disagreements among speakers, with most conflicts being implicit rather …
S58
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Amandeep Singh Gill: Thank you so much, Jovan, and thank you to you, Diplo Foundation, and its partners for convening th…
S59
Driving Indias AI Future Growth Innovation and Impact — “Investment also includes energy infrastructure, because without energy, there is really no compute infrastructure you c…
S60
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S61
Closing remarks – Charting the path forward — ### From Principles to Practice A central theme was the need to move beyond abstract principles toward concrete impleme…
S62
High Level Session 1: Losing the Information Space? Ensuring Human Rights and Resilient Societies in the Age of Big Tech — This provides a concrete, real-world example of how radical transparency can work in practice, moving beyond theoretical…
S63
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S64
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — These key comments shaped the discussion by highlighting the complexity of implementing ethical AI across different regu…
S65
Responsible AI in India Leadership Ethics & Global Impact — “So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for …
S66
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges – **Charlie Hal…
S67
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Because how you do it is by feeding. Because AI is all about the input and output. So the input has to be something whic…
S68
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Nadja Blagojevic: Yes, very happy to. And thank you so much for having Google here. We’re very happy to be speaking with…
S69
Global Enterprises Show How to Scale Responsible AI — The implementation challenge extends beyond organisational commitment to practical tooling and automation. Gurnani empha…
S70
AI That Empowers Safety Growth and Social Inclusion in Action — Despite sophisticated frameworks and governance structures, significant implementation challenges remain. The gap betwee…
S71
Laying the foundations for AI governance — Low to moderate disagreement level. The speakers largely agreed on problem identification but differed on solutions and …
S72
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Galia :Three minutes? One minute each, three minutes together. But just to say, I think this has been a really, really r…
S73
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Auke Pals: Thank you very much, Lisa. So I hope at this point, the EU AI Act could steer us in the right direction a…
S74
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — Boosting standardization process can establish a strong lay of requirements By focusing on education, industry collabor…
S75
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — ### SME Criticality and Transformation Urgency Bocar Ba: I think I don’t have time to be controversial, but I don’t lik…
S76
Unlocking potential: Addressing inclusivity barriers in e-commerce trade to deliver sustainable impact in communities everywhere (United Kingdom) — Collaboration is emphasized as crucial for progress in Africa, specifically in facilitating cross-border payments, which…
S77
Enabling trade inclusion for MSMEs, women and underrepresented communities through the postal network (UPU)- UPU TradePost Forum — However, women’s representation and empowerment in MSMEs are still limited. Currently, women are sole owners of only aro…
S78
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S79
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S80
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S81
Building the Next Wave of AI_ Responsible Frameworks & Standards — The discussion maintained a consistently collaborative and solution-oriented tone throughout. It began with an authorita…
S82
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S83
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S84
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S85
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S86
WS #6 Bridging Digital Gaps in Agriculture & trade Transformation — The tone was largely optimistic and solution-oriented. Speakers were enthusiastic about the potential of the Internet Ba…
S87
Day 0 Event #257 Enhancing Data Governance in the Public Sector — The discussion maintained a pragmatic and collaborative tone throughout, with speakers acknowledging both opportunities …
S88
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — The discussion began with a technology-focused, optimistic tone about AI’s transformative potential but gradually shifte…
S89
Quantum Technologies: Navigating the Path from Promise to Practice — The discussion unfolded against a backdrop of significant global investment exceeding $40 billion in quantum technologie…
S90
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S91
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began with an optimistic, exploratory tone as panelists shared different models and success stories. The …
S92
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S93
WS #106 Promoting Responsible Internet Practices in Infrastructure — The discussion maintained a collaborative and constructive tone throughout, with participants showing mutual respect and…
S94
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — The discussion maintained a collaborative and optimistic tone throughout, with panelists demonstrating mutual respect an…
S95
Dynamic Coalition Collaborative Session — The discussion maintained a collaborative and constructive tone throughout, with participants showing mutual respect and…
S96
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S97
High-level SIDS Ministerial Dialogue: Key Challenges and Opportunities — Concluding the address, the speaker alluded to further information that remained unshared due to time constraints. They …
S98
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S99
Open Forum #58 Safety of journalists online — The tone of the discussion was initially somber when describing the serious threats journalists face, but became more co…
S100
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S101
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S102
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — The accountability mechanisms, transparency, rule of law, and explainability are crucial
S103
Open Forum #30 High Level Review of AI Governance Including the Discussion — Melinda Claybaugh: Thank you so much for the question, and thank you for the opportunity to be here. As you were giving …
S104
AI in 2026: Learning to live with powerful systems — Early deployments of AI were often marked by ambiguity. Who is responsible when an automated system produces an error? H…
S105
How AI in 2026 will transform management roles and organisational design — In 2026, AI will transform management structures and automate tasks as companies strive to demonstrate real value. By 20…
S106
Keynote-Alexandr Wang — “We publish model cards and evaluation benchmarks and data so you can see how they work, their intended use, and how we …
S107
Toward Collective Action_ Roundtable on Safe & Trusted AI — Professor Jonathan Shock warned against the “Silicon Valley approach of move fast and break things” when dealing with go…
S108
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — India’s regulatory approach has gained unexpected international acceptance, with the new Synthetic and Generated Intelli…
S109
Stricter rules and prohibited practices: Unveiling the EU AI Act’s regulatory framework — The AI Act, legislation aimed at regulating the use of AI and preventing its harmful effects,has received approval from …
S110
WS #203 Protecting Children From Online Sexual Exploitation Including Livestreaming Spaces Technology Policy and Prevention — ## Alarming Statistics on Self-Generated Content Key themes that emerged included the need for better age assurance mec…
S111
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — I mean, I think that it’s exacerbated according to the data. The only thing that I can tell you is that trust has been e…
S112
Generative AI presents the biggest data-risk challenge in history — Cybersecurity specialistswarnthat generative AI systems, such as large language models, are creating a data risk frontie…
S113
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Andy Parsons
5 arguments, 191 words per minute, 2021 words, 632 seconds
Argument 1
Shift to provable practice rather than abstract principles
EXPLANATION
Andy emphasizes that responsible AI must move from high‑level principles to demonstrable, operational practices that can be verified. He frames this shift as essential for enterprises to prove they are acting responsibly.
EVIDENCE
He notes that the discussion theme is “shift from principles to provable practice” and asks whether systems can actually prove responsible AI, highlighting the need for evidence rather than just policy statements [34-35][31]. He also stresses that “you need standards, not just principles” to move beyond theory [109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes that “you need standards, not just principles” and frames regulation as a catalyst for moving from high-level principles to demonstrable practice [S5]; this supports the shift to provable practice.
MAJOR DISCUSSION POINT
From principles to provable practice
AGREED WITH
Prativa Mohapatra, Sarika Guliani, Amol Deshpande
Argument 2
C2PA content credentials provide an open, interoperable standard for provenance and authenticity
EXPLANATION
Andy describes the C2PA content credentials as an open, cross‑industry standard that attaches provenance metadata to media, enabling anyone to verify authenticity across platforms.
EVIDENCE
He explains that five years of work resulted in an open standard called the C2PA content credentials, visible as a symbol on LinkedIn, which provides transparent context for videos, audio, or images and is built on a cross-industry coalition [62-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources describe C2PA as an open, free, cross-industry standard for attaching cryptographically signed provenance metadata to media [S4] and highlight its broader adoption in the ecosystem [S21][S22].
MAJOR DISCUSSION POINT
Open standard for content provenance
AGREED WITH
Prativa Mohapatra, Vishal Anand Kanwati, Amol Deshpande
DISAGREED WITH
Amol Deshpande
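To make the provenance idea concrete, here is a minimal, self-contained Python sketch of the pattern C2PA formalises: a manifest describing how content was created is cryptographically bound to the media's hash, so any alteration of the media or the claim becomes detectable. All names here are illustrative assumptions; the real standard embeds signed JUMBF manifests with X.509 certificate chains directly in the file, not an HMAC with a shared key.

```python
import hashlib
import hmac
import json

# Illustrative shared secret; real C2PA signing uses per-issuer
# X.509 certificates, not a symmetric key.
SIGNING_KEY = b"demo-key"

def attach_manifest(media_bytes: bytes, tool: str, model: str) -> dict:
    """Build a provenance claim bound to the media's hash and sign it."""
    claim = {
        "tool": tool,          # e.g. the editing tool used
        "model": model,        # e.g. the generative model, if any
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the media has not been altered."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claim was tampered with or signed by another key
    return manifest["claim"]["content_hash"] == hashlib.sha256(media_bytes).hexdigest()

image = b"...image bytes..."
m = attach_manifest(image, tool="Photoshop", model="Firefly")
print(verify_manifest(image, m))                # True: provenance intact
print(verify_manifest(image + b"edit", m))      # False: media was altered
```

The design point this illustrates is the one Andy makes about interoperability: because the manifest travels with the content and the binding is cryptographic rather than proprietary, any party implementing the open standard can verify provenance without trusting the platform that delivered the file.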
Argument 3
Regulation acts as a catalyst; standards are needed beyond mere statements of intent
EXPLANATION
Andy argues that regulation should stimulate good practices, but merely publishing a responsible AI commitment is insufficient; concrete standards are required to achieve real impact.
EVIDENCE
He states that regulation, such as that in India, serves as a catalyst for good practices [107] and that “you need standards, not just principles” to move beyond a website commitment [109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulation is portrayed as a catalyst for good practices rather than reactive compliance, emphasizing the need for concrete standards [S5]; collaboration among standards bodies is also urged [S23].
MAJOR DISCUSSION POINT
Regulation as catalyst for standards
AGREED WITH
Amol Deshpande, Vishal Anand Kanwati, Shantari Malaya, Dr. Satya Ramaswamy
Argument 4
Metadata stripping by platforms and low consumer awareness hinder provenance adoption
EXPLANATION
Andy points out that many social media platforms remove metadata, reducing transparency, and that consumer awareness of provenance symbols is still very low, limiting adoption.
EVIDENCE
He notes that many platforms strip metadata when content is uploaded, and that consumer awareness is early, with users unfamiliar with the provenance pin and UI elements still developing [92-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Implementation challenges are highlighted, including platforms stripping metadata and low consumer awareness of provenance symbols [S5][S4].
MAJOR DISCUSSION POINT
Barriers to provenance adoption
Argument 5
Adobe’s Content Authenticity Initiative demonstrates baked‑in provenance across creative tools
EXPLANATION
Andy highlights Adobe’s approach of integrating provenance capabilities directly into core products rather than as add‑ons, creating a foundation for trusted AI content.
EVIDENCE
He recounts that Adobe decided five years ago to embed responsible AI via content transparency into tools like Photoshop and Premiere at their core, leading to the open C2PA standard now baked into products [58-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Adobe’s integration of provenance capabilities directly into core products like Photoshop and Premiere is documented, with the C2PA standard baked into these tools [S4]; Andy’s role as Global Head for Content Authenticity at Adobe is also noted [S5].
MAJOR DISCUSSION POINT
Baked‑in provenance in Adobe tools
Amol Deshpande
5 arguments, 180 words per minute, 758 words, 251 seconds
Argument 1
Responsible AI must be orchestrated across all AI layers with people, process, and governance
EXPLANATION
Amol stresses that responsible AI cannot be isolated to a single component; it must be coordinated across the five AI layers and involve people, processes, and governance structures.
EVIDENCE
He explains that responsibility spans all five AI layers and requires orchestration of technology, people, and governance, noting the need for agility, skill-building, and guardrails across the enterprise [162-177].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for orchestration of technology, people, and governance across AI layers is emphasized in multiple sources discussing enterprise-wide AI orchestration [S8][S10] and Deshpande’s framework for scalable playgrounds, people development, and governance [S4].
MAJOR DISCUSSION POINT
Holistic orchestration of responsible AI
AGREED WITH
Shantari Malaya
Argument 2
Industry bodies must disseminate and adopt such standards to avoid fragmented compliance
EXPLANATION
Amol argues that industry consortia should share and promote open standards so that compliance does not become siloed or inconsistent across sectors.
EVIDENCE
He emphasizes that industry partnership is key for disseminating frameworks, sharing learnings through bodies like FICCI, and preventing fragmented compliance, noting that templates must be adapted per industry but shared widely [328-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for enhanced collaboration among standards organisations to prevent fragmented approaches are made in the standards-collaboration report [S23] and in discussions about avoiding fragmented jurisdictional regulation [S24].
MAJOR DISCUSSION POINT
Standard dissemination via industry bodies
AGREED WITH
Andy Parsons, Vishal Anand Kanwati, Shantari Malaya, Dr. Satya Ramaswamy
DISAGREED WITH
Vishal Anand Kanwati
Argument 3
Industry consortia and bodies (e.g., FICCI, C2PA) should share frameworks and best‑practice templates
EXPLANATION
Amol calls for collaborative ecosystems where industry groups provide reusable templates and best‑practice guides, enabling smaller players to adopt responsible AI.
EVIDENCE
He describes a demand-supply model where suppliers provide guardrails and frameworks, which are then shared across the value chain through industry bodies, ensuring diverse sectors receive appropriate guidance [330-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of multi-stakeholder ecosystems for sharing best-practice templates and frameworks is highlighted in the standards collaboration panel [S23] and in the consensus-building analysis that notes nuanced implementation across sectors [S26].
MAJOR DISCUSSION POINT
Sharing best‑practice templates
Argument 4
Need for industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles
EXPLANATION
Amol notes that a single set of guardrails cannot suit every industry; tailored templates are required, but they should all rest on shared responsible‑AI foundations.
EVIDENCE
He repeats that “one size doesn’t fit all” and stresses the need for industry-specific templates that can be cascaded through bodies like FICCI while preserving core principles [180-182][328-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence of industry-specific templates built on shared responsible-AI foundations is provided in the analysis of sector-specific needs and core principle consensus [S26] and in the description of templates that vary by industry while preserving core guardrails [S4].
MAJOR DISCUSSION POINT
Industry‑specific responsible AI templates
DISAGREED WITH
Andy Parsons
Argument 5
RPG Group emphasizes a “bring‑your‑own‑AI” model with scalable, guarded orchestration
EXPLANATION
Amol describes RPG’s approach of allowing each function to adopt its own AI solutions within a common governance framework, ensuring scalability and safety.
EVIDENCE
He uses the phrase “bring your own AI” to illustrate that no single solution fits all, and highlights the need for scalable, safe environments with guardrails that can be practiced across the diverse RPG group [178-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RPG “bring-your-own-AI” approach, allowing business units to adopt AI within a common governance framework, is described in the session summary [S5] and aligns with Deshpande’s scalable playgrounds framework [S4].
MAJOR DISCUSSION POINT
Bring‑your‑own‑AI orchestration
Prativa Mohapatra
3 arguments · 155 words per minute · 1118 words · 432 seconds
Argument 1
Embedding “ART” (Accountability, Responsibility, Transparency) into product development and lifecycle
EXPLANATION
Prativa outlines Adobe’s internal AI governance model called ART, which embeds accountability, responsibility and transparency into every product’s development process.
EVIDENCE
She states that the first practice of AI governance at Adobe is “ART”-accountability, responsibility and transparency-and that every new product follows a secure methodology with hundreds of steps embedding these principles [196-207].
MAJOR DISCUSSION POINT
ART governance framework
AGREED WITH
Andy Parsons, Sarika Guliani, Amol Deshpande
Argument 2
Adobe’s Firefly embeds provenance “nutrition labels” to guarantee lawful, traceable outputs
EXPLANATION
Prativa explains that Adobe’s generative AI tool Firefly automatically attaches provenance “nutrition labels” to generated content, ensuring legal compliance and traceability.
EVIDENCE
She notes that Firefly embeds Content Credentials and nutrition labels, so any output carries provenance information that helps enterprises avoid legal liability and verify the source of the data and models used [208-212].
MAJOR DISCUSSION POINT
Provenance nutrition labels in Firefly
AGREED WITH
Andy Parsons, Vishal Anand Kanwati, Amol Deshpande
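The "nutrition label" idea discussed above can be sketched as a small provenance manifest bound to the generated asset by a content hash. This is a simplified illustration only, not Adobe's or the C2PA's actual implementation; the field names (`generator_model`, `training_data`, `content_sha256`) are hypothetical.

```python
import hashlib

def make_provenance_label(content: bytes, model: str, training_data: str) -> dict:
    """Build a hypothetical 'nutrition label' manifest for a generated asset."""
    return {
        "generator_model": model,        # which model produced the asset
        "training_data": training_data,  # licensing status of the inputs
        # Hash binds the label to these exact bytes, so stripping or editing is detectable.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_provenance(content: bytes, label: dict) -> bool:
    """Check that the label still matches the asset (detects tampering)."""
    return hashlib.sha256(content).hexdigest() == label["content_sha256"]

asset = b"generated-image-bytes"
label = make_provenance_label(asset, model="image-gen-v1", training_data="licensed")
assert verify_provenance(asset, label)          # untouched asset verifies
assert not verify_provenance(b"edited-bytes", label)  # edited asset fails
```

Real C2PA manifests are richer (signed assertions, edit history, certificate chains), but the core design choice shown here is the same: provenance travels with the file and can be checked by anyone.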
Argument 3
Small and medium enterprises lack resources for dedicated AI compliance teams; large firms must create reusable frameworks
EXPLANATION
Prativa points out that MSMEs cannot afford dedicated AI compliance structures, so larger enterprises need to develop reusable frameworks that can be shared or adapted by smaller players.
EVIDENCE
She observes that small organizations cannot set up dedicated legal and compliance teams for AI, whereas large firms can shift resources and create frameworks that can be disseminated, highlighting the disparity between big and small players [304-307].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion highlights the disparity between large enterprises that can develop reusable compliance frameworks and smaller firms that lack dedicated AI compliance resources, emphasizing the role of industry bodies in bridging this gap [S5][S26].
MAJOR DISCUSSION POINT
SME resource constraints for AI compliance
Dr. Satya Ramaswamy
4 arguments · 187 words per minute · 1064 words · 340 seconds
Argument 1
Deploying generative AI with safety guardrails, continuous monitoring, and human‑in‑the‑loop feedback
EXPLANATION
Satya describes Air India’s generative AI virtual assistant, which operates with adjustable safety settings, real‑time monitoring, and post‑interaction human feedback to ensure safe, reliable service.
EVIDENCE
He details the virtual assistant’s launch in May 2023, handling 13.5 million queries with 97 % autonomous resolution, using safety knobs to balance convenience and risk, and employing AI-based monitoring plus customer feedback on appropriateness to prevent jailbreaks or inappropriate responses [258-262].
MAJOR DISCUSSION POINT
Safety‑guarded generative AI assistant
Argument 2
Compliance with multiple global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation
EXPLANATION
Satya explains that Air India operates across many jurisdictions, complying with each region’s AI‑related regulations while still innovating, showing that regulation need not stifle progress.
EVIDENCE
He notes that Air India flies to North America, Europe and India, complying with the FAA, DGCA and other regulators, and asserts that meeting these rules does not constrain Indian innovation, citing the successful launch of the industry-first generative AI assistant [341-354].
MAJOR DISCUSSION POINT
Global regulatory compliance and innovation
AGREED WITH
Andy Parsons, Amol Deshpande, Vishal Anand Kanwati, Shantari Malaya
Argument 3
Guardrails must be balanced with user convenience; over‑restrictive safety reduces service quality
EXPLANATION
Satya warns that tightening safety controls too much can degrade customer experience, so a balance is needed between protection and usability.
EVIDENCE
He explains that dialing the safety knob too high makes the service inconvenient, and that the system must stay flexible to evolving customer queries while still preventing jailbreaks and inappropriate content [260-262].
MAJOR DISCUSSION POINT
Balancing safety guardrails and user experience
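The "safety knob" trade-off can be illustrated with a minimal sketch, assuming each response carries a risk score in [0, 1]. The knob value and threshold rule here are hypothetical, not Air India's actual mechanism: the point is simply that stricter settings block more borderline responses at a cost in convenience.

```python
def should_block(risk_score: float, safety_knob: float) -> bool:
    """Block a response when its risk score exceeds the knob-derived threshold.

    safety_knob in [0, 1]: higher means stricter. The threshold is
    1 - safety_knob, so a fully dialed-up knob blocks anything with
    nonzero risk, while a knob at 0 blocks nothing.
    """
    return risk_score > (1.0 - safety_knob)

# The same borderline response (risk 0.6) is blocked under a stricter
# knob but allowed under a looser one.
assert should_block(0.6, safety_knob=0.5)
assert not should_block(0.6, safety_knob=0.3)
```

Calibrating the knob then becomes an empirical exercise: measure how many legitimate queries get blocked at each setting and pick the strictest value that keeps the service usable.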
Argument 4
Air India’s generative AI virtual assistant handles millions of queries with 97 % autonomous resolution, backed by safety monitoring
EXPLANATION
Satya provides concrete performance metrics of the virtual assistant, demonstrating its scale, effectiveness, and the safety mechanisms that underpin its operation.
EVIDENCE
He cites that since its launch the assistant has processed about 13.5 million queries, averaging 40 000 per day, with a 97 % autonomous handling rate and continuous safety monitoring to prevent misuse [258-262].
MAJOR DISCUSSION POINT
Scale and effectiveness of AI assistant
Vishal Anand Kanwati
3 arguments · 184 words per minute · 584 words · 189 seconds
Argument 1
Ensuring transparency, fairness, and explainability in AI‑driven transaction decisions
EXPLANATION
Vishal stresses that customers should receive clear explanations for declined or flagged transactions, promoting fairness and trust in AI‑based fraud detection.
EVIDENCE
He describes a small language model that can chat with users to explain why a transaction was declined, providing transparency and helping users understand the decision, while also aiming to keep false-positive rates low [287-291].
MAJOR DISCUSSION POINT
Transparent AI decisions in payments
AGREED WITH
Andy Parsons, Prativa Mohapatra, Amol Deshpande
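A minimal sketch of transparent decline explanations, assuming internal reason codes are mapped to plain-language messages. The codes and wording below are hypothetical and do not reflect NPCI's actual system; a production system would feed such messages to a conversational model rather than return them verbatim.

```python
# Hypothetical mapping of internal decline reason codes to user-facing text.
DECLINE_REASONS = {
    "VELOCITY": "Too many transactions in a short window; please retry in a few minutes.",
    "GEO_MISMATCH": "The transaction location did not match your usual pattern.",
    "LIMIT": "The amount exceeds your configured transaction limit.",
}

def explain_decline(reason_code: str) -> str:
    """Return a plain-language explanation for a declined transaction.

    Unknown codes fall back to a generic message so the user always
    gets some explanation instead of a bare failure.
    """
    return DECLINE_REASONS.get(
        reason_code,
        "The transaction was declined by an automated risk check; contact support for details.",
    )
```

The design choice worth noting is that explainability here is a first-class output of the fraud pipeline, not an afterthought: every decline path must emit a reason code the explanation layer can translate.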
Argument 2
Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments
EXPLANATION
Vishal argues that regulations are necessary to keep AI systems from causing harm, especially given the high stakes of financial transactions.
EVIDENCE
He states that regulations are required because AI can “go berserk,” citing the need for safeguards to limit false positives and protect transaction integrity, and emphasizes that such rules must be embedded across the ecosystem [360-366].
MAJOR DISCUSSION POINT
Regulatory safeguards for payment AI
AGREED WITH
Andy Parsons, Amol Deshpande, Shantari Malaya, Dr. Satya Ramaswamy
DISAGREED WITH
Amol Deshpande
Argument 3
NPCI’s AI‑driven fraud detection provides transparent explanations for declined transactions while minimizing false positives
EXPLANATION
Vishal outlines NPCI’s approach of combining accuracy with explainability, ensuring that customers understand declines and that false‑positive rates remain low.
EVIDENCE
He explains that the system aims for low false-positive rates, improves accuracy over time through data and industry collaboration, and now offers a chat-based interface that tells users why a transaction was declined, reinforcing trust [280-291].
MAJOR DISCUSSION POINT
Transparent fraud detection at NPCI
Sarika Guliani
1 argument · 141 words per minute · 586 words · 249 seconds
Argument 1
Responsible AI is a value‑driven commitment, not just a compliance checkbox
EXPLANATION
Sarika asserts that responsible AI should be rooted in shared human values and ethical commitments rather than being treated merely as a regulatory formality.
EVIDENCE
She remarks that responsibility is no longer a compliance check but a technology commitment built on shared human values, emphasizing that decisions now define what we create rather than just following rules [372-376].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion stresses that responsibility is no longer a mere compliance check but a technology commitment built on shared human values [S5].
MAJOR DISCUSSION POINT
AI responsibility as value‑driven commitment
AGREED WITH
Andy Parsons, Prativa Mohapatra, Amol Deshpande
Announcer
2 arguments · 129 words per minute · 129 words · 59 seconds
Argument 1
AI is a powerful engine for innovation and productivity in India’s digital journey
EXPLANATION
The Announcer frames AI as a key driver that can accelerate India’s digital transformation and economic growth, positioning it as a defining moment for the nation.
EVIDENCE
The opening remarks state that India stands at a defining moment in its digital journey as AI becomes a powerful engine for innovation and productivity, highlighting the strategic importance of AI for the country [2].
MAJOR DISCUSSION POINT
AI as catalyst for national development
Argument 2
Responsible deployment of AI, grounded in trust, transparency, and accountability, is essential and non‑optional
EXPLANATION
The Announcer emphasizes that the real differentiator is not the speed of AI adoption but the responsibility with which it is deployed, insisting that trust, transparency and accountability must be foundational pillars.
EVIDENCE
The speaker contrasts rapid adoption with responsible deployment, stating that trust, transparency and accountability are no longer optional but foundational for AI initiatives [3-5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session highlights that trust, transparency and accountability are foundational pillars for AI, moving beyond optional compliance [S5].
MAJOR DISCUSSION POINT
Foundational pillars of responsible AI
Shantari Malaya
5 arguments · 160 words per minute · 1621 words · 605 seconds
Argument 1
Responsible AI principles must be translated into concrete enterprise strategy frameworks
EXPLANATION
Shantari argues that the value of responsible AI lies in its operationalization within companies, requiring clear strategies that embed fairness, accountability, transparency, privacy and inclusivity into business models.
EVIDENCE
She notes that building trustworthy and inclusive AI will depend on how realistically responsible AI principles are translated into enterprise strategy frameworks, and on how organizations go about doing so [144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to operationalise responsible AI within enterprise strategy frameworks is echoed in the orchestration literature and consensus-building reports on translating principles into practice [S8][S26].
MAJOR DISCUSSION POINT
Operationalizing responsible AI in enterprises
Argument 2
Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem
EXPLANATION
Shantari states that in a maturing economy, regulation will inevitably play a role and that stakeholders should view it as a positive catalyst rather than a barrier.
EVIDENCE
She remarks that regulatory intervention is an inevitability that must be welcomed at some level, indicating acceptance of regulation as part of the AI governance landscape [369-371].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulation is described as inevitable and should be viewed as a catalyst rather than a barrier, with recommendations to avoid fragmented regulatory approaches [S24][S23].
MAJOR DISCUSSION POINT
Regulation as a necessary component of AI governance
AGREED WITH
Amol Deshpande
Argument 3
One‑size‑fits‑all solutions are unsuitable; industry‑specific approaches are required
EXPLANATION
Shantari highlights that different sectors have distinct needs, praising the “bring‑your‑own‑AI” concept and emphasizing the necessity for tailored frameworks rather than uniform standards.
EVIDENCE
She acknowledges that “one size doesn’t fit all,” appreciates the “bring your own AI” coinage, and stresses the need for sector-specific solutions [186-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses stress that while core responsible-AI principles are shared, implementation must be sector-specific, reflecting nuanced differences across industries [S26][S4].
MAJOR DISCUSSION POINT
Need for sector‑specific responsible AI models
AGREED WITH
Amol Deshpande, Andy Parsons
Argument 4
The pace of AI adoption must be balanced with the ability to manage operational consequences
EXPLANATION
Shantari points out the tension between moving quickly with AI and ensuring organizations can own the operational risks and consequences of rapid deployment.
EVIDENCE
She reflects on whether the industry is moving too fast to own operational consequences, questioning the balance between speed and responsibility [241-245].
MAJOR DISCUSSION POINT
Balancing speed of AI adoption with operational responsibility
Argument 5
Industry bodies and ecosystems must help MSMEs adopt responsible AI frameworks
EXPLANATION
Shantari stresses that larger enterprises and industry consortia have a duty to create reusable, accessible frameworks so that small and medium businesses can implement responsible AI without prohibitive costs.
EVIDENCE
She asks Amol about the role of the ecosystem in helping responsible AI move from letter to spirit, highlighting the need for industry-led support for smaller players [316-321].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session notes that industry bodies have a duty to create reusable, accessible frameworks for MSMEs, reinforcing ecosystem support for smaller players [S5][S26].
MAJOR DISCUSSION POINT
Ecosystem support for MSME responsible AI adoption
Agreements
Agreement Points
Regulation is essential and can act as a catalyst for good practices, but standards are needed beyond mere principles
Speakers: Andy Parsons, Amol Deshpande, Vishal Anand Kanwati, Shantari Malaya, Dr. Satya Ramaswamy
Regulation acts as a catalyst; standards are needed beyond mere statements of intent
Industry bodies must disseminate and adopt such standards to avoid fragmented compliance
Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments
Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem
Compliance with multiple global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation
All speakers agree that regulation will inevitably shape responsible AI and should be viewed as a catalyst rather than a barrier; however, merely publishing commitments is insufficient – concrete, interoperable standards are required to translate principles into practice [107][109][328-336][360-366][369-371][341-354].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with the view that regulation should be a catalyst for good practices rather than mere compliance, as expressed in recent Indian policy discussions and OECD-style frameworks [S46][S47].
Transparency and provenance must be embedded in AI systems and clearly communicated to users
Speakers: Andy Parsons, Prativa Mohapatra, Vishal Anand Kanwati, Amol Deshpande
C2PA content credentials provide an open, interoperable standard for provenance and authenticity
Adobe’s Firefly embeds provenance “nutrition labels” to guarantee lawful, traceable outputs
Ensuring transparency, fairness, and explainability in AI‑driven transaction decisions
Responsible AI must be orchestrated across all AI layers with people, process, and governance
The panel concurs that AI-generated content and decisions need verifiable provenance and explainability; open standards like C2PA and built-in “nutrition labels” are examples, and platforms should avoid stripping metadata while providing clear reasons for AI-driven outcomes [62-66][92-98][208-212][287-291][162-177].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects calls for labeling AI-generated content and ensuring provenance to maintain public trust, echoed in UN and OECD reports on AI transparency [S42][S43][S55][S49].
One‑size‑fits‑all solutions are unsuitable; responsible AI frameworks must be industry‑specific and flexible
Speakers: Amol Deshpande, Shantari Malaya, Andy Parsons
One size doesn’t fit all; industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles
One‑size‑fits‑all solutions are unsuitable; industry‑specific approaches are required
It should not be owned by any one company; it should be standards‑based
Speakers stress that responsible AI cannot be a single monolithic solution; instead, sector-tailored templates and a “bring-your-own-AI” mindset are needed, underpinned by open, non-proprietary standards [180-183][186-188][70-71].
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with advocacy for context-specific standards rather than universal ones, highlighted in OECD and industry-led governance debates [S51][S57][S56].
Responsible AI must move from abstract principles to provable, operational practice embedded in products and processes
Speakers: Andy Parsons, Prativa Mohapatra, Sarika Guliani, Amol Deshpande
Shift to provable practice rather than abstract principles
Embedding “ART” (Accountability, Responsibility, Transparency) into product development and lifecycle
Responsible AI is a value‑driven commitment, not just a compliance checkbox
Responsible AI must be orchestrated across all AI layers with people, process, and governance
All agree that responsible AI should be concretised through baked-in product features, governance frameworks and measurable provenance, moving beyond policy statements to demonstrable practice [31-35][196-207][372-376][162-177].
POLICY CONTEXT (KNOWLEDGE BASE)
Matches the ‘principles-to-practice’ shift emphasized in multiple panels and policy roadmaps [S38][S41][S40].
Awareness and capacity building are prerequisite steps for responsible AI adoption, especially for MSMEs
Speakers: Amol Deshpande, Shantari Malaya
Responsible AI must be orchestrated across all AI layers with people, process, and governance
Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem
Both highlight that the first step is raising awareness and building skills across the ecosystem; industry bodies must help smaller firms acquire the needed capacity to implement responsible AI [322-326][241-245].
POLICY CONTEXT (KNOWLEDGE BASE)
Supported by capacity-building recommendations in AI policy roadmaps and South-South cooperation initiatives [S60][S58][S50].
Similar Viewpoints
Both assert that regulatory compliance does not hinder innovation; instead, regulation can drive the adoption of robust standards and good practices [107][109][341-354].
Speakers: Andy Parsons, Dr. Satya Ramaswamy
Regulation acts as a catalyst; standards are needed beyond mere statements of intent
Compliance with multiple global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation
Both emphasize the necessity of sector‑specific responsible AI frameworks rather than a universal solution [180-183][186-188].
Speakers: Amol Deshpande, Shantari Malaya
One size doesn’t fit all; industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles
One‑size‑fits‑all solutions are unsuitable; industry‑specific approaches are required
Both view responsible AI as a core value‑driven commitment that must be integrated into product lifecycles, not merely a compliance exercise [196-207][372-376].
Speakers: Prativa Mohapatra, Sarika Guliani
Embedding “ART” (Accountability, Responsibility, Transparency) into product development and lifecycle
Responsible AI is a value‑driven commitment, not just a compliance checkbox
Unexpected Consensus
Balancing safety guardrails with user convenience and transparency across very different domains (aviation and digital payments)
Speakers: Dr. Satya Ramaswamy, Vishal Anand Kanwati
Guardrails must be balanced with user convenience; over‑restrictive safety reduces service quality
Ensuring transparency, fairness, and explainability in AI‑driven transaction decisions
Despite operating in distinct sectors, both speakers converge on the need to calibrate AI safety controls so that they protect users without degrading experience, and to provide clear explanations for AI-driven outcomes [260-262][287-291].
POLICY CONTEXT (KNOWLEDGE BASE)
Highlights the tension between safety and usability noted in cross-sector discussions on AI governance, such as the aviation-payments analogy used in trust-infrastructure talks [S45][S54].
Overall Assessment

The discussion shows strong convergence among speakers on four pillars: (1) regulation as a catalyst paired with concrete standards; (2) transparency/provenance embedded in AI products; (3) the necessity of industry‑specific, flexible frameworks; (4) operationalising responsible AI through baked‑in product features and capacity building. These alignments cut across AI, data governance, and the enabling environment for digital development, indicating a mature consensus that can drive coordinated policy, standard‑setting and industry collaboration.

High consensus – most speakers echo each other’s positions, suggesting a unified industry stance that can facilitate rapid development of interoperable standards, supportive regulatory frameworks, and ecosystem‑wide capacity initiatives.

Differences
Different Viewpoints
Universality of open standards versus need for industry‑specific templates
Speakers: Andy Parsons, Amol Deshpande
C2PA content credentials provide an open, interoperable standard for provenance and authenticity
Need for industry‑specific templates that respect diverse sectors while maintaining core responsible‑AI principles
Andy advocates a cross-industry, open and free standard (C2PA) that should be adopted universally and not owned by any single company [62-66][70-71][109]. Amol counters that a single set of guardrails cannot suit every sector, insisting that “one size doesn’t fit all” and that templates must be tailored to each industry while still resting on shared principles [180-182][328-336].
POLICY CONTEXT (KNOWLEDGE BASE)
Debate mirrors the split observed in OECD workshops where open standards were weighed against sector-tailored templates [S51][S52].
Feasibility of industry‑led governance versus necessity of mandatory regulation
Speakers: Amol Deshpande, Vishal Anand Kanwati
Industry bodies must disseminate and adopt such standards to avoid fragmented compliance
Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments
Amol emphasizes that awareness, action and industry-body partnerships can drive responsible AI without heavy regulatory imposition, proposing a demand-supply model where standards are shared through consortia [322-327][328-336]. Vishal argues that regulations are required because AI can “go berserk”, insisting that safeguards must be embedded across the ecosystem and that regulation is unavoidable [360-366].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects ongoing disagreement between industry-led and government-mandated approaches documented in recent AI governance forums [S56][S57][S46].
Perception of the trust crisis in AI‑generated content
Speakers: Andy Parsons, Dr. Satya Ramaswamy
The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves
Air India’s generative AI assistant has not produced any inappropriate response in years, showing that the trust problem can be effectively managed
Andy stresses that a trust crisis is already evident in everyday media and business contexts, citing the proliferation of synthetic content and misinformation as real operational risks [38-40][42-44]. Satya, referencing Air India’s virtual assistant, claims that in over two years the system has never given an inappropriate answer, suggesting that the crisis is not as severe as portrayed [262].
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes concerns about public distrust of AI-generated media raised in UN and OECD panels on misinformation and content labeling [S42][S43][S45].
Unexpected Differences
Severity of the AI trust crisis
Speakers: Andy Parsons, Dr. Satya Ramaswamy
The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves
Air India’s generative AI assistant has not produced any inappropriate response in years, showing that the trust problem can be effectively managed
Andy portrays a pervasive trust problem affecting media and businesses, while Satya points to his airline’s AI system that has operated without any inappropriate outputs, suggesting a much less acute crisis than Andy describes [38-40][42-44][262]
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with assessments of a deepening trust crisis in AI, cited in UN Security Council remarks and academic surveys on trust erosion [S49][S45][S42].
Overall Assessment

The panel largely shares a common vision of responsible AI as essential and sees regulation, standards, and industry collaboration as necessary. However, clear points of contention arise around whether a single open standard can serve all sectors versus the need for industry‑specific templates, the extent to which regulation should drive governance versus industry‑led self‑regulation, and how serious the current trust crisis truly is.

Moderate – while there is broad consensus on goals, the differing views on implementation pathways (universal standards vs sector‑specific frameworks; industry‑led governance vs mandatory regulation; perception of trust risk) could affect the speed and coherence of policy and product roll‑outs. These divergences suggest that coordinated multi‑stakeholder dialogue will be needed to reconcile approaches before large‑scale adoption can proceed smoothly.

Partial Agreements
All three concur that regulation is required for responsible AI, but Andy frames it as a catalyst to spur standards, Vishal stresses it as a mandatory safeguard, and Shantari views it as an inevitable component that should be embraced [107][109][360-366][369-371]
Speakers: Andy Parsons, Vishal Anand Kanwati, Shantari Malaya
Regulation acts as a catalyst; standards are needed beyond mere statements of intent
Mandatory regulatory safeguards are essential to prevent AI misuse, especially in critical domains like payments
Regulatory intervention is inevitable and should be welcomed as part of the responsible AI ecosystem
All agree that smaller enterprises need support, but Amol emphasizes industry‑body distribution of templates, Prativa stresses large firms creating reusable frameworks, and Shantari calls for ecosystem‑wide assistance through bodies like FICCI [328-336][304-307][316-321]
Speakers: Amol Deshpande, Prativa Mohapatra, Shantari Malaya
Industry bodies must disseminate and adopt such standards to avoid fragmented compliance
Small and medium enterprises lack resources for dedicated AI compliance teams; large firms must create reusable frameworks
Industry bodies and ecosystems must help MSMEs adopt responsible AI frameworks
Takeaways
Key takeaways
Responsible AI must move from abstract principles to provable, operational practice across all AI layers (people, process, technology, governance).
Open, interoperable standards such as the C2PA content credentials are essential for building trust and provenance in AI‑generated content.
Embedding accountability, responsibility, and transparency (the “ART” framework) directly into product development cycles is a practical way to operationalise responsible AI.
Continuous safety monitoring, guardrails, and human‑in‑the‑loop feedback are critical for generative AI deployments, especially in high‑risk domains like aviation and payments.
Regulation is viewed as a catalyst; compliance with global regimes (EU AI Act, US state laws, Indian IT rules) can coexist with innovation when supported by industry standards.
Challenges include metadata stripping, low consumer awareness, and the resource gap for MSMEs to build dedicated AI compliance capabilities.
Sector‑specific implementations (Adobe’s Firefly provenance labels, RPG’s “bring‑your‑own‑AI” orchestration, Air India’s virtual assistant, NPCI’s transparent fraud‑detection) illustrate practical pathways.
Resolutions and action items
FICCI will continue to facilitate dialogue and drive collaborative actions on responsible AI among industry participants.
Panelists and their organisations committed to share frameworks, templates, and best‑practice guidance through industry bodies (e.g., C2PA, FICCI).
Adobe will promote wider adoption of C2PA credentials and embed provenance metadata in its product suite.
Air India will maintain and enhance its safety monitoring and feedback loops for the generative AI virtual assistant.
NPCI will expand its transparent AI‑driven fraud‑detection explanations and refine false‑positive rates.
RPG Group will disseminate its scalable “bring‑your‑own‑AI” governance model to other enterprises and partners.
Unresolved issues
How to effectively raise consumer awareness and UI visibility of provenance symbols at scale.
Specific mechanisms for supporting MSMEs in implementing responsible AI without the resources of large enterprises.
The precise balance between regulatory mandates and industry‑led self‑governance, especially regarding “light‑touch” versus stricter rules.
Details of human‑in‑the‑loop processes for AI systems in domains like aviation and payments were mentioned but not fully defined.
Standardisation of industry‑specific templates that satisfy diverse sector requirements while maintaining core responsible‑AI principles.
Suggested compromises
Adopt a regulatory approach that acts as a catalyst: mandatory baseline safeguards combined with flexibility for innovation (light‑touch regulation).
Combine industry‑led standards (e.g., C2PA) with regulatory requirements to avoid fragmented compliance and ensure interoperability.
Balance safety guardrails with user convenience by calibrating “safety knobs” and providing transparent fallback options (human escalation).
Leverage large enterprises to create reusable compliance frameworks that can be shared with MSMEs through industry consortia.
Thought Provoking Comments
2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for innovation.
Sets a concrete near‑term horizon, turning responsible AI from a vague aspiration into an imminent business imperative and innovation driver.
Created urgency that framed the rest of the discussion; panelists referenced the 2026 timeline when talking about upcoming regulations (EU AI Act, US California law, Indian IT rules) and the need to move from principles to practice.
Speaker: Andy Parsons
Can your systems actually prove that you have been responsible with AI, and how do you go about doing that?
Shifts the conversation from abstract ethics to measurable, auditable evidence of responsibility, introducing the notion of ‘provable practice’.
Prompted multiple speakers to discuss provenance, standards, and concrete mechanisms (C2PA credentials, product‑level metadata) that can demonstrate compliance, steering the dialogue toward technical solutions.
Speaker: Andy Parsons
We decided five years ago that responsible AI via content transparency wasn’t a feature that could be grafted onto our products… it had to be baked into the tools at their very core.
Highlights a strategic product‑development choice—embedding responsibility at the architecture level rather than as an after‑thought—offering a model for other enterprises.
Set the stage for the later discussion of the C2PA standard and inspired other panelists (e.g., Prativa) to cite how their own products embed provenance, reinforcing the theme of deep integration.
Speaker: Andy Parsons
The C2PA content credentials provide transparent context about a piece of media… an open, cross‑industry standard that anyone can adopt for free.
Introduces a tangible, industry‑wide solution that addresses the earlier call for provable responsibility and emphasizes openness and interoperability.
Led to references about adoption challenges (metadata stripping, consumer awareness) and reinforced the argument that standards—not just principles—are essential for scalable trust.
Speaker: Andy Parsons
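The C2PA standard mentioned above embeds provenance as JUMBF boxes carried in a JPEG's APP11 marker segments, with the manifest store labelled "c2pa". As a minimal sketch of what "provenance baked into the file" means in practice, the snippet below scans a JPEG's marker segments for such a payload. The function name is our own, and this only detects presence; a real verifier (for example, the open-source C2PA SDKs) must parse the JUMBF boxes and validate the cryptographic signatures.

```python
def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Scan JPEG marker segments for an APP11 (0xFFEB) segment that
    appears to carry a C2PA/JUMBF payload. Illustrative only: it does
    not parse JUMBF boxes or validate the manifest's signatures."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # entropy-coded data or malformed stream: stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI marker: end of image
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        payload = jpeg_bytes[i + 4:i + 2 + length]
        # APP11 segments carrying C2PA data contain JUMBF box labels.
        if marker == 0xEB and (b"c2pa" in payload or b"jumb" in payload):
            return True
        i += 2 + length  # jump to the next marker segment
    return False
```

Because the credential travels inside the file itself, any tool that preserves metadata preserves the provenance; the adoption challenge raised later (platforms stripping metadata) is exactly the case where this check would start returning `False`.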
One size doesn’t fit all. It’s a ‘bring‑your‑own‑AI’ scenario in every function.
Challenges the notion of a single, monolithic governance framework, emphasizing the need for flexible, context‑specific approaches across diverse business units.
Shifted the tone from a uniform solution to a discussion about modularity and the importance of tailoring guardrails, prompting other speakers to talk about industry‑specific templates and the role of ecosystem bodies.
Speaker: Amol Deshpande
Firefly embeds a ‘nutrition‑label’ style provenance; every output carries that nutrition label, guaranteeing compliance and accountability.
Provides a concrete product example that translates abstract principles (accountability, transparency) into a user‑facing feature, making the concept tangible.
Deepened the practical dimension of the conversation, leading to further examples (Acrobat Assistant) and reinforcing the idea that responsibility can be built into the user experience.
Speaker: Prativa Mohapatra
We use generative AI to watch the performance of our generative AI virtual assistant, balancing the safety knob with customer convenience.
Introduces the meta‑use of AI for self‑monitoring, illustrating a sophisticated, real‑world guardrail mechanism that addresses both safety and user experience.
Added a layer of technical complexity, prompting the panel to consider AI‑in‑AI oversight as part of responsible AI strategies and influencing the later discussion on regulation as a catalyst rather than a constraint.
Speaker: Dr. Satya Ramaswamy
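The "AI watching AI" pattern with a tunable safety knob can be sketched generically. The class, names, and threshold below are hypothetical illustrations of the idea, not Air India's implementation: a second model scores each assistant reply, and a single threshold trades strictness (more human escalations) against customer convenience (fewer holds).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SafetyMonitor:
    """Illustrative guardrail: a monitoring model scores each assistant
    reply, and the 'safety knob' decides when a human must step in."""
    score_reply: Callable[[str], float]  # 0.0 (unsafe) .. 1.0 (safe); stands in for a monitoring model
    safety_knob: float = 0.8             # higher = stricter, more human-in-the-loop escalations

    def route(self, reply: str) -> str:
        # Replies scoring below the knob are held for human review
        # instead of being delivered to the customer.
        if self.score_reply(reply) >= self.safety_knob:
            return "deliver"
        return "escalate_to_human"
```

Turning the knob up or down is exactly the balance Dr. Ramaswamy describes: in a safety-critical, regulated domain like aviation the threshold sits high, accepting more escalations in exchange for fewer unsafe replies.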
We built a small language model that can explain to a customer why a transaction was declined, giving transparent, real‑time reasons for fraud‑related decisions.
Shows a concrete, consumer‑centric implementation of transparency in a high‑stakes domain (payments), extending the provenance concept beyond media to financial services.
Expanded the conversation to the payments ecosystem, illustrating how the same principles can be operationalized across sectors and reinforcing the need for explainability in AI decisions.
Speaker: Vishal Anand Kanwati
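NPCI's system uses a small language model, whose internals are not public. As a hedged sketch of the underlying explainability idea, the snippet below maps structured decline codes to plain-language reasons, the kind of grounded input a language model could then verbalize per user language and context. The codes, wording, and function name are hypothetical.

```python
# Hypothetical decline codes and templates: NPCI's actual model, codes,
# and wording are not public; this only illustrates the pattern of
# turning an opaque decline into a transparent, customer-facing reason.
DECLINE_REASONS = {
    "51": "Your account did not have sufficient balance for this payment.",
    "U30": "Your bank could not be reached; please retry in a few minutes.",
    "FRD": "This payment was held by an automated fraud check on an unusual pattern.",
}

def explain_decline(code: str) -> str:
    """Return a customer-facing reason for a declined transaction.
    Grounding the explanation in a fixed code-to-reason table keeps a
    downstream language model from inventing reasons it cannot verify."""
    return DECLINE_REASONS.get(
        code, "The payment could not be completed. No further detail is available."
    )
```

The design choice worth noting is the fallback: when no verified reason exists, the system says so rather than guessing, which is the transparency property the panel treats as non-negotiable for trust in payments.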
Regulation is a catalyst for good practices; compliance does not constrain Indian innovation.
Reframes regulation from a restrictive force to an enabling one, addressing a common fear among enterprises and aligning with India’s rapid digital growth.
Shifted the tone of the regulatory debate, influencing later remarks (Vishal, Amol) that while regulation is inevitable, it can coexist with innovation and industry‑led standards.
Speaker: Dr. Satya Ramaswamy
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the dialogue from abstract ethics to concrete, actionable frameworks. Andy Parsons’ framing of a 2026 deadline and the demand for provable responsibility set a sense of urgency and introduced the need for measurable standards, which anchored the rest of the conversation. Subsequent comments—especially the introduction of the C2PA open standard, Amol’s ‘bring‑your‑own‑AI’ flexibility, Prativa’s product‑level provenance labels, Satya’s meta‑AI monitoring, and Vishal’s transaction‑explanation model—provided tangible examples that illustrated how principles can be embedded across industries. These insights prompted participants to explore implementation challenges, the role of regulation as an enabler, and the importance of industry collaboration. Collectively, the highlighted comments shaped the session into a forward‑looking, solution‑oriented exchange, emphasizing that responsible AI is not merely a compliance checkbox but a strategic, technically grounded capability that can be scaled across India’s diverse enterprise landscape.

Follow-up Questions
How can organizations demonstrably prove that they are using AI responsibly, and what metrics or evidence are needed to show compliance?
Establishing verifiable proof of responsible AI is essential for building trust, meeting regulatory requirements, and differentiating compliant enterprises.
Speaker: Andy Parsons
What are the implementation costs and ongoing operational expenses associated with deploying responsible AI practices?
Understanding financial implications helps businesses budget, justify investments, and assess ROI for responsible AI initiatives.
Speaker: Andy Parsons
How can consumer awareness of content provenance symbols be increased, and what UI/UX designs are most effective for displaying these indicators?
Widespread user recognition of provenance marks is critical for the success of transparency standards and for empowering end‑users to make informed choices.
Speaker: Andy Parsons
What strategies can address the uneven adoption of provenance standards, especially when platforms (e.g., social media) strip metadata?
Ensuring consistent preservation of provenance data across all distribution channels is necessary to maintain the integrity of the transparency ecosystem.
Speaker: Andy Parsons
How can a viable business case be built for provenance and transparency technologies when they do not directly generate revenue?
Demonstrating economic value or indirect benefits (e.g., risk reduction, brand trust) is needed to encourage enterprise investment in responsible AI tools.
Speaker: Andy Parsons
What best‑practice frameworks enable a “bring‑your‑own‑AI” approach that remains scalable, safe, and governed by effective guardrails across diverse functions?
Enterprises need reusable, adaptable models for integrating third‑party AI while maintaining compliance and risk controls.
Speaker: Amol Deshpande
What training and skill‑development programs are most effective for upskilling the entire value chain on responsible AI principles?
People are a critical stakeholder; systematic education ensures consistent application of responsible AI across an organization.
Speaker: Amol Deshpande
How can explainable AI be integrated into payment‑transaction systems so that users receive clear, understandable reasons for declines or fraud flags?
Transparency in financial decisions builds trust and reduces customer friction, especially in high‑volume digital payment ecosystems.
Speaker: Vishal Anand Kanwati
What methods can balance false‑positive fraud detection with fairness, ensuring legitimate transactions are not unduly blocked while still catching fraud?
Optimizing detection accuracy is vital for user experience, financial inclusion, and regulatory compliance in payment systems.
Speaker: Vishal Anand Kanwati
How can responsible‑AI frameworks and tooling be made affordable and accessible for MSMEs that lack large legal or compliance teams?
Ensuring smaller businesses can adopt responsible AI prevents a widening gap between large enterprises and the broader market.
Speaker: Prativa Mohapatra
What mechanisms can industry bodies use to disseminate responsible‑AI standards and templates effectively across varied sectors and company sizes?
Coordinated industry‑wide adoption accelerates standardization and reduces duplication of effort.
Speaker: Amol Deshpande
How can global AI regulatory frameworks (EU AI Act, UNESCO, OECD) be harmonized with India’s emerging policies to create a coherent compliance landscape?
Alignment reduces regulatory friction for multinational operations and ensures Indian innovations remain globally competitive.
Speaker: Dr. Satya Ramaswamy
What metrics and evaluation methods should be used to assess the effectiveness of AI governance frameworks and provenance standards?
Quantitative assessment is needed to track progress, demonstrate impact, and guide continuous improvement.
Speaker: General (multiple participants implied)
What is the optimal balance between industry‑led self‑regulation and formal regulatory intervention for AI, especially in high‑scale ecosystems like payments?
Clarifying the roles of self‑governance versus law helps shape policy that protects users while fostering innovation.
Speaker: Vishal Anand Kanwati
How can a light‑touch regulatory approach be designed that still ensures safety and fairness without stifling AI innovation?
Finding the right regulatory intensity is crucial for encouraging rapid AI adoption while safeguarding public interest.
Speaker: Sarika Guliani
What design principles ensure effective human‑in‑the‑loop mechanisms for AI systems operating at airline‑scale, balancing safety with customer convenience?
Human oversight remains essential in safety‑critical domains; research is needed on scalable, real‑time intervention models.
Speaker: Dr. Satya Ramaswamy
How does the presence of provenance symbols affect user trust and behavior across different cultural and linguistic contexts in India?
India’s diverse user base may respond differently; studying impact informs culturally appropriate rollout strategies.
Speaker: Andy Parsons (implied)
What are the technical challenges and solutions for ensuring interoperability of provenance standards across hardware manufacturers (cameras, smartphones) and software platforms?
Cross‑industry compatibility is key for a universal trust layer; research can identify standards gaps and integration pathways.
Speaker: Andy Parsons

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.