Responsible AI in India Leadership Ethics & Global Impact

20 Feb 2026 18:00h - 19:00h

Responsible AI in India Leadership Ethics & Global Impact

Session at a glance

Summary

This discussion, titled “Responsible AI from Principles to Practice in Corporate India,” examined how enterprises can translate responsible AI principles into practical implementation strategies. The session was presented by Adobe in association with FICCI and featured industry leaders from Air India, NPCI, RPG Group, and Adobe.


Andy Parsons from Adobe opened by emphasizing that 2026 will mark a pivotal shift where responsible AI becomes both a regulatory requirement and business opportunity, particularly with the EU AI Act and new regulations in California and India taking effect. He highlighted Adobe’s Content Authenticity Initiative and the C2PA standard as examples of building transparency into AI systems from the ground up, comparing it to nutrition labels that provide essential information about digital content origins and creation methods.


The panel discussion revealed diverse approaches to implementing responsible AI across industries. Amol Deshpande from RPG Group emphasized the need for orchestrated responsibility across all five layers of AI deployment, advocating for a “bring your own AI” approach with proper guardrails rather than one-size-fits-all solutions. Dr. Satya Ramaswamy from Air India shared their experience launching the airline industry’s first generative AI virtual assistant, which now handles 40,000 customer queries daily while maintaining strict safety protocols and human oversight capabilities.


Pratibha Mohapatra from Adobe introduced the “ART” framework – Accountability, Responsibility, and Transparency – as a practical approach for organizations to implement responsible AI governance. She stressed that large enterprises have a responsibility to create frameworks that smaller companies can adopt, preventing responsible AI from becoming a luxury only available to large corporations. Vishal Kanwati from NPCI discussed balancing fraud detection accuracy with minimizing false positives in payment systems, emphasizing the importance of transparency in explaining AI decisions to customers.


The discussion concluded that while industry-led governance is valuable, regulatory intervention is inevitable and necessary to ensure responsible AI deployment at scale across India’s diverse digital ecosystem.


Keypoints

Major Discussion Points:

Transition from AI principles to provable practice: The discussion emphasized moving beyond theoretical commitments to responsible AI toward demonstrable, measurable implementation. Andy Parsons highlighted that 2026 will be pivotal as regulatory frameworks like the EU AI Act take effect, making compliance both a legal requirement and business opportunity.


Industry-specific implementation challenges and solutions: Panelists shared concrete examples of responsible AI deployment across different sectors – Air India’s generative AI virtual assistant handling 40,000 daily queries, NPCI’s fraud detection systems balancing accuracy with false positives, and Adobe’s content authenticity initiatives with embedded transparency features.


Governance frameworks and organizational structure: The conversation explored how large enterprises can avoid both over-centralized compliance and fragmented business unit approaches. Key themes included the “bring your own AI” concept, the need for cross-functional teams (legal, compliance, technical), and the importance of human-in-the-loop systems.


Democratization vs. enterprise luxury concern: Panelists discussed whether responsible AI practices risk becoming accessible only to large enterprises, leaving MSMEs behind. The conversation highlighted the collective responsibility of technology creators and industry leaders to develop accessible frameworks and standards.


Regulatory landscape and industry self-governance: The discussion concluded with debate over whether industry-led governance is sufficient or if regulatory intervention is inevitable, with consensus that regulations are necessary given AI’s potential societal impact, especially in critical sectors like payments and aviation.


Overall Purpose:

The discussion aimed to provide practical guidance for translating responsible AI principles into actionable enterprise strategies, moving beyond theoretical frameworks to real-world implementation across different industries and organizational scales.


Overall Tone:

The tone was professional and pragmatic throughout, with speakers sharing concrete examples and practical insights rather than abstract concepts. The conversation maintained an optimistic yet realistic perspective, acknowledging both the opportunities and challenges of responsible AI implementation. There was a collaborative spirit among panelists, with each building upon others’ insights while sharing industry-specific experiences. The tone remained consistently forward-looking, emphasizing collective responsibility and the urgency of establishing proper frameworks before regulatory deadlines.


Speakers

Speakers from the provided list:


Announcer – Session presenter/moderator


Andy Parsons – Global Head for Content Authenticity at Adobe, runs the Content Authenticity Initiative at Adobe


Shantari Malaya – Editor at Economic Times, panel moderator


Dr. Satya Ramaswamy – Chief Digital and Technology Officer at Air India Limited


Vishal Anand Kanwati – Chief Technology Officer, National Payments Corporation of India (NPCI)


Amol Deshpande – Group Chief Digital Officer and Head of Innovation at RPG Group


Prativa Mohapatra – Vice President and Managing Director of Adobe India


Sarika Guliani – Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI


Additional speakers:


None – all speakers mentioned in the transcript are included in the provided speakers names list.


Full session report

This comprehensive discussion, titled “Responsible AI from Principles to Practice in Corporate India,” brought together industry leaders from Adobe, Air India, NPCI, RPG Group, and FICCI to examine how enterprises can translate theoretical responsible AI principles into practical implementation strategies. The session addressed the critical transition from aspirational AI governance to measurable, compliant systems as regulatory frameworks take effect globally.


The Paradigm Shift: From Principles to Provable Practice

Andy Parsons, Adobe’s Global Head for Content Authenticity, opened by establishing a fundamental reframing of the responsible AI conversation. He emphasized that 2026 will mark a pivotal transition where responsible AI evolves from voluntary corporate commitments to mandatory regulatory requirements, driven by the EU AI Act enforcement in August 2026, California’s first US legislation, and India’s emerging SGI rules. This shift transforms the central question from “should we be responsible with AI?” to “can your systems actually prove that you have been responsible with AI?”


Parsons argued that organizations can accelerate their AI adoption by implementing responsible practices proactively, positioning responsibility as an enabler of innovation rather than a constraint.


Content Authenticity as a Model for Implementation

Adobe’s Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity (C2PA) served as a concrete case study. Parsons introduced the concept of “nutrition labels” for digital content—referencing remarks made by Prime Minister Modi—where consumers have the right to understand how digital content was created. The C2PA standard, developed with Microsoft, BBC, OpenAI, Sony, and other partners, provides transparent context about media creation, including which AI models were used and whether content is AI-generated.


However, Parsons acknowledged implementation challenges, including uneven platform adoption, limited consumer awareness, and social media platforms stripping transparency metadata. The business case for provenance investments remains challenging, as benefits to democratic discourse don’t always translate directly to revenue.


Industry-Specific Implementation Strategies

Aviation: Safety-Critical AI with Human Oversight

Dr. Satya Ramaswamy from Air India shared their experience launching the airline industry’s first generative AI virtual assistant in May 2023. The system handles approximately 40,000 customer queries daily from 100,000+ daily customers across 300 aircraft, processing 13.5 million queries to date at one-hundredth the cost of traditional contact centers, with a 97% autonomous resolution rate.


Air India continuously monitors the AI system using additional AI tools while maintaining customer feedback mechanisms. The aviation industry’s existing regulatory framework provided a natural foundation for responsible AI, with embedded human-in-the-loop control concepts. Their international operations require compliance with multiple regulatory frameworks, demonstrating that regulatory compliance need not constrain innovation.


Financial Infrastructure: Prioritizing User Experience

Vishal Anand Kanwati from NPCI provided insights into implementing AI in India’s critical payment infrastructure. NPCI’s approach prioritizes minimizing false positives—genuine transactions incorrectly flagged as fraudulent—recognizing that declining legitimate transactions severely damages user trust. They use small language models to explain to customers why transactions were declined, providing transparency for AI decisions.


NPCI implements safeguards preventing systemic failures, such as limiting the percentage of transactions that can be declined. As Kanwati noted, AI systems can “go berserk” and potentially decline all UPI transactions, requiring circuit breakers to prevent cascading failures across the payment ecosystem.


Conglomerate Governance: Orchestrated Responsibility

Amol Deshpande from RPG Group addressed implementing responsible AI across a diverse conglomerate spanning infrastructure, healthcare, IT, agriculture, and manufacturing. He introduced the “bring your own AI” concept, recognizing that different business functions require different AI solutions while maintaining consistent safety standards.


Deshpande emphasized that awareness is the first step toward responsibility, requiring significant investment in building AI literacy across the value chain. RPG’s approach provides scalable, safe environments with appropriate guardrails while allowing business units operational agility.


The ART Framework and Product-Embedded Responsibility

Prativa Mohapatra from Adobe India introduced the “ART” framework—Accountability, Responsibility, and Transparency—as a practical governance approach. This framework is embedded throughout Adobe’s product development, ensuring responsible AI principles are built into products rather than added afterward.


Adobe’s Firefly generative AI embeds content credentials directly into generated content, enabling enterprises to use AI-generated materials confidently without intellectual property concerns. Adobe’s Acrobat Assistant maintains trust principles that have made PDF reliable, addressing concerns raised by India’s Supreme Court about legal documents containing references to non-existent cases through careful source validation.


Democratization Challenges and Collective Responsibility

A critical theme was the risk that responsible AI practices might become accessible only to large enterprises, creating disadvantages for MSMEs. While large companies can shift resources to accommodate AI compliance requirements, smaller organizations lack this flexibility.


Industry bodies like FICCI play crucial roles in democratization, serving as conduits for knowledge transfer and framework dissemination. The collaborative approach mirrors India’s UPI success, which required cooperation across multiple stakeholders to create an open, interoperable system.


Regulatory Landscape and Industry Response

The discussion revealed consensus that regulatory frameworks are inevitable and necessary for managing AI risks at scale. Kanwati’s observation about AI systems potentially declining all UPI transactions illustrated why regulatory safeguards are essential for critical infrastructure.


However, panelists viewed regulation as a catalyst for good practices rather than innovation constraints. Dr. Ramaswamy’s experience with international aviation regulations demonstrated that compliance with multiple frameworks can coexist with innovation and competitive advantage through proactive engagement rather than reactive compliance.


Open Standards and Interoperability

A recurring theme was the importance of open standards for responsible AI implementation. Parsons drew parallels between C2PA and India’s UPI infrastructure, both succeeding through collaborative development and open access. This philosophy extends to governance frameworks and best practices, enabling organizations to build upon shared foundations rather than isolated solutions.


Implementation Challenges and Practical Solutions

Despite consensus on principles, significant practical challenges remain. Consumer awareness of content authenticity standards is low, user interfaces for transparency features are evolving, and business cases for responsible AI investments can be difficult to justify.


The panelists proposed practical solutions: starting with lower accuracy but minimal false positives (NPCI), using AI to monitor AI systems (Air India), and creating industry-specific templates rather than universal solutions (RPG). These approaches acknowledge that responsible AI implementation must be pragmatic and context-sensitive.


Future Outlook

The session concluded with FICCI’s commitment to continue translating discussions into actionable frameworks. Sarika Guliani emphasized that responsibility is not merely compliance but a commitment to developing technology with shared human values.


The transition from principles to practice represents both challenge and opportunity for Corporate India. Organizations implementing responsible AI frameworks proactively will be better positioned for 2026’s regulatory environment, while those delaying may face competitive disadvantages. The discussion provided a roadmap emphasizing practical implementation over theoretical commitments and collective responsibility over individual compliance, ensuring India’s AI development serves broader societal goals while maintaining global competitive advantage.


Session transcript

Announcer

Welcome to this session titled, Responsible AI from Principles to Practice in Corporate India, presented by Adobe in association with FICCI. India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and productivity. But the real differentiator, is it about how quickly do we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite a guest. Andy Parsons, our Global Head for Content Authenticity at Adobe.

Andy, over to you.

Andy Parsons

Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.

I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for innovation. I’m going to start with a simple observation. I’m going to start with a simple observation. I’m going to start with a simple observation. I’m going to start with a simple observation. of you in this room. This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.

And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week and certainly will continue to change this year, from should we be responsible with AI? I think that debate is well settled at this point, but can your systems actually prove that you have been responsible with AI, and how do you go about doing that? And for all of us, what does it cost in terms of implementation and day -to -day usage? So we want to position responsibility, AI today as a leadership and operating must and discipline rather than a regulatory obligation, although in 2026, I think it will be both.

So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across a cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Pratibha in a moment.

But it’s made the trust problem absolutely impossible to address. Ignore. So I think every enterprise in this room now produces or consumes AI. generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that.

Hundreds of millions of people consuming digital content every day. In this environment, synthetic content and AI -generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to provide some leadership. I’m very privileged to provide some leadership in the C2PA, which hopefully many of you have heard of. have heard about this week, which is the Coalition for Content Provenance and Authenticity.

And our sort of piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing totally free. So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, digital experience products, but had to be baked into the tools kind of at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others.

And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have the C2PA content credentials. I have encountered a content credential, and it will provide transparent context about a piece of media, whether it’s a video, audio. or image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross -industry coalition that includes all the companies I mentioned, meta camera manufacturers like Sony, Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others.

And the goal is for an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company. It should be standards -based. It should not be proprietary, but available to everyone. And this shared philosophy, I think, is especially important here in India. And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website. So what are the principles that the Content Authenticity Initiative and the CTPA reflect? Transparency, provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used.

Simple ideas like knowing that a photograph is actually a photograph and not generated. These are simple ideas. I’d say we’re well overdue having this ability, but we need it now more than ever. Accountability. You can trace those AI models, understand what was used, how it was used, and thereby understand the prominence if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children. No one’s going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.

Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India. We can apply the same kind of provenance at the same zero cost. as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI transparency and responsible AI will present you, and I’m sure you’ll hear more about that with our esteemed panel. But adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here. We’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it.

Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are telephones, there are phones, cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I’ve often said that doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money. I’m sure that you will see it. I’m sure that you will see it. I’m sure that you will see it. But it is critically important. And now we’re seeing that change as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.

What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. Responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference that we’re here to talk about today, the difference between principle and practice. I think you need cross -industry infrastructure. Techniques you use for responsible AI should be interoperable, open, and standardized. And I think you have a long track record here in India of mobilizing things like UPI payment infrastructure that not a single bank could do or a single government agency, but required massive scale cooperation for openness, standards, and most importantly, interoperability.

And last, enterprises. Like many of yours in this room, I’m sure you’ve all heard the phrase, that are willing and excited to go first, that really look at transparency and AI responsibility as an opportunity, not a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again, and in 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things, and that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries because we have representatives from all of them on our excellent panel today.

So we’re fortunate to have leaders from. Air India, PCI, RPG Group, and Adobe. each of whom is navigating and translating the sort of remarks that I’ve made in different ways for different outcomes and providing, I would say, exemplary ways to find their pathway through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantari Malaya, editor at the Economic Times, who covers these infrastructure and societal sort of breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me. And let’s get on to our panel. Shantari.

Shantari Malaya

Thank you so much, Andy. That was fantastic. It set the context very rightly for the next discussion coming up. So a warm welcome once again from my end. My name is Shantari Malaya. I’m editor at Economic Times. Welcoming you all to the panel. Right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI. very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI, industry, infrastructure, policy perspectives, it really becomes important to know what some premium leaders in the country are thinking about this. So this dialogue will examine some parts of responsible AI and how it’s really going to shape, reshape enterprise strategy.

So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanwati, Chief Technology Officer, National Payments Corporation of India, NPCI. We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe at India. This is a fantastic lineup and we shall get some very sharp insights from our panelists over the next half hour or so. So at the very outset, if you really see building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether it’s fairness, accountability, transparency, privacy and inclusivity, is really going to actually realistically be translated into enterprise strategy frameworks and how we are going to go about it.

Right. So Amol, let me call you into the discussion. Warm welcome. I’m keeping a tight watch on the timer right behind us. As we know, this is a summit of scale and we really need to help the organizers to clock good time. So Amol, very quickly, as I invite you into this discussion, you represent enterprises at, you know, while you are participating. As part of the RPG group, you also represent enterprises that are deploying AI at scale. and many more that are just finding their footing as well. So there’s a huge spectrum and diversity in the kind of organizations that you are representing. Two things for you very quickly. One is in large multiple, you know, businesses such as the RPG group, how are you really preventing responsible AI from really becoming something like a just mere centralized compliance exercise or something that’s on the other flip side becoming a fragmented business unit wise, you know, checklist.

So there are two risks that can happen, right? In a group, in a conglomerate, large scale or decentralized. So how are you really looking at the balance here? And how do you really see your role in an industry body as well? So all yours.

Amol Deshpande

Thank you, Shantiri. I’m very happy to be here. Thank you for having me with the, you know, esteemed panelists here. You ask a very pertinent question. You know, it’s to be or not to be, kind of a scenario when it comes to AI, but that not to be is not really a choice. and it did mention about the responsible AI and I would take a little stab at peeling and looking at where the responsible AI comes from when it comes to industries. It comes across all those five layers of the AI when we are looking at it. When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using for, the responsibility needs to be there at every layer.

It’s not one or the other. It has to be an orchestration of all the things. So far, AI in its very nascent forms had been a thing of center of excellences, trying use cases and seeing what it is happening, but now it has come to a scale. And when it comes to enterprises and manufacturing enterprises, which we have significantly higher share in terms of as a consumer of AI technologies, there is a very clear -cut view on how it is to be done. So you need to provide the playground for the enterprise. It has to operate function with agility. The other part is about people. The people is a very, very important stakeholder in the whole thing.

We are moving from generative AI to AI ML, more complex scenarios and agentic AI. So people is a very, very important aspect of it. The choice still remains with human sense. The awareness is important and enterprises like us spend a significant amount of time and effort in building that skill sets amongst the value chain of all the people who are doing it. Last but not least is the process and governance part which comes with it. It’s more of a guiding principles which need to be given so that it gives an opportunity. It’s more about, you know, if one can say it will be more of a bring your own AI kind of a scenario in every function.

You cannot provide one solution. One size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment with protected with guardrails is a key thing for us. Orchestration and getting a scale. That’s something which is there. But those templates are being exercised, practiced within the enterprises, practice it in a very diverse group like RPC at ourselves. And then that can be deployed at multiple levels.

Shantari Malaya

Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me quickly bring in Pratibha here. Welcome, Pratibha. So Adobe is consistently really positioned responsible as a product philosophy and a trust commitment. You may just have to switch that on. So Adobe has constantly spoken about responsible AI as a very fairly large commitment. So how do you really look at all these principles manifesting or panning out in terms of operationalizing it among all your product teams? And sending it out. as a strong positioning internally and at the same time as someone who’s led a lot of industry conversations, where do you really see enterprises struggling to get these things right?

So all yours here.

Prativa Mohapatra

Okay, thank you. So I think Andy set the context and since we are here not to learn about the principles but the practices, I think everybody should go back with certain practices. So the first practice of AI governance which we practice is art, which is accountability, responsibility and transparency. So if every person goes back to their organizations and talks about art, which is our philosophy, that’s practicing philosophy number one. And we actually have been doing this for our own products for a while now. And of course we are in the business of content for a very, very long time and now the same content is becoming available. It’s becoming the currency which everybody’s debating. So our principles have been there for a while, but how it is actualized.

And let me say how it’s translated into our products. And by the way, it’s in our products. It’s in our methodologies. Every new product that we have goes through a very strong, secure methodology with hundreds of steps inside it. So there’s principles embedded into how we create stuff. But a couple of examples. Firefly, which is our Gen AI tool, actually embeds what Andy said, those content traditions. Nutritions. It is anything that you have being generated out of this product will have that nutrition level. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law. You will not be getting into any liability issues. Because how you do it is by feeding.

Because AI is all about the input and output. So the input has to be something which will not land you in the trouble. You cannot take somebody else’s data. So here it is everything licensed. So it goes into the models, and what you create, the output which comes, then you have to test that output. With that output, will we be accountable? Will we be responsible in showing the transparency of how this was created? So I think that loop has to be created in using any AI. Firefly is an example. Let me talk about Acrobat, which everybody has. I’m sure 100 % of you have PDF files on your phones or on your machines. So Acrobat has this new feature called Acrobat Assistant.

It is agentic, but we have so many chatbots in the market. But when you come to an assistant like Acrobat Assistant, it is following the same principles that PDF was used to be created. So everybody is confident when you’re using PDF. So today, you would have read in the papers recently, the Supreme Court was very worried. that there were certain lawyers who had the petitions which had reference to cases which do not exist or it had certain laws stated which are fictitious. So imagine somebody’s created certain content using some sources which were not authentic. Now, if you use acrobat kind of products for that, you feed the data or you feed files from your own machine.

So you’re confident that what comes out of it, you can go back. So wherever there is this usage of high -stake output, enterprise -grade, you have to look at this input -output process and follow the philosophies within it. And I think for every enterprise who is doing that today, they really have to… Already Amal talked about the people process technology. I’m sure every organization today has a legal team, has a compliance team. But these teams have to re -opt to talk about AI compliance. Enterprises… they do business strategy, they have ethical strategies and then they have regulatory compliance. All the three anything that you do in AI ensure that you tick all the three. If you miss any one you might not be ready for the future.

So that’s how I see it.

Shantari Malaya

Absolutely. So I guess the thread of most of what AI now entails is about are we not moving fast enough or are we moving so fast that we’re not really able to own the operational consequences of what we’re trying to do as well, setting out to do as well. So great point on that Prithiva. I’ll circle back to you time permitting. Let’s see how best we can get back. Dr. Satya calling you in here. So aviation volumes, landscape, scale, I mean you name it, it’s all there. So how are we really looking at balancing AI driven innovation really? Where you’re looking at regulation, you’re looking at accountability, you’re looking at operational efficiency. At the same time, you cannot really compromise on user and customer experience.

How do these things really fall in place in terms of vision and metrics?

Dr. Satya Ramaswamy

Thank you, Shantiri. Since the audience is international, real quick introduction about Air India. Air India is India’s national flag carrier. We operate about 300 aircraft, and we carry about more than 100 ,000 customers a day. And we have a few hundred airplanes on order. So once they are delivered, we will be one of the biggest airlines in the world on the size of one of the large three American carriers. So we are building it up to an airline of scale, and it brings about very interesting challenges that we talked about. So let me illustrate the way we handle it with one of our own examples in generative AI. So in May of 2023, we launched the global airline industry’s very first generative AI virtual assistant out of India.

So it was a global first in the whole airline industry. Today it handles. It has handled about 13 .5 million queries so far from customers. about 40 ,000 queries a day and it operates at a cost which is 100 per query of a contact center and if you look at the customer preferences over a period of last two and a half years we have been operating this facing all the challenges you mentioned so from a customer preference perspective 50 % of the contact volume goes to the contact center they want to talk to a human agent the remaining 50 % of the contact volume comes to A.G out of which it handles 97 % of the queries autonomously only 3 % are escalated further to the agent so pretty high success rate and we faced this challenge from day one we started working on it in November 2022 when we were the whole Azure OpenA services are available we started working on it and the whole approach to responsibility safety has evolved over a period of time so if you dial the safety knob too much then it is an inconvenience to the customer we practically cannot answer any question because the customers are always changing the way they ask certain thing and we have to be very flexible and clearly Generative AI takes us a large step towards that.

At the same time we don’t want any jailbreak to happen we don’t want problem injection to happen we don’t want any inappropriate thing to happen so we are watching the whole performance of the Virtual Assistant A.G as we call it all the time. So we use in fact Generative AI to watch the performance of the Generative AI chatbot and also we have given the voice to the customer so at the end of the day when we send a response we also ask the customer did it answer your question and also allow them to give their reactions is it appropriate, inappropriate and thankfully over the last 20 years it has not answered one single question in an inappropriate fashion because we have embedded all the safety procedures all deep into the way that we handle it but now we are as the technologies are maturing for example we now have interesting technologies in terms of the prompt firewalls where we can centralize all these controls and obviously work with great partners like Adobe who do their diligence and the way that they have deployed some of these technologies, giving full indemnity to us in the event of a problem.

That gives a lot of confidence in the way that we manage the risk. So it’s about managing the risk of something that is not within the bounds happening versus the convenience of the customer and we handle it in a variety of ways like I talked about just now.

Shantari Malaya

Excellent. And given the kind of scale you’re operating at, I think every day is a new day.

Dr. Satya Ramaswamy

Yes, it is. We face challenges. There is new, brand new every day.

Shantari Malaya

Absolutely well stated. Thank you, Dr. Satya. Vishal, may I kind of call you into the discussion? We’re waiting to hear about you. Again, what do I say? NPCI, you know, largest, you know, digital payments, infrastructure platforms. you kind of call the shots in terms of taking some of those calls, you know, for want of a better coinage, you know, in terms of how the payments systems in this country move. So two quick questions here or rather sort of, you know, phrase it into one so that, you know, we can get a comprehensive view from you. How are you really looking at AI in terms of really, you know, being inclusive, you know, in trying to ensure fairness when it comes to two parts?

One is when you’re looking at how India can play an important part in creating a responsible AI by design for a digital, national digital infrastructure platform such as yours. And B, how are we really looking at, given the volume, scale, size, fraud also becomes an unfortunate part of the entire, you know, discussion. So how are we really looking at AI to be? Fair at the same time proactive and detective when it comes to looking at fraud. So how what are the aspects that you look at? keenly here

Vishal Anand Kanwati

I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high. I think that was the first principles on which we started. But over a period of time, once we had more data, once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So I think this was the fundamental principles when, you know, I mean, and then once we started with this success, we were able to understand the customers better, you know, their patterns better. And that gave us a lot of insights into, you know, fine tuning the models and taking it forward.

Absolutely. So coming to the first question, you know, that you asked, I think, you know, I think there are obviously the governance principles are core to it. But I think I would like to call out two of the things that is there. One is a transparency. If a customer has a transaction that is failed, I think you should know why it has been failed. and today we build a small language model where you can go and actually chat and say what happened to this transaction why is it declined and you know even if it is declined due to a fraudulent transactions uh you know due to a suspicious activity we can actually tell him today saying that you know this is where we feel you know you normally do you know don’t send this transaction or you don’t scan a qr ever this is the first time you’re doing so this is the reason why we have sort of declined so this level of transparency and ensuring there’s a person you know obviously we can’t you know have the army of people sitting and answering these questions but building systems and answering those questions is very very important and i think we have a beautiful framework rba has also given the framework from a responsible ai meaty document is fairly comprehensive so i think all the principles have to be adopted there is absolutely no choice for us um and and i think i i don’t see it as a challenge at all because in our experience it’s been very very helpful to obviously ensure the trust in the payment system is not compromised

Shantari Malaya

absolutely and also the fact that you know as you said given the scale that you’re operating in I’m also itching to ask you something but maybe I’ll pick your brains offline in terms of the human in the loop there but yeah that’s a discussion for another time so Pratibha curious to know here so responsible AI while you know in letter and spirit remains there do you think it kind of remains or rather getting relegated to the risk of becoming something of an enterprise large enterprise luxury is it able to cut across and come down the line to you know aspiring businesses growing businesses MSMEs is it able to cut through the noise what’s the responsibility of industry leaders and the larger enterprises in kind of harmonizing the framework that can define responsible AI How would you look at this?

Is it getting risked?

Prativa Mohapatra

Absolutely, yes. I think we stand in that time when on the creators of the AI technology, the big guys versus the small guys, that divide might just become very stark. Coming down to the users of the AI, which is the big enterprises and the MSMEs who are in a big rush to make profit, do something, so that divide can happen. Hence, it is responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It’s a tongue twister, but I think the responsibility is very, very big right now. Again, that Adobe’s example, while the entire AI big bang started happening after November 2022, so early 2023 -2024, but I think our models were there, or our content authentication initiative was from 2019.

So… I think that large enterprises who create technologies absolutely are responsible. And those frameworks now being taken up by many more is again an absolutely act of responsibility to back to the business. And so the creators of these technologies have to come together and keep on creating this method and methodology for others to adopt. Now the users of these enterprise AI -grade technologies. It’s very hard. I mean, I think 10 years back we had digital transformation. Now we are having AI transformation. So the big companies have to quickly create a new org structure, have to create the legal teams, which, by the way, had to just mull over the digital guidelines of various continents, countries, now have to go through the AI guidelines of countries.

So you have to infuse more people into that. Legal teams. So small organizations cannot do that. So the people, process, technology changes that is required to adopt this, big guys can maneuver, shift people, oh, we will take out people from here, put there. The MSMEs don’t have that luxury. So I guess it is creators have to create frameworks so the right technology is created. The users, the big guys, have to quickly tell the methodology. And then the other stakeholders, like the service providers, also quickly have to go from, like I come from this industry where we had custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?

Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since AI, as we have been hearing words like it’s a civilization change, it is a… similar to electricity and steam, it will change everything. So because of the impact… at society level to each one of us, the governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem.

Shantari Malaya

Absolutely. Very rightly stated about the fact that there is a larger collective responsibility from the bigger players in trying to define the standards. I think it’s very critical. Amol, if I may ask you, like Pratibha said that there is an accountability. MSMEs are growing and there is also policy that supports the growth at this point of time, right? So in their hurry to really scale and innovate, they are often forgetting what guardrails and consequences they have to face, you know, when it comes to their AI policies, strategies, implementations. So what’s the role of the ecosystem, the industry, industry bodies and the entire ecosystem at large in helping responsible AI moving in letter and spirit?

Amol Deshpande

Shantari, I think the first step towards being responsible towards anything is awareness, right? So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for that. That’s the first thing. Second is comes the action part of it, awareness, action, and then you demonstrate it through your product services or whatever you are trying to create and generate that kind of impact. How does that allocate? I echo the sentiment which Pravita has mentioned here is in terms of big players have to come up with those frameworks. Those frameworks need to get translated and industry partnership is a very key thing here through the industry bodies, right?

Where the learnings have to be disseminated into this. Second is it’s more of a demand and supply kind of a thing. So if the supply is there with the kind of right guardrails and responsible aspects which are there as a part of framework, then naturally the supplier starts aligning to it. For a business like us where we deal from, infrastructure to healthcare. and IT to agriculture, tires. See, it is a very diverse element and there is a different kind of templates which we need to do so. Organizations like us have the responsibilities of creating a framework which will be fair to us as well as to our customers and partners and those learnings can be shared across the value chain where MSMEs may not have access to that kind of an information and that to domain specific.

Mind you, this would change. It’s not like this one guardrails construct will work for everybody, but it would vary from industry to industry, function to function, and that kind of a cascading through the industry bodies like FICCI and others is, I think, very, very critical.

Shantari Malaya

Than you for that, Amol. Dr. Satya, if, you know, as we know now that there are enough global regulations or rules or recommendations that have come in. You have the EU AI Act, you have UNESCO’s recommendations, you have the OECD rules, I mean, the principles, so on and so forth. And India is also kind of inching towards developing its own strategies, policies, and approaches at this point of time. so the real leadership question at this point of time that remains is how are we looking at marrying global best practices with the diversity the scale, the fire in the belly that India has at this point of time we are really gearing to go but how are we really looking at it and besides of course we have a lot of domestic regulations industry wise as well, we have regulators we have even the DPDP Act, we have so many things that have come in how are we going to marry all of this and create harmony

Dr. Satya Ramaswamy

Absolutely, I think taking Air India, we are an international airline so we operate in many countries for example we go to North America US, where the federal aviation regulation is the key regulator, then we go to Europe all places in Europe, then obviously we operate in India where the DGCA is the regulator and they are doing a great job overseeing this industry and likewise in other parts of the world so by nature we are geared to looking at the regulation in all parts of the world in all parts of the world and we are looking at the regulation and we are looking at the regulation and we are looking at the regulation and being in compliance.

And our customers are international, and when we operate in this international geographies, we have to comply with the appropriate regulations. And again, you know, aviation is a very safety -critical industry, right? So what we do has a direct impact on the safety of the customers that we carry. And the notions are well embedded in the industry because of this, because it’s highly regulated. For example, even, you know, many of these planes can practically land themselves, right? I was in a simulator last week for an Airbus A320 and landing that plane in San Francisco airport. And as we were coming in, the plane was set at seven miles from touchdown, and, you know, my trainer pilot gave me the control.

And, you know, so I could run the plane practically on autopilot all the way. At the same time, you know, there is a red button in the joystick, so at any moment I feel that the airplane is not doing the right thing. the autopilot control is not in the right thing and quickly cancel and take our control, right? So that is, this concept is well embedded in the airline industry. So we know that we need to obey the regulations and do the right thing that is safe for the customer, but we also help the human in the loop take control if at any moment we feel the safety is at risk. So bottom line, you know, we comply with all the regulations, and it doesn’t in any way constrain Indian innovation.

For example, again, like I mentioned, we launched the global airline industry’s first -generation virtual agent out of India, and we have not had any challenge with any of the regulations around because we comply with all the regulations and we work with partners who approach it

Shantari Malaya

So, Vishal, taking a thread from what Dr. Satya said, I’ll close with you here. Is industry -led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in some other related industries where they said, you know, it’s very difficult to answer this. But, yeah, self -regulation may be a way forward given the scale that we are operating on. I’d like to know your thoughts here.

Vishal Anand Kanwati

Yeah, I think definitely the regulations are required, especially because AI can go berserk. And, you know, there are, I think, like I gave you the example on the transactions, you know. You know, today all the UPI transactions can get declined, you know. And that’s where we have a check where we say, you know, this is the only percentage that I can decline, even if I have to let go of the other transactions, right. So those safeguards are very much, very much required. And when this has to be across the ecosystem, I think, you know, the regulations are mandatory. And obviously it has to be consulted and we have to work with everyone. But it’s important.

While. All of us realize. it’s a great opportunity the innovation can really scale up I think regulation is one thing that I think we have to really take it as part of the initiatives, embed into our systems and then take it forward otherwise the chances of this having a challenge to us is really high

Shantari Malaya

Fair enough, I think in an economy that is maturing regulatory intervention is an inevitability that we must welcome at some level so great discussion, I think this was fantastic, itching to ask you more but I think we’ll have to call to a close this discussion, thank you so much let’s put our hands together to our esteemed panelists this was really nice may I now invite Ms. Sarika Guliani who is the Senior Director, Head AI Technology in Industry 4 .0, Communications, Mobile Manufacturing and Language Technologies at FICCI Thank you so much Sarika, please

Sarika Guliani

first of all what an insightful session I would say that if I start capturing the thoughts I think 2 minutes and 36 seconds would not justify that one but overall if I talk about it starting with Andy you mentioning about the initiatives what Eduvi was able to take in through and how you are responsibly developing the content. Pratibha talking about the art which is really interesting whether you talk about accountability responsibility and transparency and of course Amol mentioning about all the five layers and how the of course the responsible development of AI and the permutation needs to be done. Dr. Satya no second thoughts to it and that again goes to the NPCI also the kind of work which the national carrier of India is doing it or NPCI is handling it it has to be a balance of the responsible AI and the efficiency and actually the act which can be taken thank you so we left it on the note that what regulation is required while that’s a sentence would require another session on this because they would have a people who will talk about a light touch regulation versus a balanced regulation to something I’ll say that as part of Vicky and as part of the discussions what we were hearing it here we feel it that responsibility is not anymore a compliance check which is supposed to be there it’s a commitment of the technology that we should develop it which has a shared human values that is something what decisions we take now not in terms of the words what we are discussing here is not going to define our future it’s not something which is what we create we define what we choose to create is something what will get defined that is something is very important so the choice comes out from the whole thing you have heard the panelist talking about whether it was from the input side to the output side give a very good example by taking it through the whole process it needs to be taken so we simply feel it whether it’s any layer it has to be developed in a way keeping people in mind and the theme of the summit people planet and progress should be kept in mind while doing any technological innovation which is keeping the principles of responsible AI into the mind.

That is something which we highly feel it and support it and I think with that I like to thank our esteemed panelists. Thank you Andy of course for joining us today. Chantri for moderating it and well capturing in time. And of course our panelists Dr. Satya Ramaswamy, Chief Digital and Technology Officer Air India. Mr. Amol Deshpande, Chief Digital Officer Head of Innovation RPG Group. Mr. Vishal Kanwati, CTO, NPCI Ms. Pratibha Mohapatra, Managing Director Adobe and the of course my lovely audience through which we could sail through this and as part of FICCI we are thankful that we could have this joint session with Adobe, the team of Adobe and Nita and Nanya who had worked with my team and the people at the background who delivered it.

So thank you all for joining us. We don’t end the discussion here. We end the session here. FICCI is committed to get this dialogue further into action with the support of the players. And we look forward to your joining in that time. Thank you. Thank you. Thank you.

A

Andy Parsons

Speech speed

191 words per minute

Speech length

2021 words

Speech time

632 seconds

Shift from AI Principles to Provable Practice

Explanation

Andy stresses that responsible AI must become a leadership discipline rather than a mere regulatory checkbox, moving it from theoretical slides to concrete compliance actions. He frames the panel’s theme as turning principles into provable practice.


Evidence

“So we want to position responsibility, AI today as a leadership and operating must and discipline rather than a regulatory obligation, although in 2026, I think it will be both” [9]. “So this shift from principles to provable practice is the theme of our panel today” [57]. “This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity” [58].


Major discussion point

Shift from AI Principles to Provable Practice


Topics

Artificial intelligence | The enabling environment for digital development


Importance of Open Standards and Interoperability

Explanation

He argues that responsible‑AI techniques must be interoperable, open and based on standards, citing the C2PA content credentials as a concrete example of an open, cross‑industry provenance standard.


Evidence

“Techniques you use for responsible AI should be interoperable, open, and standardized” [20]. “We are built on an open standard” [48]. “Five years later, there is an open standard called the C2PA content credentials” [63].


Major discussion point

Importance of Open Standards and Interoperability


Topics

Data governance | Artificial intelligence


Operational Challenges and Adoption Barriers

Explanation

Andy highlights practical hurdles such as metadata stripping, low consumer awareness, immature user interfaces and uneven adoption that impede transparent AI deployment.


Evidence

“There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI transparency and responsible AI will present you” [96]. “Many social media platforms strip metadata and remove that transparency when content is uploaded” [97]. “Consumer awareness is still very early” [98]. “And because consumer awareness is early, user interfaces are also quite early” [99]. “But adoption is uneven” [113].


Major discussion point

Operational Challenges and Adoption Barriers


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Role of Regulation versus Self‑Regulation

Explanation

He sees regulation as a catalyst that can drive good practices, while emphasizing that legislation should be reflected in working code rather than just statements on a slide deck.


Evidence

“I view regulation, like what’s happening here in India, as a catalyst for good practices” [128]. “Legislation may help here” [133]. “And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website” [134].


Major discussion point

Role of Regulation versus Self‑Regulation


Topics

The enabling environment for digital development | Artificial intelligence


Industry‑Specific Implementations and Examples

Explanation

Andy points to Adobe’s C2PA content credentials as a concrete responsible‑AI tooling that provides provenance and transparency across the ecosystem.


Evidence

“Five years later, there is an open standard called the C2PA content credentials” [63]. “If you browse LinkedIn and see this symbol, you have the C2PA content credentials” [64]. “I’m very privileged to provide some leadership in the C2PA, which hopefully many of you have heard of” [66].


Major discussion point

Industry‑Specific Implementations and Examples


Topics

Artificial intelligence | Data governance


S

Shantari Malaya

Speech speed

160 words per minute

Speech length

1621 words

Speech time

605 seconds

Shift from AI Principles to Provable Practice

Explanation

Shantari stresses that responsible‑AI principles such as fairness, accountability and transparency must be translated into concrete enterprise strategy frameworks to become operational reality.


Evidence

“So at the very outset, if you really see building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether it’s fairness, accountability, transparency, privacy and inclusivity, is really going to actually realistically be translated into enterprise strategy frameworks” [16]. “So this dialogue will examine some parts of responsible AI and how it’s really going to shape, reshape enterprise strategy” [17].


Major discussion point

Shift from AI Principles to Provable Practice


Topics

Artificial intelligence | The enabling environment for digital development


Operational Challenges and Adoption Barriers

Explanation

She notes that a one‑size‑fits‑all approach does not work; diverse industry templates and varying maturity levels create uneven adoption across enterprises.


Evidence

“One size doesn’t fit all” [111]. “So as you said, one size doesn’t fit all” [112]. “We face challenges” [109].


Major discussion point

Operational Challenges and Adoption Barriers


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Role of Regulation versus Self‑Regulation

Explanation

Shantari argues that while self‑regulation can be effective, harmonising global best practices with India’s emerging policies is essential for balanced governance.


Evidence

“Self‑regulation may be a way forward given the scale that we are operating on” [130]. “So how are we really looking at marrying global best practices with the diversity the scale, the fire in the belly that India has at this point of time we are really gearing to go but how are we really looking at it” [135].


Major discussion point

Role of Regulation versus Self‑Regulation


Topics

The enabling environment for digital development | Artificial intelligence


Industry‑Specific Implementations and Examples

Explanation

She references the National Payments Corporation of India (NPCI) as a key player in building AI‑driven payment infrastructure, illustrating sector‑specific responsible‑AI deployment.


Evidence

“NPCI, you know, largest, you know, digital payments infrastructure platforms” [173]. “We have Mr. Vishal Anand Kanwati, Chief Technology Officer, National Payments Corporation of India, NPCI” [174].


Major discussion point

Industry‑Specific Implementations and Examples


Topics

Artificial intelligence | The digital economy


A

Amol Deshpande

Speech speed

180 words per minute

Speech length

758 words

Speech time

251 seconds

Shift from AI Principles to Provable Practice

Explanation

Amol emphasizes that responsibility and accountability must be embedded at every AI layer, requiring orchestration across the whole ecosystem rather than isolated checklists.


Evidence

“So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for that” [4]. “When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using for, the responsibility needs to be there at every layer” [21]. “It has to be an orchestration of all the things” [31].


Major discussion point

Shift from AI Principles to Provable Practice


Topics

Artificial intelligence | Data governance


Operational Challenges and Adoption Barriers

Explanation

He points out that templates and guardrails must be tailored; a single solution cannot serve all industries, and learning must be disseminated through industry bodies.


Evidence

“One size doesn’t fit all” [111]. “See, it is a very diverse element and there is a different kind of templates which we need to do so” [114]. “It’s not like this one guardrails construct will work for everybody, but it would vary from industry to industry, function to function” [115]. “Those frameworks need to get translated and industry partnership is a very key thing here through the industry bodies” [82].


Major discussion point

Operational Challenges and Adoption Barriers


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Role of Regulation versus Self‑Regulation

Explanation

Amol sees regulation as a guiding principle that can enable opportunities, while legislation can provide the necessary catalyst for responsible AI.


Evidence

“Legislation may help here” [133]. “It’s more of a guiding principles which need to be given so that it gives an opportunity” [129].


Major discussion point

Role of Regulation versus Self‑Regulation


Topics

The enabling environment for digital development | Artificial intelligence


Industry‑Specific Implementations and Examples

Explanation

He references the RPG Group’s AI governance across all five AI layers and the need for a “bring‑your‑own‑AI” approach within the conglomerate.


Evidence

“It’s more about, you know, if one can say it will be more of a bring your own AI kind of a scenario in every function” [157]. “As part of the RPG group, you also represent enterprises that are deploying AI at scale” [155].


Major discussion point

Industry‑Specific Implementations and Examples


Topics

Artificial intelligence | The digital economy


P

Prativa Mohapatra

Speech speed

155 words per minute

Speech length

1118 words

Speech time

432 seconds

Shift from AI Principles to Provable Practice

Explanation

Prativa describes the ART (accountability, responsibility, transparency) framework as the first practice of AI governance, embedding it into product methodologies and business responsibility.


Evidence

“So the first practice of AI governance which we practice is art, which is accountability, responsibility and transparency” [32]. “And those frameworks now being taken up by many more is again an absolutely act of responsibility to back to the business” [7]. “Every new product that we have goes through a very strong, secure methodology with hundreds of steps inside it” [44].


Major discussion point

Shift from AI Principles to Provable Practice


Topics

Artificial intelligence | The enabling environment for digital development


Importance of Open Standards and Interoperability

Explanation

She highlights Adobe Firefly’s “nutrition label” and Acrobat Assistant as product‑level implementations that convey provenance and compliance through open‑standard‑based metadata.


Evidence

“Firefly, which is our Gen AI tool, actually embeds what Andy said, those content traditions” [76]. “Firefly is an example” [77]. “Acrobat Assistant follows the same principles that PDF was used to be created” [153].


Major discussion point

Industry‑Specific Implementations and Examples


Topics

Artificial intelligence | Data governance


Operational Challenges and Adoption Barriers

Explanation

She points out that legal and compliance teams must adapt to AI governance, and that MSMEs lack the resources to build dedicated AI compliance structures.


Evidence

“I’m sure every organization today has a legal team, has a compliance team” [59]. “Legal teams have to re‑opt to talk about AI compliance” [62]. “The MSMEs don’t have that luxury” [104]. “Small organizations cannot do that” [108].


Major discussion point

Operational Challenges and Adoption Barriers


Topics

The enabling environment for digital development | Capacity development


D

Dr. Satya Ramaswamy

Speech speed

187 words per minute

Speech length

1064 words

Speech time

340 seconds

Shift from AI Principles to Provable Practice

Explanation

He explains that safety is ensured by combining regulatory compliance with human‑in‑the‑loop controls, embedding safety knobs and continuous monitoring into the AI system.


Evidence

“We know that we need to obey the regulations and do the right thing that is safe for the customer, but we also help the human in the loop take control if at any moment we feel the safety is at risk” [41]. “We have embedded all the safety procedures all deep into the way that we handle it” [165].


Major discussion point

Shift from AI Principles to Provable Practice


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Operational Challenges and Adoption Barriers

Explanation

He notes challenges such as preventing jailbreaks, managing safety knobs, and continuous monitoring to keep the virtual assistant trustworthy.


Evidence

“We face challenges” [109]. “We don’t want any jailbreak to happen we don’t want problem injection to happen we don’t want any inappropriate thing to happen” [121]. “If you dial the safety knob too much then it is an inconvenience to the customer” [164].


Major discussion point

Operational Challenges and Adoption Barriers


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Role of Regulation versus Self‑Regulation

Explanation

He asserts that compliance with all applicable regulations does not hinder innovation and that regulatory safeguards are essential for large‑scale AI deployments.


Evidence

“We comply with all the regulations, and it doesn’t in any way constrain Indian innovation” [143]. “We comply with all the regulations” [142]. “Regulatory safeguards are required” [120].


Major discussion point

Role of Regulation versus Self‑Regulation


Topics

The enabling environment for digital development | Artificial intelligence


Industry‑Specific Implementations and Examples

Explanation

He showcases Air India’s generative‑AI virtual assistant, which has handled over 13.5 million queries while maintaining safety and compliance.


Evidence

“We launched the global airline industry’s very first generative AI virtual assistant out of India” [162]. “It has handled about 13.5 million queries so far from customers” [163]. “We have not had any challenge with any of the regulations because we comply with all the regulations” [166].


Major discussion point

Industry‑Specific Implementations and Examples


Topics

Artificial intelligence | The digital economy


V

Vishal Anand Kanwati

Speech speed

184 words per minute

Speech length

584 words

Speech time

189 seconds

Importance of Open Standards and Interoperability

Explanation

He describes the RBA framework and a custom language model that provide transparent, explainable decisions for transaction declines, illustrating the role of open, interoperable AI tools.


Evidence

“We build a small language model where you can go and actually chat and say what happened to this transaction why is it declined… this level of transparency” [51]. “One is a transparency” [53].


Major discussion point

Importance of Open Standards and Interoperability


Topics

Data governance | Artificial intelligence


Operational Challenges and Adoption Barriers

Explanation

He highlights the need to balance accuracy with low false‑positive fraud rates and to give users clear explanations for declined transactions.


Evidence

“I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high” [49]. “If a customer has a transaction that is failed, I think you should know why it has been failed” [55].


Major discussion point

Operational Challenges and Adoption Barriers


Topics

The digital economy | Building confidence and security in the use of ICTs


Role of Regulation versus Self‑Regulation

Explanation

He argues that regulation is essential to prevent AI misuse, emphasizing mandatory safeguards and the need for a balanced regulatory approach.


Evidence

“Regulations are required, especially because AI can go berserk” [120]. “Those safeguards are very much, very much required” [147]. “Regulation is one thing that I think we have to really take it as part of the initiatives, embed into our systems and then take it forward” [145].


Major discussion point

Role of Regulation versus Self‑Regulation


Topics

The enabling environment for digital development | Building confidence and security in the use of ICTs


Industry‑Specific Implementations and Examples

Explanation

He references NPCI’s AI chat that explains transaction declines and the broader RBA framework as concrete examples of responsible AI in payments.


Evidence

“We build a small language model where you can go and actually chat… explain why a transaction was declined” [51]. “RBA has also given the framework from a responsible AI meaty document” [51].


Major discussion point

Industry‑Specific Implementations and Examples


Topics

Artificial intelligence | The digital economy


S

Sarika Guliani

Speech speed

141 words per minute

Speech length

586 words

Speech time

249 seconds

Shift from AI Principles to Provable Practice

Explanation

Sarika stresses that responsibility has moved beyond a compliance checkbox to a commitment rooted in shared human values, shaping the future of technology.


Evidence

“Responsibility is not anymore a compliance check which is supposed to be there it’s a commitment of the technology that we should develop it which has a shared human values that is something what decisions we take now not in terms of the words what we are discussing here is not going to define our future… the choice comes out from the whole thing you have heard the panelist talking about whether it was from the input side to the output side give a very good example by taking it through the whole process it needs to be taken” [60].


Major discussion point

Shift from AI Principles to Provable Practice


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Role of Regulation versus Self‑Regulation

Explanation

She calls for balanced, light‑touch regulation, arguing that responsible AI should be guided by shared values rather than heavy‑handed mandates.


Evidence

“…what regulation is required while that’s a sentence would require another session on this because they would have a people who will talk about a light touch regulation versus a balanced regulation… responsibility is not anymore a compliance check… it’s a commitment of the technology that we should develop it” [60].


Major discussion point

Role of Regulation versus Self‑Regulation


Topics

The enabling environment for digital development | Human rights and the ethical dimensions of the information society


A

Announcer

Speech speed

129 words per minute

Speech length

129 words

Speech time

59 seconds

Importance of Open Standards and Interoperability

Explanation

The Announcer frames the session as focusing on trustworthy AI, emphasizing that trust, transparency and accountability are now essential, not optional.


Evidence

“Trust, transparency and accountability are no longer optional” [10]. “It’s about how responsibly we deploy it” [5]. “The conversation will center on advancing safe and trusted AI in the corporate landscape” [93].


Major discussion point

Importance of Open Standards and Interoperability


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Agreements

Agreement points

Responsible AI requires moving from principles to practical implementation with demonstrable systems

Speakers

– Andy Parsons
– Prativa Mohapatra
– Amol Deshpande

Arguments

The question has changed from “should we be responsible with AI?” to “can your systems prove you’ve been responsible?”


Responsible AI commitments on websites are starting points, not meaningful milestones – need standards and working code


Enterprises need legal teams and compliance teams to re-adapt for AI compliance, covering business strategy, ethical strategies, and regulatory compliance


Moving from generative AI to more complex scenarios and agentic AI requires orchestrated responsibility across all layers


Summary

All speakers agree that the era of theoretical responsible AI commitments is over, and organizations must now demonstrate actual working systems and processes that prove responsible AI implementation


Topics

Artificial intelligence | The enabling environment for digital development


Transparency and accountability must be built into AI systems, not added as afterthoughts

Speakers

– Andy Parsons
– Prativa Mohapatra
– Dr. Satya Ramaswamy
– Vishal Anand Kanwati

Arguments

Adobe’s Content Authenticity Initiative provides transparency for AI-generated content through C2PA standards


Adobe practices “ART” philosophy – Accountability, Responsibility, and Transparency – embedded in product methodologies


Air India uses generative AI to monitor the performance of their generative AI chatbot, with customer feedback mechanisms


NPCI built small language models to provide transparency when transactions fail, explaining why decisions were made


Summary

All speakers emphasize that transparency and accountability must be embedded in the core design of AI systems rather than being superficial additions


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Large enterprises have a responsibility to create frameworks that smaller organizations can adopt

Speakers

– Prativa Mohapatra
– Amol Deshpande

Arguments

Large enterprises creating AI technologies are responsible for developing frameworks that smaller organizations can adopt


MSMEs lack the luxury of shifting resources like large companies, requiring creators and big users to provide methodologies


Industry partnerships through bodies like FICCI are critical for disseminating learnings to organizations without access to such information


Summary

Both speakers agree that there’s a risk of creating a divide between large and small organizations in responsible AI adoption, and that larger enterprises must take responsibility for democratizing access to responsible AI frameworks


Topics

Artificial intelligence | Closing all digital divides


Regulatory intervention is necessary and should be viewed as a catalyst for good practices

Speakers

– Andy Parsons
– Vishal Anand Kanwati
– Dr. Satya Ramaswamy

Arguments

Regulation should be viewed as a catalyst for good practices rather than just reactive compliance


Regulations are mandatory because AI can go wrong at scale, requiring safeguards across the ecosystem


International airlines must comply with regulations across multiple jurisdictions without constraining innovation


Summary

All speakers agree that regulation is not only inevitable but necessary for responsible AI deployment, and that it can actually enable rather than constrain innovation when properly implemented


Topics

Artificial intelligence | The enabling environment for digital development


Open standards and interoperability are essential for responsible AI implementation

Speakers

– Andy Parsons
– Amol Deshpande

Arguments

Responsible AI techniques should be interoperable, open, and standardized like India’s UPI payment infrastructure


Adobe’s open standard approach ensures independent creators can apply the same provenance at zero cost as Fortune 500 enterprises


In conglomerates, responsible AI requires providing scalable, safe environments with guardrails rather than one-size-fits-all solutions


Summary

Both speakers emphasize that responsible AI cannot be achieved through proprietary solutions but requires open, standardized approaches that enable broad adoption and interoperability


Topics

Artificial intelligence | Information and communication technologies for development


Similar viewpoints

Both speakers from Adobe share the same philosophy about content authenticity and transparency, using the nutrition label analogy to explain how people should have the right to know how digital content was created

Speakers

– Andy Parsons
– Prativa Mohapatra

Arguments

Content credentials act like “nutrition labels” for digital content, allowing people to know what’s in their media


Firefly embeds content credentials so enterprises can be confident they won’t violate laws or face liability issues


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Both speakers from critical infrastructure organizations (aviation and payments) emphasize the importance of balancing AI efficiency with safety and user protection, prioritizing user experience and safety over pure performance metrics

Speakers

– Dr. Satya Ramaswamy
– Vishal Anand Kanwati

Arguments

Air India’s generative AI virtual assistant handles 97% of queries autonomously while maintaining safety through continuous monitoring


NPCI prioritizes keeping false positives low over high accuracy initially, ensuring genuine transactions aren’t incorrectly flagged as fraud


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Both speakers recognize the complexity of implementing responsible AI across diverse organizational contexts and the need for customized approaches while ensuring accessibility for smaller organizations

Speakers

– Amol Deshpande
– Prativa Mohapatra

Arguments

Different industries require different templates and guardrails, varying from function to function


MSMEs lack the luxury of shifting resources like large companies, requiring creators and big users to provide methodologies


Topics

Artificial intelligence | Closing all digital divides


Unexpected consensus

Regulation as enabler rather than constraint

Speakers

– Andy Parsons
– Dr. Satya Ramaswamy
– Vishal Anand Kanwati

Arguments

Regulation should be viewed as a catalyst for good practices rather than just reactive compliance


International airlines must comply with regulations across multiple jurisdictions without constraining innovation


Regulations are mandatory because AI can go wrong at scale, requiring safeguards across the ecosystem


Explanation

It’s unexpected that all speakers, including those from private enterprises, view regulation positively as an enabler of innovation rather than a burden. This consensus suggests a mature understanding that proper regulation can actually accelerate responsible AI adoption


Topics

Artificial intelligence | The enabling environment for digital development


Human-in-the-loop as fundamental design principle

Speakers

– Dr. Satya Ramaswamy
– Vishal Anand Kanwati
– Andy Parsons

Arguments

Aviation industry’s safety-critical nature provides embedded concepts of human-in-the-loop control and regulatory compliance


NPCI built small language models to provide transparency when transactions fail, explaining why decisions were made


The question has changed from “should we be responsible with AI?” to “can your systems prove you’ve been responsible?”


Explanation

The consensus on maintaining human oversight and control across different industries (aviation, payments, content creation) shows unexpected alignment on the fundamental principle that AI should augment rather than replace human judgment in critical decisions


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Overall assessment

Summary

The speakers demonstrate remarkable consensus on the need to transition from theoretical responsible AI principles to practical implementation, the importance of transparency and accountability built into systems, the necessity of regulatory frameworks, and the responsibility of large organizations to democratize responsible AI practices. There’s also strong agreement on the value of open standards and the need for human oversight in AI systems.


Consensus level

High level of consensus across all major themes, with speakers from different industries (technology, aviation, payments, conglomerates) aligning on fundamental principles. This suggests that responsible AI practices are becoming standardized across sectors and that industry leaders recognize both the opportunities and responsibilities that come with AI deployment. The consensus implies that responsible AI is moving from a competitive differentiator to a baseline requirement for enterprise AI adoption.


Differences

Different viewpoints

Industry self-regulation versus regulatory intervention necessity

Speakers

– Vishal Anand Kanwati
– Dr. Satya Ramaswamy

Arguments

Regulations are mandatory because AI can go wrong at scale, requiring safeguards across the ecosystem


International airlines must comply with regulations across multiple jurisdictions without constraining innovation


Summary

Vishal strongly advocates for mandatory regulations due to AI’s potential for widespread harm, while Dr. Satya emphasizes that existing regulatory compliance doesn’t constrain innovation and can coexist with technological advancement


Topics

Artificial intelligence | The enabling environment for digital development


Centralized versus decentralized approach to responsible AI in large organizations

Speakers

– Amol Deshpande
– Prativa Mohapatra

Arguments

In conglomerates, responsible AI requires providing scalable, safe environments with guardrails rather than one-size-fits-all solutions


Enterprises need legal teams and compliance teams to re-adapt for AI compliance, covering business strategy, ethical strategies, and regulatory compliance


Summary

Amol advocates for a decentralized ‘bring your own AI’ approach with flexible guardrails, while Prativa emphasizes the need for centralized compliance structures and standardized methodologies


Topics

Artificial intelligence | The enabling environment for digital development


Unexpected differences

Speed versus safety in AI deployment

Speakers

– Andy Parsons
– Dr. Satya Ramaswamy

Arguments

I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive


Air India’s generative AI virtual assistant handles 97% of queries autonomously while maintaining safety through continuous monitoring


Explanation

While both speakers advocate for responsible AI, Andy suggests that responsibility enables faster adoption, while Dr. Satya’s approach emphasizes careful monitoring and gradual scaling. This represents different philosophies on balancing innovation speed with safety measures


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Overall assessment

Summary

The discussion revealed relatively low levels of fundamental disagreement, with most tensions arising around implementation approaches rather than core principles. Key areas of difference included the necessity and role of regulation, centralized versus decentralized governance approaches, and strategies for democratizing responsible AI access


Disagreement level

Low to moderate disagreement level. The speakers largely aligned on the importance of responsible AI principles but differed on tactical approaches. This suggests a maturing field where practitioners agree on goals but are still developing best practices for implementation. The implications are positive – there’s broad consensus on the need for responsible AI, but healthy debate on optimal implementation strategies that can drive innovation in governance approaches


Partial agreements

Partial agreements

All speakers agree that democratizing responsible AI access is crucial, but they propose different mechanisms – Andy focuses on open technical standards, Prativa emphasizes enterprise responsibility for framework creation, and Amol highlights industry body partnerships for knowledge dissemination

Speakers

– Andy Parsons
– Prativa Mohapatra
– Amol Deshpande

Arguments

Adobe’s open standard approach ensures independent creators can apply the same provenance at zero cost as Fortune 500 enterprises


Large enterprises creating AI technologies are responsible for developing frameworks that smaller organizations can adopt


Industry partnerships through bodies like FICCI are critical for disseminating learnings to organizations without access to such information


Topics

Artificial intelligence | Closing all digital divides


All speakers acknowledge the importance of regulation, but differ on its role – Andy sees it as a catalyst for innovation, Dr. Satya views it as compatible with innovation, while Vishal sees it as a necessary constraint to prevent systemic failures

Speakers

– Andy Parsons
– Dr. Satya Ramaswamy
– Vishal Anand Kanwati

Arguments

Regulation should be viewed as a catalyst for good practices rather than just reactive compliance


International airlines must comply with regulations across multiple jurisdictions without constraining innovation


Regulations are mandatory because AI can go wrong at scale, requiring safeguards across the ecosystem


Topics

Artificial intelligence | The enabling environment for digital development


Similar viewpoints

Both speakers from Adobe share the same philosophy about content authenticity and transparency, using the nutrition label analogy to explain how people should have the right to know how digital content was created

Speakers

– Andy Parsons
– Prativa Mohapatra

Arguments

Content credentials act like “nutrition labels” for digital content, allowing people to know what’s in their media


Firefly embeds content credentials so enterprises can be confident they won’t violate laws or face liability issues


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Both speakers from critical infrastructure organizations (aviation and payments) emphasize the importance of balancing AI efficiency with safety and user protection, prioritizing user experience and safety over pure performance metrics

Speakers

– Dr. Satya Ramaswamy
– Vishal Anand Kanwati

Arguments

Air India’s generative AI virtual assistant handles 97% of queries autonomously while maintaining safety through continuous monitoring


NPCI prioritizes keeping false positives low over high accuracy initially, ensuring genuine transactions aren’t incorrectly flagged as fraud


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Both speakers recognize the complexity of implementing responsible AI across diverse organizational contexts and the need for customized approaches while ensuring accessibility for smaller organizations

Speakers

– Amol Deshpande
– Prativa Mohapatra

Arguments

Different industries require different templates and guardrails, varying from function to function


MSMEs lack the luxury of shifting resources like large companies, requiring creators and big users to provide methodologies


Topics

Artificial intelligence | Closing all digital divides


Takeaways

Key takeaways

2026 marks a critical transition from AI principles to provable practice, with regulatory enforcement making responsible AI a compliance necessity rather than optional


Responsible AI must be embedded at all layers of AI systems through orchestrated governance, not treated as a centralized compliance exercise or fragmented checklist


Content authenticity and transparency standards (like Adobe’s C2PA) provide ‘nutrition labels’ for digital content, enabling users to understand how content was created


Enterprise-scale AI governance requires balancing innovation with safety through human-in-the-loop controls, continuous monitoring, and industry-specific guardrails


Large enterprises have a responsibility to create frameworks and standards that smaller organizations and MSMEs can adopt, preventing a divide between big and small players


Cross-industry collaboration and open standards are essential for responsible AI implementation, similar to India’s UPI infrastructure model


Transparency and accountability must be built into AI systems from the ground up, with clear explanations for AI decisions and outcomes


Industry-led governance must work in conjunction with regulatory frameworks, as self-regulation alone is insufficient for managing AI risks at scale


Resolutions and action items

FICCI committed to continuing the dialogue and translating discussions into actionable frameworks with industry support


Enterprises should implement the ‘ART’ philosophy (Accountability, Responsibility, Transparency) in their AI governance practices


Organizations need to re-adapt legal and compliance teams specifically for AI compliance covering business strategy, ethical strategies, and regulatory compliance


Industry bodies like FICCI should facilitate knowledge sharing and framework dissemination to MSMEs and smaller organizations


Companies should adopt open standards and interoperable approaches for responsible AI implementation


Enterprises should establish continuous monitoring systems for AI performance with customer feedback mechanisms


Unresolved issues

The specific balance between light-touch regulation versus comprehensive regulatory frameworks remains undefined


How to effectively scale responsible AI frameworks across diverse industries with different risk profiles and requirements


The challenge of consumer awareness and adoption of content authenticity standards, as many users are still unfamiliar with these systems


Business case justification for responsible AI investments, particularly for smaller organizations with limited resources


Technical challenges around social media platforms stripping metadata and removing transparency when content is uploaded


The timeline and methodology for harmonizing global best practices with India’s specific scale, diversity, and regulatory environment


Suggested compromises

Starting with lower AI accuracy but keeping false positives very low, then gradually improving accuracy as more data and collaboration becomes available (NPCI’s approach)


Balancing safety controls with customer convenience by using AI to monitor AI systems and providing customer feedback mechanisms


Creating industry-specific templates and guardrails rather than one-size-fits-all solutions for responsible AI implementation


Adopting a ‘bring your own AI’ approach within enterprises while providing scalable, safe environments with appropriate guardrails


Viewing regulation as a catalyst for good practices rather than purely reactive compliance, encouraging proactive adoption of responsible AI frameworks


Thought provoking comments

2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for innovation… The question for everyone in this room has changed from should we be responsible with AI? I think that debate is well settled at this point, but can your systems actually prove that you have been responsible with AI, and how do you go about doing that?

Speaker

Andy Parsons


Reason

This comment fundamentally reframes the entire discussion by shifting focus from theoretical principles to practical implementation and proof of responsibility. It establishes a concrete timeline and transforms responsible AI from an abstract concept to a measurable business requirement.


Impact

This set the foundational framework for the entire panel discussion, moving all subsequent conversations away from ‘why’ responsible AI matters to ‘how’ to implement and demonstrate it. Every panelist subsequently focused on practical examples and implementation strategies rather than theoretical benefits.


We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children… we think that digital content has to have that same foundation of transparency.

Speaker

Andy Parsons


Reason

This analogy brilliantly simplifies a complex technical concept by connecting it to something universally understood. It makes the abstract concept of content provenance tangible and relatable, while also connecting it to consumer rights and democratic values.


Impact

This metaphor became a recurring theme throughout the discussion, with other panelists referencing transparency and traceability in their own contexts. It helped ground the technical discussion in everyday consumer experience and established transparency as a fundamental right rather than a nice-to-have feature.


It’s more of a bring your own AI kind of a scenario in every function. You cannot provide one solution. One size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment with protected with guardrails is a key thing for us.

Speaker

Amol Deshpande


Reason

This insight challenges the traditional centralized approach to enterprise technology deployment and recognizes the democratization of AI tools. It acknowledges that different business functions need different AI solutions while maintaining consistent safety standards.


Impact

This comment shifted the discussion toward the practical challenges of governance in decentralized AI adoption. It influenced subsequent speakers to address how to maintain consistency and safety across diverse use cases, leading to discussions about frameworks, templates, and industry-wide standards.


I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high. I think that was the first principles on which we started.

Speaker

Vishal Anand Kanwati


Reason

This reveals a counterintuitive but crucial insight about implementing AI in high-stakes environments – that perfect accuracy isn’t the primary goal, but rather minimizing harm to legitimate users. It demonstrates sophisticated thinking about trade-offs in AI system design.


Impact

This comment introduced nuance to the discussion about AI performance metrics and highlighted that responsible AI isn’t just about technical accuracy but about understanding and minimizing real-world negative impacts. It influenced the conversation toward considering user experience and trust as key metrics of responsible AI implementation.


So the creators of these technologies have to come together and keep on creating this method and methodology for others to adopt… It’s very hard… So small organizations cannot do that. So the people, process, technology changes that is required to adopt this, big guys can maneuver, shift people… The MSMEs don’t have that luxury.

Speaker

Prativa Mohapatra


Reason

This comment addresses a critical equity issue in AI adoption – that responsible AI practices might become a luxury only large enterprises can afford, potentially creating a two-tiered system. It highlights the social responsibility of technology leaders.


Impact

This observation shifted the discussion from individual enterprise strategies to collective industry responsibility and the broader societal implications of AI adoption. It prompted discussions about frameworks, industry bodies, and the role of larger players in democratizing responsible AI practices.


AI can go berserk… today all the UPI transactions can get declined… And that’s where we have a check where we say, you know, this is the only percentage that I can decline, even if I have to let go of the other transactions… So those safeguards are very much required.

Speaker

Vishal Anand Kanwati


Reason

This stark illustration of AI’s potential for systemic failure in critical infrastructure makes the abstract risks of AI very concrete. The image of an entire nation’s payment system failing due to AI malfunction is both vivid and terrifying, effectively arguing for regulatory intervention.


Impact

This comment provided the most compelling argument for why regulatory intervention is inevitable rather than optional. It moved the final discussion from whether regulation is needed to how it should be implemented, effectively settling the debate about industry self-regulation versus government oversight.


Overall assessment

These key comments fundamentally shaped the discussion by establishing a progression from theoretical principles to practical implementation challenges, and finally to systemic risks requiring collective action. Andy Parsons’ opening reframing moved the entire conversation from ‘why’ to ‘how,’ while his nutrition label analogy provided a accessible framework for understanding complex technical concepts. The subsequent panelists built on this foundation by sharing practical insights about implementation challenges, equity concerns, and systemic risks. The discussion evolved from individual enterprise strategies to industry-wide responsibilities and ultimately to the inevitability of regulatory intervention. The most impactful comments were those that either reframed the fundamental question, provided vivid analogies or examples, or highlighted previously unconsidered consequences – particularly around equity and systemic risk. Together, these comments created a comprehensive narrative arc that took the audience from principles through practice to policy implications.


Follow-up questions

How can social media platforms be encouraged or required to preserve content authenticity metadata instead of stripping it during upload?

Speaker

Andy Parsons


Explanation

This is a critical technical and policy challenge that affects the entire content authenticity ecosystem, as platforms currently remove transparency information when content is uploaded


How can consumer awareness of content authenticity symbols and provenance be increased effectively?

Speaker

Andy Parsons


Explanation

Low consumer awareness limits the effectiveness of content authenticity initiatives, and strategies are needed to make these symbols as recognizable as nutrition labels


What specific methodologies and frameworks can be developed to help MSMEs adopt responsible AI practices without the resources of large enterprises?

Speaker

Prativa Mohapatra


Explanation

There’s a risk that responsible AI becomes a luxury only large enterprises can afford, creating a divide that could harm smaller businesses and overall ecosystem development


How can industry bodies effectively disseminate responsible AI frameworks and learnings to create domain-specific templates for different industries?

Speaker

Amol Deshpande


Explanation

Different industries require different approaches to responsible AI, and there’s a need for systematic knowledge transfer mechanisms through industry partnerships


What is the optimal balance between industry-led governance and regulatory intervention for AI systems?

Speaker

Shantari Malaya


Explanation

This fundamental question about governance models remains unresolved and requires further exploration to determine the most effective approach for different contexts


How can global AI regulations and best practices be harmonized with India’s specific scale, diversity, and innovation requirements?

Speaker

Shantari Malaya


Explanation

India needs to balance compliance with international standards while maintaining its competitive advantage and addressing its unique market characteristics


What specific mechanisms can ensure AI systems remain fair and inclusive while scaling to handle massive transaction volumes like those at NPCI?

Speaker

Vishal Anand Kanwati


Explanation

The challenge of maintaining fairness and preventing false positives in high-volume, critical systems requires ongoing research and development


How can the business case for content provenance and AI transparency be strengthened beyond regulatory compliance?

Speaker

Andy Parsons


Explanation

Making responsible AI economically viable rather than just a compliance requirement is essential for widespread adoption


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.