Responsible AI in India Leadership Ethics & Global Impact part1_2

20 Feb 2026 18:00h - 19:00h

Responsible AI in India Leadership Ethics & Global Impact part1_2

Session at a glanceSummary, keypoints, and speakers overview

Summary

The session opened with the moderator emphasizing that responsible AI-grounded in trust, transparency and accountability-is now a foundational requirement for Indian enterprises [1-6]. Andy Parsons of Adobe framed the discussion as a shift from abstract AI principles to “provable practice,” noting that 2026 will see responsible AI become both a regulatory duty and a business opportunity [33-34][20-21]. He described Adobe’s leadership in the Coalition for Content Provenance and Authenticity (C2PA), an open, cross-industry standard that embeds provenance metadata directly into media files, enabling anyone to verify how content was created [54-62]. The C2PA’s core principles-transparency, provenance, accountability and inclusivity-are presented as “nutrition labels” for digital content, allowing users to trace models, tools and data behind each asset [74-80][81-84]. Andy also warned of uneven adoption, metadata stripping by platforms, low consumer awareness and the difficulty of building a profitable business case for provenance, arguing that standards, not merely principles, are needed to move forward [90-99][108-110].


In the panel, Amol Deshpande highlighted that responsible AI must be orchestrated across all five AI layers, involve people, processes and technology, and cannot rely on a single “one-size-fits-all” solution, coining a “bring-your-own-AI” approach [162-166][177-180]. Prativa Mohapatra explained Adobe’s internal “ART” framework (accountability, responsibility, transparency) and gave concrete examples such as Firefly, which tags generated outputs with “nutrition” metadata, and Acrobat Assistant, which ensures traceable, lawful document creation [197-199][209-214][224-228]. She stressed that legal and compliance teams must redesign their workflows to embed AI governance throughout the input-output lifecycle, otherwise enterprises risk falling short of future regulatory expectations [235-238].


Satya Ramaswamy described Air India’s generative-AI virtual assistant that has handled 13.5 million queries with a 97 % autonomous success rate, while continuous safety monitoring and customer feedback loops prevent jailbreaks and inappropriate responses [257-263][264-268]. He noted that partnerships with firms like Adobe provide “prompt firewalls” and indemnities that boost confidence in managing AI risk at airline scale [269-271]. Vishal Anand Kanvaty of NPCI emphasized transparency for declined transactions, using a language model to explain reasons to users, and argued that regulatory safeguards are essential to prevent false-positive fraud decisions and maintain trust in the payments ecosystem [293-298][370-376].


Across the discussion, participants agreed that industry-led standards, cross-sector collaboration and regulatory frameworks are all necessary to translate responsible-AI principles into operational practice, especially for MSMEs that lack internal resources [332-340][379-383]. Sarika Guliani of FICCI reiterated that responsible AI is a commitment to shared human values and that the “people, planet, progress” agenda must guide future innovation, with FICCI pledging to advance the dialogue into concrete action [379-383][389-390]. Overall, the dialogue underscored that moving from principle to practice requires open standards, robust governance, and coordinated regulation to ensure trustworthy AI deployment across India’s diverse enterprise landscape [108-110].


Keypoints


Major discussion points


From principles to provable practice – The panel framed responsible AI as moving beyond abstract ethics to demonstrable compliance, driven by new regulations such as the EU AI Act, California law and India’s IT rules, and positioning it as both a leadership imperative and a regulatory requirement [30-33][105-110][108-113].


Open, cross-industry standards for transparency – Adobe highlighted the C2PA (Coalition for Content Provenance and Authenticity) as an open, free standard that embeds provenance metadata directly into media assets; this model is being baked into Adobe products (e.g., Firefly, Acrobat) to give enterprises verifiable “nutrition labels” for AI-generated content [54-66][61-70][209-219].


Implementation challenges and governance needs – Speakers noted uneven adoption, metadata stripping by platforms, low consumer awareness, and the difficulty of building a business case for provenance. They stressed the necessity of robust governance, guardrails, and a shift from “check-list compliance” to operational frameworks [90-99][105-110][158-166].


Sector-specific responsible-AI deployments – Real-world examples were shared: Air India’s generative-AI virtual assistant that balances safety knobs, continuous monitoring, and human-in-the-loop escalation [257-270]; NPCI’s transparent fraud-prevention model that explains transaction declines and leverages AI while insisting on regulatory safeguards [286-301][370-376]; and RPG’s “bring-your-own-AI” approach that stresses orchestration across data, people, process and technology layers [162-180][185-190].


Overall purpose / goal


The session aimed to translate high-level responsible-AI principles into concrete, enterprise-ready practices for Indian corporations. By showcasing standards, regulatory trends, and concrete industry pilots, the discussion sought to equip leaders with actionable frameworks and to foster a collaborative ecosystem that can scale responsible AI across sectors.


Overall tone


The conversation began with an optimistic, forward-looking tone, emphasizing opportunity and collaboration. As speakers moved into challenges-such as uneven adoption, regulatory pressure, and implementation costs-the tone became more cautionary yet remained constructive, focusing on solutions and shared responsibility. The closing remarks returned to a hopeful, commitment-driven tone, urging continued dialogue and collective action.


Speakers

Vishal Anand Kanvaty


– Role/Title: Chief Technology Officer, National Payments Corporation of India (NPCI)


– Area of Expertise: Digital payments, AI-driven fraud detection and responsible AI governance [S1]


Sarika Guliani


– Role/Title: Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI


– Area of Expertise: AI policy, industry standards, responsible AI implementation [S3]


Dr. Satya Ramaswamy


– Role/Title: Chief Digital and Technology Officer, Air India Limited


– Area of Expertise: Aviation technology, AI-enabled customer service, safety-critical AI systems [S5]


Shantheri Mallaya


– Role/Title: Editor, Economic Times (Panel Moderator)


– Area of Expertise: Journalism, technology policy, AI ethics and industry discourse [S8]


Prativa Mohapatra


– Role/Title: Vice President and Managing Director, Adobe India


– Area of Expertise: Product governance, responsible AI, content authenticity and AI-driven creative tools [S11]


Andy Parsons


– Role/Title: Global Head for Content Authenticity, Adobe (runs the Content Authenticity Initiative)


– Area of Expertise: Content provenance, AI transparency, standards development (C2PA) [S13]


Amol Deshpande


– Role/Title: Group Chief Digital Officer and Head of Innovation, RPG Group


– Area of Expertise: Digital transformation, enterprise AI strategy, responsible AI implementation [S15]


Moderator


– Role/Title: Session Moderator (unnamed)


– Area of Expertise: Event facilitation, AI discussion moderation [S19]


Additional speakers:


Nita – mentioned in closing remarks; no role or expertise specified in the transcript.


Nanya – mentioned in closing remarks; no role or expertise specified in the transcript.


Full session reportComprehensive analysis and detailed insights

The session, presented by Adobe in association with FICCI, opened with moderator Shantari Mallaya (Economic Times) welcoming participants to “Responsible AI from Principles to Practice in Corporate India.” She framed trust, transparency and accountability as “foundational, not optional” for India’s accelerating digital transformation [5-6].


Andy Parsons, Global Head for Content Authenticity at Adobe, set the tone by declaring 2026 the year responsible AI becomes both a regulatory duty and a strategic opportunity. He highlighted that the EU AI Act’s enforcement provisions take effect in August, that California’s first AI law is already in force, and that India’s new IT rules on SGI are being implemented, shifting the business question from “should we be responsible?” to “can you prove you are responsible?” [24-33]. Parsons introduced Adobe’s leadership in the Content Provenance and Authenticity (C2PA) credentials, an open, free, cross-industry standard that embeds provenance metadata directly into media files, enabling anyone to verify a piece of content’s origin, model and tools [55-62]. He described this “nutrition-label” approach as essential for India’s massive digital population, where synthetic content and AI-generated misinformation pose real operational risks. He also warned of challenges: social-media platforms often strip metadata [89-92], consumer awareness of provenance symbols remains low [95-99], and building a profitable business case for provenance remains challenging [108-110]. Consequently, he argued for standards-based infrastructure rather than mere principles, and likened regulation to a catalyst that pushes good practice without being punitive [105-108].


After the opening, Mallaya positioned the panel as a deep dive into translating responsible-AI principles-fairness, accountability, transparency, privacy and inclusivity-into concrete enterprise strategies [144-150].


Amol Deshpande, Chief Digital & Innovation Officer, RPG Group, responded that responsibility must be orchestrated across the five AI layers (data, model, inference, deployment, monitoring) and cannot rely on a single solution. He advocated a “bring-your-own-AI” approach, where each function selects appropriate guardrails while the organisation supplies a scalable, safe environment and governance templates adaptable to diverse business units [162-166][177-184]. He emphasized people as the critical stakeholder, calling for extensive up-skilling to embed human judgement into increasingly complex generative and agentic AI systems [169-176].


Prativa Mohapatra, Vice-President & Managing Director, Adobe India, outlined Adobe’s internal ART (Accountability, Responsibility, Transparency) philosophy and how it is baked into product development pipelines through hundreds of validation steps. Across Adobe’s portfolio-Firefly and the Acrobat Assistant-every AI-generated output carries a content-credential tag that confirms licensing, data compliance and model traceability, thereby shielding enterprises from legal liability and requiring legal and compliance teams to redesign workflows to embed AI governance throughout the input-output lifecycle [209-218][224-232][235-238].


Satya Ramaswamy, Chief Digital and Technology Officer, Air India, illustrated a sector-specific deployment: a generative-AI virtual assistant launched in May 2023 that has handled 13.5 million customer queries with a 97 % autonomous success rate. The system balances a “safety knob” that prevents jailbreaks and inappropriate responses with a seamless user experience, using generative AI both to serve customers and to monitor its own performance. He likened the design to an autopilot/red-button safety-critical analogy, emphasizing human-in-the-loop oversight and “prompt firewalls” provided through Adobe partnerships that bolster risk management without stifling innovation [257-274][332-336].


Vishal Anand Kanwati, CTO, National Payments Corporation of India (NPCI), described AI-driven fraud detection that maintains fairness. NPCI began with a low false-positive threshold and, through data-driven model refinement and industry collaboration, achieved higher accuracy. A small language model now explains to users why a transaction was declined, delivering transparency that builds trust in the payments ecosystem. He stressed that regulatory safeguards are indispensable to prevent AI from “going berserk” and referenced the RBI’s responsible-AI framework as a guiding standard [286-293][298-302][370-376].


Points of Agreement

* All speakers endorsed the need for transparent provenance of AI-generated content – via C2PA credentials (Andy) [55-62], Adobe’s ART-driven content-credential tags (Prativa) [209-218], and NPCI’s transaction-explanation model (Vishal) [286-293].


* They concurred that open, standards-based infrastructure and reusable frameworks are essential for scaling responsible AI, with industry bodies such as FICCI, C2PA and RBI playing pivotal dissemination roles [66-70][297-304][332-340][344-347].


* Regulation was uniformly seen as a catalyst that must coexist with innovation (Andy) [105-108].


* Both Satya and Amol highlighted the critical importance of human-in-the-loop oversight and adjustable guardrails for safety-critical applications [180-182][360-362].


Points of Disagreement

1. Regulation intensity – Vishal argued that mandatory safeguards are essential to prevent harmful AI behaviour [370-376]; Sarika Guliani cautioned that regulation should be balanced and proportionate [379-382]; Andy positioned regulation as a catalyst that encourages good practice without being punitive [105-108].


2. Scope of standards – Andy promoted a single, open C2PA standard as the foundation for provenance [55-62]; Amol counter-argued that “one size does not fit all”, advocating sector-specific templates and a “bring-your-own-AI” model [168-180]; Prativa warned that without free, universally accessible frameworks the divide between large enterprises and MSMEs would widen [297-304].


3. Primary driver of adoption – Amol emphasized an awareness → action → demonstration pathway, with industry bodies disseminating frameworks [332-340]; Vishal insisted that regulation is indispensable for ecosystem safety [370-376]; Sarika stressed that responsible AI is a commitment to shared human values, not merely a compliance checkbox, and should be guided by the “people, planet, progress” agenda [383-389].


Key Take-aways

– Responsible AI must move from high-level principles to provable, operational practice.


Transparent provenance, enabled by open standards such as C2PA, is a cornerstone for trust.


– Effective governance requires coordinated people, process, technology and industry-body layers, not a simple checklist.


– Emerging regulations (EU AI Act, India’s IT rules, state-level AI laws) act as catalysts that should coexist with innovation.


Sector-specific pilots-Air India’s AI assistant, NPCI’s fraud-explanation service, RPG’s flexible governance, Adobe’s ART-driven products-demonstrate practical pathways.


– Without open, free frameworks, responsible AI risks becoming a luxury for large firms, leaving MSMEs behind.


Closing Remarks

Sarika Guliani (FICCI) concluded that responsible AI is a commitment to shared human values rather than a mere compliance checkbox, and that the “people, planet, progress” agenda must guide all technological innovation. FICCI pledged to continue the dialogue and translate the insights into concrete actions for the Indian ecosystem [383-389][389-390].


The moderator thanked the panelists and the audience, signalling that the conversation will move from discussion to implementation.


Session transcriptComplete transcript of the session
Moderator

I’d like to welcome you all to this session titled Responsible AI from Principles to Practice in Corporate India presented by Adobe in association with FICCI. India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and productivity. But the real differentiator, is it about how quickly do we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite Andy Parsons, our Global Head for Content Authenticity at Adobe.

Andy, over to you.

Andy Parsons

Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.

I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity. I’m going to talk about that in a minute. I’m going to talk about that in a minute. I’m going to talk about that in a minute. of you in this room. This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.

And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week and certainly will continue to change this year, from should we be responsible with AI? I think that debate is well settled at this point. But can your systems actually prove that you have been responsible with AI, and how do you go about doing that? And for all of us, what does it cost in terms of implementation and day -to -day usage? So we want to position responsibility, responsible AI today as a leadership and operating must and discipline rather than a regulatory obligation, although in 2026, I think it will be both.

So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across a cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Pratibha in a moment.

But it’s made the trust problem absolutely impossible to address. So I think every enterprise in this room now produces or consumes AI. generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that. Hundreds of millions of people consuming digital content every day.

In this environment, synthetic content and AI -generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to provide some leadership. I’m very privileged to have leadership in the C2PA, which hopefully many of you have. have heard about this week, which is the Coalition for Content Provenance and Authenticity. And our sort of piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing totally free.

So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, digital experience products, but had to be baked into the tools kind of at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others. And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have the C2PA content credentials.

I’ve encountered a content credential, and it will provide transparent context about a piece of media, whether it’s a video, audio, or video. or image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross -industry coalition that includes all the companies I mentioned, meta camera manufacturers like Sony, Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others. And the goal is for an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company.

It should be standards -based. It should not be proprietary, but available to everyone. And this shared philosophy I think is especially important here in India. And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website. So what are the principles that the Content Authenticity Initiative and the CTPA reflect? Transparency. Provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used. Simple ideas like knowing that a photograph is actually a photograph and not generated.

These are simple ideas. I’d say we’re well overdue having this ability, but we need it now more than ever. Accountability. You can trace those AI models, understand what was used, how it was used, and thereby understand the prominence if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children. No one’s going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.

Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India. We can apply the same kind of provenance at the same zero cost. as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI transparency and responsible AI will present you, and I’m sure you’ll hear more about that with our esteemed panel. But adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here. We’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it.

Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are telephones, there are phones, cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I have often said that doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money. I’m not sure if that’s true. I’m not sure if that’s true. I’m not sure if that’s true. But it is critically important. And now we’re seeing that change as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.

What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. Responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference that we’re here to talk about today, the difference between principle and practice. I think you need cross -industry infrastructure. Techniques you use for responsible AI should be interoperable, open, and standardized. And I think you have a long track record here in India of mobilizing things like UPI payment infrastructure that not a single bank could do or a single government agency, but required massive scale cooperation for openness, standards, and most importantly, interoperability.

And last, enterprises. Like many of yours in this room, that are willing and excited to go first that really look at transparency and AI responsibility as an opportunity, not a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again. And in 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things. And that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries, because we have representatives from all of them on our excellent panel today.

So we’re fortunate to have leaders from Air India and PCI, RPCI, and the U .S. Department of Defense, and Adobe. each of whom is navigating and translating the sort of remarks that I’ve made in different ways for different outcomes and providing, I would say, exemplary ways to find their pathway through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantari Malaya, editor at the Economic Times, who covers these infrastructure and societal sort of breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me. And let’s get on to our panel. Shantari.

Shantheri Mallaya

Thank you so much, Andy. That was fantastic. It set the context very rightly for the next discussion coming up. So a warm welcome once again from my end. My name is Shantari Malaya. I’m editor at Economic Times. Welcoming you all to the panel. Right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI. very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI, industry, infrastructure, policy perspectives, it really becomes important to know what some premium leaders in the country are thinking about this. So this dialogue will examine some parts of responsible AI and how it’s really going to shape, reshape enterprise strategy.

So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanwati, Chief Technology Officer, National Payments Corporation of India, NPCI. We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe at India. This is a fantastic lineup and we shall get some very sharp insights from our panelists over the next half hour or so. So at the very outset, if you really see building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether it’s fairness, accountability, transparency, privacy and inclusivity, is really going to actually realistically be translated into enterprise strategy frameworks and how we are going to go about it.

Right. So, Amol, let me call you into the discussion. Warm welcome. I’m keeping a tight watch on the timer right behind us. As we know, this is a summit of scale and we really need to help the organizers to clock good time. So, Amol, very quickly, as I invite you into this discussion, you represent enterprises at, you know, while you are participating. As part of the RPG group, you also represent enterprises that are deploying AI at scale. and many more that are just finding their footing as well. So there’s a huge spectrum and diversity in the kind of organizations that you are representing. Two things for you very quickly. One is in large multiple, you know, businesses such as the RPG group, how are you really preventing responsible AI from really becoming something like a just mere centralized compliance exercise or something that’s on the other flip side becoming a fragmented business unit wise, you know, checklist.

So there are two risks that can happen, right? In a group, in a conglomerate, large scale or decentralized. So how are you really looking at the balance here? And how do you really see your role in an industry body as well? So all yours.

Amol Deshpande

Thank you, Shantiri. I’m very happy to be here. Thank you for having me with the, you know, esteemed panelists here. You ask a very pertinent question. You know, it’s to be or not to be, kind of a scenario when it comes to AI, but that not to be is not really a choice. and it did mention about the responsible AI and I would take a little stab at peeling and looking at where the responsible AI comes from when it comes to industries. It comes across all those five layers of the AI when we are looking at it. When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using for, the responsibility needs to be there at every layer.

It’s not one or the other. It has to be an orchestration of all the things. So far, AI in its very nascent forms had been a thing of center of excellences, trying use cases and seeing what it is happening, but now it has come to a scale. And when it comes to enterprises and manufacturing enterprises, which we have significantly higher share in terms of as a consumer of AI technologies, there is a very clear -cut view on how it is to be done. So you need to provide the playground for the enterprise, to operate function with agility. The other part is about people. The people is a very, very important stakeholder in the whole thing.

We are moving from generative AI to AI ML, more complex scenarios and agentic AI. So people is a very, very important aspect of it. The choice still remains with human sense. The awareness is important and enterprises like us spend a significant amount of time and effort in building that skill sets amongst the value chain of all the people who are doing it. Last but not least is the process and governance part which comes with it. It’s more of a guiding principles which need to be given so that it gives an opportunity. It’s more about, you know, if one can say it will be more of a bring your own AI kind of a scenario in every function.

You cannot provide one solution. One size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment with protected with guardrails is a key thing for us. Orchestration and getting a scale. That’s something which is there. And that. But those templates are being exercised, practiced within the enterprises, practice it in a very diverse group like RPC at ourselves. And then that can be deployed at multiple

Shantheri Mallaya

Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me quickly bring in Prativa here. Welcome, Prativa. So Adobe is consistently really positioned responsible as a product philosophy and a trust commitment. You may just have to switch that on. So Adobe is constantly spoken about responsible AI as a very fairly large commitment. So how do you really look at all these principles manifesting or panning out in terms of operationalizing it among all your product teams? And sending it out. as a strong positioning internally? And at the same time, as someone who’s led a lot of industry conversations, where do you really see enterprises struggling to get these things right?

So all yours here.

Prativa Mohapatra

Okay, thank you. So I think Andy said the context. And since we are here not to learn about the principles, but the practices, I think everybody should go back with certain practices. So the first practice of AI governance, which we practice, is art. Which is accountability, responsibility, and transparency. So if every person goes back to their organizations and talks about art, which is our philosophy, that’s practicing philosophy number one. And we actually have been doing this for our own products for a while now. And of course, we are in the business of content for a very, very long time. And now the same content is becoming the currency which everybody’s debating. So our principles have been there for a while.

But. How it is actualized? and let me say how it’s translated into our products. And by the way, it’s in our products, it’s in our methodologies. Every new product that we have goes through a very strong, secure methodology with hundreds of steps inside it. So there’s principles embedded into how we create stuff. But a couple of examples. Firefly, which is our Gen AI tool, actually embeds what Andy said, those content traditions. Nutritions. It is anything that you have being generated out of this product will have that nutrition level. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law, you will not be getting into any liability issues.

Because how you do it is by feeding. Because AI is all about the input and output. So the input has to be something which will not land you in the trouble. You cannot take somebody else’s data. So here it is everything licensed. So it goes into the models, and what you create, the output which comes, then you have to test that output. With that output, will we be accountable? Will we be responsible in showing the transparency of how this was created? So I think that loop has to be created in using any AI. Firefly is an example. Let me talk about Acrobat, which everybody has. I’m sure 100 % of you have PDF files on your phones or on your machines.

So Acrobat has this new feature called Acrobat Assistant. It is agentic, but we have so many chatbots in the market. But when you come to an assistant like Acrobat Assistant, it is following the same principles that PDF was used to be created. So everybody is confident when you’re using PDF. So today, you would have read in the papers recently, the Supreme Court was very worried. that there were certain lawyers who had the petitions which had reference to cases which do not exist or it had certain laws stated which are fictitious. So imagine somebody’s created certain content using some sources which were not authentic. Now, if you use acrobat kind of products for that, you feed the data or you feed files from your own machine.

So you’re confident that what comes out of it, you can go back. So wherever there is this usage of high -stake output, enterprise -grade, you have to look at this input -output process and follow the philosophies within it. And I think for every enterprise who is doing that today, they really have to already, Amal talked about the people process technology. I’m sure every organization today has a legal team, has a compliance team. But these teams have to re -opt and re -design and re -design to talk about AI compliance. Enterprises… they do business strategy, they have ethical strategies and then they have regulatory compliance. All the three anything that you do in AI, ensure that you tick all the three.

If you miss any one, you might not be ready for the future. So that’s how I see it.

Shantheri Mallaya

Absolutely. So I guess the thread of most of what AI now entails is about are we not moving fast enough or are we moving so fast that we’re not really able to own the operational consequences of what we’re trying to do as well, setting out to do as well. So great point on that Pratibha. I’ll circle back to you, time permitting. Let’s see how best we can get back. Dr. Satya, calling you in here. So aviation volumes, landscape, scale, I mean you name it, it’s all there. So how are we really looking at balancing AI driven innovation really? Where you’re looking at regulation, you’re looking at accountability, you’re looking at operational efficiency. At the same time, you cannot really compromise on user and customer experience.

How do these things really fall in place in terms of vision and metrics?

Dr. Satya Ramaswamy

Thank you, Shantiri. Since the audience is international, a real quick introduction about Air India. Air India is India’s national flag carrier. We operate about 300 aircraft, and we carry about more than 100 ,000 customers a day. And we have a few hundred airplanes on order. So once they are delivered, we will be one of the biggest airlines in the world on the size of one of the large three American carriers. So we are building it up to an airline of scale, and it brings about very interesting challenges that we talked about. So let me illustrate the way we handle it with one of our own examples in generative AI. So in May of 2023, we launched the global airline industry’s very first generative AI virtual assistant out of India.

So it was a global first in the whole airline industry. Today, it has handled about 13 .5 million queries so far from customers. about 40 ,000 queries a day and it operates at a cost which is 100 per query of a contact center and if you look at the customer preferences over a period of last two and a half years we have been operating this facing all the challenges you mentioned so from a customer preference perspective 50 % of the contact volume goes to the contact center they want to talk to human agent the remaining 50 % of the contact volume comes to A .G out of which it handles 97 % of the queries autonomously only 3 % are escalated further to the agent so pretty high success rate and we faced this challenge from day one we started working on it in November 2022 when the whole Azure OpenA services are available we started working on it and the whole approach to responsibility safety has evolved over a period of time so if you dial the safety knob too much then it is an inconvenience to the customer we practically cannot answer any question because the customers are always changing the way they ask certain thing and we have to be very flexible and clearly Generative AI takes us a large step towards that.

At the same time, we don’t want any jailbreak to happen. We don’t want problem injection to happen. We don’t want any inappropriate thing to happen. So we are watching the whole performance of the Gen A virtual assistant, A .G as we call it, all the time. So we use, in fact, Generative AI to watch the performance of the Generative AI chatbot and also we have given the voice to the customer, right? So at the end of the day, when we send a response, we also ask the customer, you know, did it answer your question? And also allow them to give their reactions, right? Is it appropriate, inappropriate? And thankfully, over the last 20 years, it has not answered one single question in an inappropriate fashion because we have embedded all the safety procedures all deep into the way that we handle it.

But now we are, as the technologies are maturing, for example, we now have… interesting technologies in terms of the prompt firewalls where we can centralize all these controls and obviously work with great partners like Adobe who do their diligence and the way that they have deployed some of these technologies, giving full indemnity to us in the event of a problem. That gives a lot of confidence in the way that we manage the risk. So it’s about managing the risk of something that is not within the bounds happening versus the convenience of the customer and we handle it in a variety of ways like I talked about just now.

Shantheri Mallaya

Excellent. And given the kind of scale you’re operating at, I think every day is a new day. Yes, it is. We face challenges. There is new, brand new every day. Absolutely well stated. Thank you, Dr. Satya. Vishal, may I kind of call you into the discussion? We’re waiting to hear about you. Again, what do I say? NPCI, you know, largest, you know, digital payments, infrastructure platforms. you kind of call the shots in terms of taking some of those calls, you know, for want of a better coinage, you know, in terms of how the payments systems in this country move. So two quick questions here, or rather I’ll sort of, you know, phrase it into one so that, you know, we can get a comprehensive view from you.

How are you really looking at AI in terms of really, you know, being inclusive, you know, in trying to ensure fairness when it comes to two parts? One is when you’re looking at how India can play an important part in creating a responsible AI by design for a digital, national digital infrastructure platform such as yours. And B, how are we really looking at, given the volume, scale, size, fraud also becomes an unfortunate part of the entire, you know, discussion. So how are we really looking at AI to be, fair at the same time proactive and detective when it comes to looking at fraud? So how, what are the aspects that you look at? keenly here

Vishal Anand Kanvaty

I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high. I think that was the first principles on which we started. But over a period of time, once we had more data, once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So I think this was the fundamental principles when, you know, I mean, and then once we started with the success, we were able to understand the customers better, you know, their patterns better. And that gave us a lot of insights into, you know, fine tuning the models and taking it forward.

Absolutely. So coming to the first question, you know, that you asked, I think, you know, I think there are obviously the governance principles are core to it. But I think I would like to call out two of the things that is there. One is a transparency. If a customer has a transaction that is failed, I think you should know why it has been failed. and today we build a small language model where you can go and actually chat and say what happened to this transaction why is it declined and you know even if it is declined due to a fraudulent transactions uh you know due to a suspicious activity we can actually tell him today saying that you know this is where we feel you know you normally do you know don’t send this transaction or you don’t scan a qr ever this is the first time you’re doing so this is the reason why we have sort of declined so this level of transparency and ensuring there’s a person you know obviously we can’t you know have the army of people sitting and answering these questions but building systems and answering those questions is very very important and i think we have a beautiful framework rba has also given the framework from a responsible ai meaty document is fairly comprehensive so i think all the principles have to be adopted there is absolutely no choice for us um and and i think i i don’t see it as a challenge at all because in our experience it’s been very very helpful to obviously ensure the trust in the payment system is not compromised

Shantheri Mallaya

absolutely and also the fact that you know as you said given the scale that you’re operating in I’m also itching to ask you some things but maybe I’ll pick your brains offline in terms of the human in the loop there but yeah that’s a discussion for another time so Pratibha curious to know here so responsible AI while you know in letter and spirit remains there do you think it kind of remains or rather getting relegated to the risk of becoming something of an enterprise large enterprise luxury is it able to cut across and come down the line to you know aspiring businesses growing businesses MSMEs is it able to cut through the noise what’s the responsibility of industry leaders and the larger enterprises in kind of harmonizing the framework that can define responsible AI How would you look at this?

Is it getting risked?

Prativa Mohapatra

Absolutely, yes. I think we stand in that time when on the creators of the AI technology, the big guys versus the small guys, that divide might just become very stark. Coming down to the users of the AI, which is the big enterprises and the MSMEs who are in a big rush to make profit, do something, so that divide can happen. Hence, it is responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It’s a tongue twister, but I think the responsibility is very, very big right now. Again, that Adobe’s example, while the entire AI big bang started happening after November 2022, so early 2023 -2024, but I think our models were there, or our content authentication initiative was from 2019.

So, I think that’s a big thing. I think that large enterprises who create technologies absolutely are responsible. And those frameworks now being taken up by many more is again an absolutely act of responsibility to back to the business. And so the creators of these technologies have to come together and keep on creating this method and methodology for others to adopt. Now the users of these enterprise AI -grade technologies. It’s very hard. I mean, I think 10 years back we had digital transformation. Now we are having AI transformation. So the big companies have to quickly create a new org structure, have to create the legal teams, which, by the way, had to just mull over the digital guidelines of various continents, countries, now have to go through the AI guidelines of countries.

So you have to infuse more people into that. Legal teams. So small organizations cannot do that. So the people, process, technology changes that is required to adopt this, big guys can maneuver, shift people, oh, we will take out people from here, put there. The MSMEs don’t have that luxury. So I guess it is creators have to create frameworks so the right technology is created. The users, the big guys, have to quickly tell the methodology. And then the other stakeholders, like the service providers, also quickly have to go from, like I come from this industry where we had custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?

Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since AI, as we have been hearing words like it’s a civilization change, it is similar to electricity and it will change everything. So because of the impact… at society level to each one of us, the governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem. Absolutely.

Shantheri Mallaya

Very rightly stated about the fact that there is a larger collective responsibility from the bigger players in trying to define the standards. I think it’s very critical. Amol, if I may ask you, like Pratibha said that there is an accountability. MSMEs are growing and there is also policy that supports the growth at this point of time, right? So in their hurry to really scale and innovate, they are often forgetting what guardrails and consequences they have to face when it comes to their AI policies, strategies, implementations. So what’s the role of the ecosystem? The industry, industry bodies and the entire ecosystem at large in helping responsible AI moving in letter and spirit. So

Amol Deshpande

Shantiri, I think the first step towards being responsible towards anything is awareness, right? So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for that. That’s the first thing. Second is comes the action part of it, awareness, action, and then you demonstrate it through your product services or whatever you are trying to create and generate that kind of impact. How does that allocate? I echo the sentiment which Pravita has mentioned here is in terms of big players have to come up with those frameworks. Those frameworks need to get translated and industry partnership is a very key thing here through the industry bodies, right?

Where the learnings have to be disseminated into this. Second is it’s more of a demand and supply kind of a thing. So if the supply is there with the kind of right guardrails and responsible aspects which are there as a part of framework, then naturally the supplier starts aligning to it. For a business like us where we deal from, infrastructure to healthcare. and IT to agriculture, tires. See, it is a very diverse element and there is a different kind of templates which we need to do so. Organizations like us have the responsibilities of creating a framework which will be fair to us as well as to our customers and partners and those learnings can be shared across the value chain where MSMEs may not have access to that kind of an information and that to domain specific.

Mind you, this would change. It’s not like this one guardrail construct will work for everybody, but it would vary from industry to industry, function to function, and that kind of a cascading through the industry bodies like FICCI and others is I think very, very critical. Absolutely. Thank

Shantheri Mallaya

you for that Amol. Dr. Satya, as we know now that there are enough global regulations or rules or recommendations that have come in. You have the EU AI Act, you have UNESCO’s recommendations, you have the OECD rules, I mean the principles, so on and so forth, and India is also kind of inching towards developing its own strategies, policies, and approaches at this point of time. so the real leadership question at this point of time that remains is how are we looking at marrying global best practices with the diversity the scale, the fire in the belly that India has at this point of time, we are really gearing to go but how are we really looking at it and besides of course we have a lot of domestic regulations industry wise as well, we have regulators we have even the DPDP Act, we have so many things that have come in how are we going to marry all of this and create harmony absolutely, I

Dr. Satya Ramaswamy

think taking Air India, we are an international airline so we operate in many countries, for example we go to North America, US, where the federal aviation regulation is the key regulator, then we go to Europe all places in Europe, then obviously we operate in India where the DGCA is the regulator and they are doing a great job overseeing this industry and likewise in other parts of the world, so by nature we are geared to looking at the regulation in all parts of the world and I think and being in compliance, and our customers are international, and when we operate in this international geographies, we have to comply with the appropriate regulations. And again, you know, aviation is a very safety -critical industry, right?

So what we do has a direct impact on the safety of the customers that we carry. And the notions are well embedded in the industry because of this, because it’s highly regulated. For example, even, you know, many of these planes can practically land themselves, right? I was in a simulator last week for an Airbus A320 and landing that plane in San Francisco Airport. And as we were coming in, the plane was set at seven miles from touchdown, and, you know, my trainer pilot gave me the control. And, you know, so I could run the plane practically on autopilot all the way. At the same time, you know, there is a red button in the joystick, so at any moment I feel that the airplane is not doing the right thing.

the autopilot control is not in the right thing and quickly cancel and take our control, right? So that is – this concept is well embedded in the airline industry. So we know that we need to obey the regulations and do the right thing that is safe for the customer, but we also help the human in the loop take control if at any moment we feel the safety is at risk. So bottom line, you know, we comply with all the regulations, and it doesn’t in any way constrain Indian innovation. For example, again, like I mentioned, we launched the global airline industry’s first -generation virtual agent out of India, and we have not had any challenge with any of the regulations around because we comply with all the regulations and we work with partners who approach it in the same spirit like Adobe.

Absolutely.

Shantheri Mallaya

So, Vishal, taking a thread from what Dr. Satya said, I’ll close with you here. Is industry -led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in some other related industries where they said, you know, it’s very difficult to answer this. But, yeah, self -regulation may be a way forward given the scale that we are operating on. I’d like to know your thoughts

Vishal Anand Kanvaty

Yeah, I think definitely the regulations are required, especially because AI can go berserk. And, you know, there are, I think, like I gave you the example on the transactions. You know, today all the UPI transactions can get declined, you know. And that’s where we have a check where we say, you know, this is the only percentage that I can decline, even if I have to let go of the other transactions, right. So those safeguards are very much, very much required. And when this has to be across the ecosystem, I think, you know, the regulations are mandatory. And obviously it has to be consulted and we have to work with everyone. But it’s important. While all of us realize.

it’s a great opportunity the innovation can really scale up I think regulation is one thing that I think we have to really take it as part of the initiatives, embed into our systems and then take it forward otherwise the chances of this having a challenge to us is really high

Shantheri Mallaya

Fair enough, I think in an economy that is maturing regulatory intervention is an inevitability that we must welcome at some level so great discussion, I think this was fantastic, itching to ask you more but I think we’ll have to call to a close this discussion, thank you so much let’s put our hands together to our esteemed panelists this was really nice may I now invite Ms. Sarika Guliani who is the Senior Director, Head AI Technology in Industry 4 .0, Communications, Mobile Manufacturing and Language Technologies at FICCI Thank you so much Sarika, please

Sarika Guliani

first of all what an insightful session I would say that if I start capturing the thoughts I think 2 minutes and 36 seconds would not justify that one but overall if I talk about it starting with Andy you mentioning about the initiatives what Eduvi was able to take in through and how you are responsibly developing the content Pratibha talking about the art which is really interesting whether you talk about accountability responsibility and transparency and of course Amol mentioning about all the five layers and how the of course the responsible development of AI and the permutation needs to be done Dr. Satya no second thoughts to it and that again goes to the NPCI also the kind of work which the national carrier of India is doing it or NPCI is handling it it has to be a balance of the responsible AI and the efficiency and actually the act which can be taken and the agency itself and the so we left it on the note that what regulation is required while that’s a sentence would require another session on this because they would have a people who will talk about a light touch regulation versus a balanced regulation to something I’ll say that as part of FICCI and as part of the discussions what we were hearing it here we feel it that responsibility is not anymore a compliance check which is supposed to be there it’s a commitment of the technology that we should develop it which has a shared human values that is something what decisions we take now not in terms of the words what we are discussing here is not going to define our future it’s not something which is what we create we define what we choose to create is something what will get defined that is something is very important so the choice comes out from the whole thing you have heard the panelist talking about whether it was from the input side to the output side give a very good example by taking it through the whole process it needs to be taken so we simply feel it whether it’s any layer it has to be developed in a way keeping people in mind and the theme of the summit people planet and progress should be kept in mind while doing any technological innovation which is keeping the principles of responsible AI into the mind.

That is something which we highly feel it and support it and I think with that I like to thank our esteemed panelists. Thank you Andy of course for joining us today. Shantri for moderating it and well capturing in time. And of course our panelists Dr. Satya Ramaswamy, Chief Digital and Technology Officer Air India. Mr. Amol Deshpande, Chief Digital Officer Head of Innovation RPG Group. Mr. Vishal Kanwati, CTO, NPCI Ms. Pratibha Mohapatra, Managing Director Adobe and the of course my lovely audience through which we could sail through this and as part of FICCI we are thankful that we could have this joint session with Adobe, the team of Adobe and Nita and Nanya who had worked with my team and the people at the background who delivered it.

So thank you all for joining us. We don’t end the discussion here. We end the session here. FICCI is committed to get this dialogue further into action with the support of the players. And we look forward to your joining in that time. Thank you. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (7)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“The session was presented by Adobe in association with FICCI and titled “Responsible AI from Principles to Practice in Corporate India.””

The knowledge base explicitly states that the discussion titled “Responsible AI from Principles to Practice in Corporate India” was presented by Adobe in association with FICCI, confirming the partnership and session title [S2].

Confirmedhigh

“EU AI Act’s enforcement provisions take effect in August.”

EU AI Act enforcement begins in August, with oversight authorities appointed and penalties enforceable from 2 August (and the Act itself entered into force on 1 August 2024) [S72] and [S73].

Confirmedhigh

“Adobe leads the Content Provenance and Authenticity (C2PA) credentials, an open, free, cross‑industry standard that embeds provenance metadata directly into media files.”

C2PA is described as a technical standard that enables creators to attach cryptographically signed provenance metadata to media, and is supported by Adobe among other companies, confirming its open, cross-industry nature [S37] and [S76].

Confirmedmedium

“Amol Deshpande advocated a “bring‑your‑own‑AI” approach for organisational governance.”

The discussion notes that the phrase “bring your own AI” was highlighted and praised during the session, confirming its use by speakers such as Amol Deshpande [S1].

Additional Contextmedium

“India’s new IT rules on SGI are being implemented, requiring platforms to label synthetic content and act on it.”

India has introduced rules that obligate social-media platforms to label AI-generated/deep-fake content and remove flagged material within three hours, providing concrete detail on the regulatory environment referenced in the report [S79].

External Sources (82)
S1
Responsible AI in India Leadership Ethics & Global Impact part1_2 — -Vishal Anand Kanvaty- Chief Technology Officer, National Payments Corporation of India (NPCI)
S2
Responsible AI in India Leadership Ethics & Global Impact — -Vishal Anand Kanwati- Chief Technology Officer, National Payments Corporation of India (NPCI)
S3
Responsible AI in India Leadership Ethics & Global Impact part1_2 — The discussion concluded with Sarika Guliani from FICCI emphasising that “responsibility is not anymore a compliance che…
S4
Responsible AI in India Leadership Ethics & Global Impact — The session concluded with FICCI’s commitment to continue translating discussions into actionable frameworks. Sarika Gul…
S5
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact-part1_2 — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S6
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Dr. Satya Ramaswamy- Vishal Anand Kanvaty – Vishal Anand Kanvaty- Dr. Satya Ramaswamy Dr. Satya focuses on balancing…
S7
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S8
Responsible AI in India Leadership Ethics & Global Impact part1_2 — -Shantheri Mallaya- Editor at Economic Times, panel moderator
S9
Responsible AI in India Leadership Ethics & Global Impact — -Shantari Malaya- Editor at Economic Times, panel moderator
S10
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me qu…
S11
Responsible AI in India Leadership Ethics & Global Impact — -Prativa Mohapatra- Vice President and Managing Director of Adobe India
S12
Driving U.S. Innovation in Artificial Intelligence — 2. Amy Cohen – Executive Director, National Association of State Election Directors 3. Andy Parsons – Senior Director of…
S13
Responsible AI in India Leadership Ethics & Global Impact — -Andy Parsons- Global Head for Content Authenticity at Adobe, runs the Content Authenticity Initiative at Adobe
S14
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — And last, enterprises. Like many of yours in this room, I’m sure you’ve all heard the phrase, that are willing and excit…
S15
Responsible AI in India Leadership Ethics & Global Impact part1_2 — – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S16
Responsible AI in India Leadership Ethics & Global Impact — – Andy Parsons- Amol Deshpande – Andy Parsons- Prativa Mohapatra- Amol Deshpande – Prativa Mohapatra- Amol Deshpande
S17
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S18
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S19
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S20
Closing remarks – Charting the path forward — Importance of moving from principles to practical implementation
S21
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Safe SAIF, secure AI framework is something we have shared outside. And it is important to understand supply chain risk….
S22
Ethics and AI | Part 6 — A significant focus of the Act is placed on transparency. It mandates that users be informed when they are interacting w…
S23
Building the Next Wave of AI_ Responsible Frameworks & Standards — This comment fundamentally shifted the discussion from viewing responsibility as a constraint on innovation to seeing it…
S24
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion concluded that India’s opportunity in AI and semiconductors is real but time-bound, requiring decisive ex…
S25
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — This comment shifted the discussion from high-level policy frameworks to concrete, relatable concerns. It established a …
S26
Toward Collective Action_ Roundtable on Safe & Trusted AI — And AI is allowing this to happen at a scale that at the moment we already see disruptions, but I think there’s real ris…
S27
AI as critical infrastructure for continuity in public services — This comment provides a concrete, measurable example of how AI exclusion occurs, moving beyond abstract discussions of i…
S28
The rise and risks of synthetic media — The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in he…
S29
AI slop’s meteoric rise and the impact of synthetic content in 2026 — In December 2025, the Macquarie Dictionary, Merriam-Webster, and the American Dialect Society named ‘slop’ as the Word o…
S30
Meta India VP highlights AI’s role in ensuring user safety against misinformation — Meta India Vice President Sandhya Devanathan said the companyuses AI to combat misinformationwhile stressing that it wil…
S31
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste. Honorable Minister Vaishnav, Your Excellency’s colleagues, let me begin by thanking our host, Prime Minister Mo…
S32
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S33
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S34
Conversational AI in low income & resource settings | IGF 2023 — Finding the right balance between regulation and innovation is crucial. By addressing these issues, AI can play a signif…
S35
Open Forum #17 AI Regulation Insights From Parliaments — Balancing Innovation and Regulation There’s a critical balance needed between regulation and innovation incentives. Cou…
S36
What is it about AI that we need to regulate? — Global AI Governance Initiatives: Directions and TrajectoriesGlobal AI governance initiatives are heading toward multipl…
S37
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges Charlie Halford…
S38
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Nadja Blagojevic: Yes, very happy to. And thank you so much for having Google here. We’re very happy to be speaking with…
S39
WSIS Action Line C7 E-environment — – **Governance and Implementation Challenges**: Common barriers across all initiatives including unclear policy framewor…
S40
Advancing Scientific AI with Safety Ethics and Responsibility — -Global South Perspectives and Adaptation: A significant focus was placed on how emerging scientific powers can shape AI…
S41
Laying the foundations for AI governance — – Industry attitudes toward regulation: whether companies genuinely want regulation or resist it due to power concentrat…
S42
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Andy Parsons positioned regulation as helping enterprises move from reactive to proactive responsible AI adoption. The u…
S43
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Agent Standards Initiative focuses on industry-driven, consensus-based voluntary standards rather than regulation There…
S44
Industry leaders partner to promote responsible AI development — Anthropic, Google, Microsoft, and OpenAI, four of the most influential AI companies,have joined to establishtheFrontier …
S45
Building the Next Wave of AI_ Responsible Frameworks & Standards — I think there is a significant role the governments, innovation hubs, academia, and startups have to play in developing …
S46
Safe and Responsible AI at Scale Practical Pathways — A sustainable data economy requires clear incentive models with guaranteed trust, value creation, and exchangeability me…
S47
AI for agriculture Scaling Intelegence for food and climate resiliance — The minister emphasizes that artificial intelligence in agriculture should rest on reliable data sources, be governed by…
S48
Opening address of the co-chairs of the AI Governance Dialogue — Infrastructure | Legal and regulatory International technical standards and their role to make sure that policy and reg…
S49
Responsible AI in India Leadership Ethics & Global Impact — Aviation industry’s safety-critical nature provides embedded concepts of human-in-the-loop control and regulatory compli…
S50
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S51
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Matilda Road:Mathilda, over to you. Thank you, Florian. Good morning everyone. It’s great to see so many of you here and…
S52
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — in the world in terms of policy and regulation. When Vision 2030 was launched by His Royal Highness the Crown Prince, we…
S53
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S54
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S55
WS #283 AI Agents: Ensuring Responsible Deployment — User control and human oversight are essential safeguards, particularly for high-impact decisions that are difficult to …
S56
Agentic AI in Focus Opportunities Risks and Governance — All panelists emphasized the critical importance of enterprise guardrails and human oversight. They stressed that while …
S57
Policy Guidelines — – ◾ Section 1: The Development of Open Access to Scientific Information and Research , gives an overview of the definiti…
S58
Is the AI bubble about to burst? Five causes and five scenarios — Historically,open systems often win in the long run– think of the internet, HTML, and Linux. They become standards, attr…
S59
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The level of consensus among the speakers was relatively high, particularly on the benefits and potential applications o…
S60
Comprehensive Report: European Approaches to AI Regulation and Governance — And how would the downstream provider offering then this final system to the border control or to the, for instance, to …
S61
Google to require disclosure of AI-generated content in political ads — Googleis implementing new rules requiring political ads on its platforms to disclose when images and audio are generated…
S62
Day 0 Event #261 Navigating Ethical Dilemmas in AI-Generated Content — Human rights | Legal and regulatory | Sociocultural Information Integrity and Human Rights Framework There must be dis…
S63
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — – Ioanna Ntinou- Mark Gachara Example of energy efficiency passes for houses in Germany and EU that are obligatory, mak…
S64
Responsible AI in India Leadership Ethics & Global Impact — And our customers are international, and when we operate in this international geographies, we have to comply with the a…
S65
Responsible AI in India Leadership Ethics & Global Impact part1_2 — This comment reframes the entire discussion from theoretical principles to practical implementation. It shifts the focus…
S66
What is it about AI that we need to regulate? — Global AI Governance Initiatives: Directions and TrajectoriesGlobal AI governance initiatives are heading toward multipl…
S67
Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology — # Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges Charlie Halford…
S68
From principles to practice: Governing advanced AI in action — – **Implementation Challenges Across Jurisdictions**: Participants highlighted the tension between rapid technological a…
S69
WSIS Action Line C7 E-environment — – **Governance and Implementation Challenges**: Common barriers across all initiatives including unclear policy framewor…
S70
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Examples of sectoral self-regulations are in the case of Mauritius in the perspective of increasing the capacity of exis…
S71
How AI in 2026 will transform management roles and organisational design — In 2026, AI will transform management structures and automate tasks as companies strive to demonstrate real value. By 20…
S72
EU AI Act oversight and fines begin this August — A new phase of the EU AI Acttakes effect on 2 August, requiring member states to appoint oversight authorities and enfor…
S73
EU AI Act officially comes into force — The world’s first comprehensive AI law, known as the EU AI Act, officially came intoforceon 1 August 2024, marking a sig…
S74
Keynotes — Legal and regulatory | Human rights O’Flaherty calls for the EU to maintain its commitment to enforcing the Digital Ser…
S75
EU AI Act published in Official Journal, initiating countdown to legal deadlines — The European Union has finalised its AI Act, a significant regulatory framework aimed at governing the use of AI within …
S76
Certifying humanity: Labeling content amid AI flood — These debates are no longer theoretical. Provenance-based initiatives such as theContent Authenticity Initiative (C2PA),…
S77
Day 0 Event #12 Tackling Misinformation with Information Literacy — Zoe Darma: to start with a quiz and that there are no wrong answers, but there actually are. There actually are right …
S78
Day 0 Event #265 Using Digital Platforms to Promote Info Integrity — Gisella Lomax connected online misinformation to devastating real-world consequences: “Information risks such as hate sp…
S79
India enforces a three-hour removal rule for AI-generated deepfake content — Strict new ruleshave been introducedin India for social media platforms in an effort to curb the spread of AI-generated …
S80
Harnessing Collective AI for India’s Social and Economic Development — <strong>Moderator:</strong> sci -fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S81
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S82
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Andy Parsons
7 arguments190 words per minute2010 words632 seconds
Argument 1
Principles‑to‑practice imperative (Andy Parsons)
EXPLANATION
Andy stresses that responsible AI must move beyond abstract principles and become a demonstrable part of corporate compliance and strategy. He frames this shift as essential for 2026, when responsibility will be both a regulatory requirement and a business opportunity.
EVIDENCE
He notes that responsible AI will stop being a slide in a deck and become part of a compliance strategy and an important opportunity, and that the panel’s theme is “the shift from principles to provable practice” [33-34]. He also points out that responsibility will become a discipline rather than a mere policy statement [32-33].
MAJOR DISCUSSION POINT
Principles‑to‑practice imperative (Andy Parsons)
Argument 2
C2PA content credentials as an open, interoperable standard (Andy Parsons)
EXPLANATION
Andy describes the C2PA content credentials as an open, cross‑industry standard that attaches provenance information to any media asset. The standard is designed to be freely adoptable and interoperable across tools and platforms.
EVIDENCE
He explains that five years of work produced the open C2PA standard, that a C2PA symbol appears on LinkedIn, and that the credentials provide transparent context for videos, audio, or images [61-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Andy’s description of C2PA matches the external mention of an open, free C2PA content credentials standard developed five years ago [S1][S2].
MAJOR DISCUSSION POINT
C2PA content credentials as an open, interoperable standard (Andy Parsons)
AGREED WITH
Prativa Mohapatra, Vishal Anand Kanvaty, Moderator, Sarika Guliani
DISAGREED WITH
Amol Deshpande, Prativa Mohapatra
Argument 3
Need for industry‑wide, standards‑based infrastructure (Andy Parsons)
EXPLANATION
Andy argues that a shared, standards‑based infrastructure for content trust is essential and must not be owned by any single company. He calls for an open, interoperable layer that any organization can adopt to embed transparency into AI‑generated content.
EVIDENCE
He highlights a cross-industry coalition that includes Adobe, Microsoft, BBC, OpenAI, Sony, Qualcomm and others, creating an infrastructure layer for content trust that is standards-based, non-proprietary, and available to everyone [66-70] and stresses that this philosophy is especially important for India [71-73].
MAJOR DISCUSSION POINT
Need for industry‑wide, standards‑based infrastructure (Andy Parsons)
AGREED WITH
Prativa Mohapatra, Amol Deshpande, Vishal Anand Kanvaty
DISAGREED WITH
Amol Deshpande, Prativa Mohapatra
Argument 4
EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons)
EXPLANATION
Andy points out that emerging regulatory regimes—such as the EU AI Act, California’s AI law, and India’s new IT rules—are compelling organizations to embed responsible AI practices now. He frames regulation as a catalyst for good practices rather than a purely punitive force.
EVIDENCE
He cites the EU AI Act’s enforcement provisions taking effect in August, the first U.S. state law in California, and India’s new IT rules on SGI, noting that India is actively shaping its own path [25-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The focus on the EU AI Act’s transparency requirements and balanced regulation is reflected in the EU AI Act transparency provisions [S22] and discussions on balancing regulation and innovation [S34][S35].
MAJOR DISCUSSION POINT
EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons)
AGREED WITH
Vishal Anand Kanvaty, Dr. Satya Ramaswamy, Sarika Guliani
DISAGREED WITH
Vishal Anand Kanvaty, Sarika Guliani
Argument 5
Embedding responsible AI at the core of products is essential rather than treating it as a bolt‑on feature
EXPLANATION
Andy argues that responsible AI must be baked into the core architecture of tools, not added later as an afterthought, to ensure genuine trust and provenance.
EVIDENCE
He explains that five years ago Adobe decided that responsible AI via content transparency had to be baked into the core of products like Photoshop and Premiere, not grafted on as a feature [57-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on baking transparency into tools rather than grafting it on is echoed in external notes about core integration of content transparency [S1][S5].
MAJOR DISCUSSION POINT
Core integration of responsible AI into products (Andy Parsons)
Argument 6
The AI trust crisis is real, concrete and impacts everyday users and businesses
EXPLANATION
Andy points out that the trust crisis caused by AI‑generated content is a tangible, daily problem affecting consumers, children, and enterprises across India’s diverse linguistic landscape.
EVIDENCE
He describes the trust crisis with AI as real, concrete, happening every day to children, businesses and individuals, especially across India’s cultural and linguistic diversity [37-39].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The real-world trust erosion and synthetic media risks are discussed in roundtable remarks on trust breakdown [S26] and the rise of synthetic media [S28][S29].
MAJOR DISCUSSION POINT
Real‑world AI trust crisis (Andy Parsons)
Argument 7
India’s massive digital population makes synthetic content and AI‑generated misinformation operational risks for businesses
EXPLANATION
Andy highlights that with hundreds of millions of daily digital consumers, AI‑generated misinformation is not abstract but an operational risk that enterprises must manage.
EVIDENCE
He notes that India has the world’s largest digital population, and that synthetic content and AI-generated misinformation are real operational risks for businesses [46-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s large digital user base and misinformation challenges are highlighted in the Meta India VP remarks on AI combating misinformation [S30] and the broader risks of synthetic media [S28].
MAJOR DISCUSSION POINT
Operational risks of AI‑generated misinformation in India (Andy Parsons)
S
Shantheri Mallaya
3 arguments159 words per minute1631 words611 seconds
Argument 1
Translating principles into enterprise strategy (Shantheri Mallaya)
EXPLANATION
Shantheri frames the central challenge as moving responsible‑AI principles—fairness, accountability, transparency, privacy, inclusivity—into concrete enterprise strategy frameworks. She asks panelists to explain how these values can be operationalised in real business contexts.
EVIDENCE
In her opening she asks how responsible-AI principles will be realistically translated into enterprise strategy frameworks and how organisations will go about it [144-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from principles to practice is also noted in the closing remarks [S20] and the responsible AI as an enabler discussion [S23].
MAJOR DISCUSSION POINT
Translating principles into enterprise strategy (Shantheri Mallaya)
Argument 2
India is positioning itself as a global leader in trustworthy and inclusive AI
EXPLANATION
Shantheri highlights that India is charting the course for the world in building trustworthy and inclusive AI, indicating a leadership role on the international stage.
EVIDENCE
She remarks that India is really charting the course for the world and that building trustworthy and inclusive AI is a momentous time for the country [135-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s leadership in trustworthy AI is highlighted in summit remarks on inclusive AI development [S31] and the global vision plenary noting India’s role [S32].
MAJOR DISCUSSION POINT
India as a global leader in trustworthy and inclusive AI (Shantheri Mallaya)
Argument 3
Balancing AI‑driven innovation with regulation and user experience is essential
EXPLANATION
She stresses the need to balance rapid AI innovation with regulatory compliance and maintaining a high-quality user and customer experience.
EVIDENCE
She asks how to balance AI-driven innovation, regulation, accountability, operational efficiency, and user experience within large-scale aviation operations [245-249].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for balance between regulation and innovation is discussed in the IGF session on conversational AI [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Balancing innovation, regulation and user experience (Shantheri Mallaya)
S
Sarika Guliani
2 arguments142 words per minute590 words249 seconds
Argument 1
Commitment beyond compliance, embedding human values (Sarika Guliani)
EXPLANATION
Sarika argues that responsible AI should be seen as a commitment to shared human values rather than a mere compliance checkbox. She stresses that technology choices now shape the future, and that ethical considerations must be embedded from the outset.
EVIDENCE
She states that responsibility is no longer a compliance check but a commitment of technology with shared human values, and that the choice of what to create defines our future, not just words on a slide [379-382].
MAJOR DISCUSSION POINT
Commitment beyond compliance, embedding human values (Sarika Guliani)
AGREED WITH
Andy Parsons, Prativa Mohapatra, Vishal Anand Kanvaty, Moderator
Argument 2
Regulation should be balanced, avoiding overly heavy‑handed approaches
EXPLANATION
Sarika argues that while regulation is necessary, it should be proportionate and not stifle innovation, advocating for a light‑touch regulatory approach where appropriate.
EVIDENCE
She notes that the discussion would need another session to compare light-touch versus balanced regulation, indicating a preference for proportionate regulatory frameworks [379-382].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balanced regulatory approaches are advocated in the IGF discussion on regulation vs innovation [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Need for balanced, proportionate regulation (Sarika Guliani)
P
Prativa Mohapatra
6 arguments156 words per minute1126 words432 seconds
Argument 1
Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra)
EXPLANATION
Prativa explains Adobe’s internal “ART” philosophy—Accountability, Responsibility, Transparency—and shows how it is baked into its generative AI tool Firefly and the Acrobat Assistant. This ensures that outputs are traceable, lawful, and trustworthy.
EVIDENCE
She describes how Firefly embeds a “nutrition label” that guarantees lawful, non-infringing output, and how Acrobat Assistant follows the same provenance principles, allowing users to trace the origin of content and ensure compliance [197-210] and [222-228].
MAJOR DISCUSSION POINT
Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra)
Argument 2
Product‑level governance methodology with hundreds of checks (Prativa Mohapatra)
EXPLANATION
Prativa notes that every new Adobe product undergoes a rigorous, secure development methodology that includes hundreds of validation steps, embedding responsible‑AI principles directly into the product lifecycle.
EVIDENCE
She states that each new product goes through a very strong, secure methodology with hundreds of steps, ensuring principles are embedded into creation processes [206-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The mention of a strong, secure methodology with hundreds of validation steps aligns with external commentary on product governance processes [S5].
MAJOR DISCUSSION POINT
Product‑level governance methodology with hundreds of checks (Prativa Mohapatra)
Argument 3
Risk of a divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra)
EXPLANATION
Prativa warns that a gap could emerge between large AI developers and smaller firms that lack resources, emphasizing the need for free, open frameworks that all can adopt. She cites Adobe’s early C2PA work as an example of making standards freely available.
EVIDENCE
She highlights the stark divide between big and small enterprises, the importance of free, accessible frameworks, and references Adobe’s 2019 content authentication initiative as a pioneering open effort [297-304] and notes that creators must continue providing such frameworks [305-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for open, free frameworks for all enterprises echo the discussion of open standards and inclusive AI leadership [S23][S31].
MAJOR DISCUSSION POINT
Risk of a divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra)
Argument 4
Large players must create reusable, open frameworks that MSMEs can adopt (Prativa Mohapatra)
EXPLANATION
Prativa argues that large enterprises should develop reusable, open‑source frameworks that smaller businesses can leverage, ensuring responsible AI does not become a luxury only for the well‑resourced. She calls for ongoing collaboration among technology creators to extend methodologies to the broader ecosystem.
EVIDENCE
She states that large enterprises must create frameworks that MSMEs can adopt, and that creators need to keep building methods for others to use, emphasizing the need for open, reusable solutions [315-317].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation for large firms to provide reusable, open frameworks matches the emphasis on open standards and inclusive AI development [S23][S31].
MAJOR DISCUSSION POINT
Large players must create reusable, open frameworks that MSMEs can adopt (Prativa Mohapatra)
Argument 5
Legal, compliance and ethics teams must redesign processes to embed AI governance
EXPLANATION
Prativa emphasizes that enterprises need to re‑opt and redesign their legal, compliance and ethical processes to incorporate AI governance throughout the organization.
EVIDENCE
She states that every organization has legal and compliance teams that must be re-opted and re-designed to address AI compliance, ensuring all three pillars-legal, compliance and ethics-are covered [234-237].
MAJOR DISCUSSION POINT
Re‑designing legal and compliance processes for AI governance (Prativa Mohapatra)
Argument 6
AI governance requires integration of people, process and technology, reflecting the ART philosophy
EXPLANATION
She outlines that responsible AI must combine accountability, responsibility, and transparency across people, processes, and technology, mirroring the ART framework used at Adobe.
EVIDENCE
She notes that enterprises need legal, compliance, and ethical strategies together, and that AI governance must tick all three-people, process, technology-to be ready for the future [233-236].
MAJOR DISCUSSION POINT
Holistic integration of people, process and technology in AI governance (Prativa Mohapatra)
A
Amol Deshpande
5 arguments181 words per minute759 words251 seconds
Argument 1
Orchestration across all AI layers; people‑centric governance (Amol Deshpande)
EXPLANATION
Amol stresses that responsible AI must be orchestrated across every layer of the AI stack and that people are a critical stakeholder. He advocates a “bring‑your‑own‑AI” approach with guardrails, rather than a one‑size‑fits‑all solution.
EVIDENCE
He explains that responsibility must exist at every AI layer, that people are a very important stakeholder, and that a scalable, safe environment with guardrails is essential, describing a “bring your own AI” scenario and the need for templates [162-166] and [169-176] and [177-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guardrails across the AI stack and people-centric governance are highlighted in the generative AI guardrails discussion [S33].
MAJOR DISCUSSION POINT
Orchestration across all AI layers; people‑centric governance (Amol Deshpande)
Argument 2
Awareness → action → demonstration cycle; role of industry bodies in disseminating frameworks (Amol Deshpande)
EXPLANATION
Amol outlines a three‑step process—awareness, action, demonstration—to embed responsible AI, and highlights the pivotal role of industry bodies in spreading best‑practice frameworks across sectors.
EVIDENCE
He states that the first step is awareness, followed by action, then demonstration, and that industry bodies (e.g., FICCI) are crucial for disseminating learnings and templates across the value chain [332-340] and [341-347].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The three-step cycle and role of industry bodies are reflected in the IGF roundtable on safe AI and the open forum on regulation insights [S34][S35].
MAJOR DISCUSSION POINT
Awareness → action → demonstration cycle; role of industry bodies in disseminating frameworks (Amol Deshpande)
DISAGREED WITH
Vishal Anand Kanvaty, Sarika Guliani
Argument 3
RPG Group’s need for flexible, scalable AI governance across diverse business units (Amol Deshpande)
EXPLANATION
Amol describes the RPG Group’s challenge of governing AI across a heterogeneous conglomerate, emphasizing that a single solution cannot fit all units and that flexible, scalable guardrails are required.
EVIDENCE
He notes the need for a scalable, safe environment with guardrails, that one size doesn’t fit all, and that templates are being exercised within the enterprise across diverse business units [168-180] and [181-184].
MAJOR DISCUSSION POINT
RPG Group’s need for flexible, scalable AI governance across diverse business units (Amol Deshpande)
Argument 4
Industry bodies help cascade standards and templates to MSMEs lacking resources (Amol Deshpande)
EXPLANATION
Amol argues that industry associations can bridge the resource gap for MSMEs by sharing standards, templates, and best practices, enabling smaller firms to adopt responsible AI without building frameworks from scratch.
EVIDENCE
He mentions that organizations like FICCI can help cascade frameworks, that MSMEs lack access to such information, and that industry bodies are critical for sharing learnings across sectors [344-347].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry associations bridging resource gaps for MSMEs are discussed in the IGF session on regulation and innovation [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Industry bodies help cascade standards and templates to MSMEs lacking resources (Amol Deshpande)
Argument 5
Enterprises need a scalable, safe AI environment with built‑in guardrails
EXPLANATION
Amol stresses that large organisations must provide a scalable environment where AI operates safely, with guardrails that protect against misuse while allowing flexibility.
EVIDENCE
He describes the need for a scalable, safe environment protected with guardrails as a key requirement for the enterprise [180-182].
MAJOR DISCUSSION POINT
Scalable safe AI environment with guardrails (Amol Deshpande)
D
Dr. Satya Ramaswamy
4 arguments183 words per minute1035 words338 seconds
Argument 1
Global regulatory compliance coexists with innovation; safety‑critical aviation example (Dr. Satya Ramaswamy)
EXPLANATION
Satya explains that Air India must comply with a patchwork of international regulations (US, EU, India) while still innovating with AI. He stresses that safety‑critical aviation standards drive rigorous compliance without stifling innovation.
EVIDENCE
He notes Air India’s operations across multiple jurisdictions, the need to obey DGCA, FAA, and EU regulators, and that compliance does not constrain Indian innovation, citing the partnership with Adobe and the launch of a global AI virtual assistant [351-364].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing global regulatory compliance with innovation is discussed in the IGF session on regulation and innovation [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Global regulatory compliance coexists with innovation; safety‑critical aviation example (Dr. Satya Ramaswamy)
Argument 2
Air India’s generative‑AI virtual assistant with safety guardrails and continuous monitoring (Dr. Satya Ramaswamy)
EXPLANATION
Satya details Air India’s AI‑driven virtual assistant that handles millions of customer queries, operates with a 97 % autonomous success rate, and incorporates multiple safety guardrails, continuous monitoring, and user feedback loops to prevent misuse.
EVIDENCE
He describes the launch in May 2023, handling 13.5 million queries, 97 % autonomous handling, safety knobs, jailbreak prevention, real-time monitoring, and the use of generative AI to watch its own performance, with Adobe providing indemnity [257-270] and [261-268].
MAJOR DISCUSSION POINT
Air India’s generative‑AI virtual assistant with safety guardrails and continuous monitoring (Dr. Satya Ramaswamy)
Argument 3
Safety‑critical aviation demands continuous human‑in‑the‑loop oversight of AI systems
EXPLANATION
Satya explains that because aviation is safety‑critical, AI systems must always allow a human operator to intervene instantly, ensuring safety overrides automated decisions.
EVIDENCE
He describes the red button on the joystick that lets a pilot take control at any moment if the autopilot behaves incorrectly, illustrating the human-in-the-loop safety mechanism [360-362].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop oversight for safety‑critical AI (Dr. Satya Ramaswamy)
Argument 4
Partnerships with technology providers like Adobe provide indemnity and confidence in AI deployments
EXPLANATION
Satya highlights that collaborations with firms such as Adobe, which offer indemnity, give Air India confidence to adopt AI while managing risk.
EVIDENCE
He notes that Adobe provides full indemnity in case of problems, which gives a lot of confidence in managing AI risk [269-270].
MAJOR DISCUSSION POINT
Strategic tech partnerships to mitigate AI risk (Dr. Satya Ramaswamy)
V
Vishal Anand Kanvaty
2 arguments184 words per minute582 words189 seconds
Argument 1
Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty)
EXPLANATION
Vishal argues that regulatory frameworks are necessary because unchecked AI can produce harmful outcomes; safeguards embedded in law protect the ecosystem and maintain trust.
EVIDENCE
He states that regulations are required because AI can go berserk, that safeguards are mandatory to prevent such behavior, and that regulations must be embedded into systems and consulted with stakeholders [370-376].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of regulation to prevent uncontrolled AI behavior is highlighted in the IGF discussion on regulation balance [S34] and the open forum on AI regulation insights [S35].
MAJOR DISCUSSION POINT
Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty)
DISAGREED WITH
Amol Deshpande, Sarika Guliani
Argument 2
NPCI’s AI for fraud detection prioritising low false‑positives and transparent explanations (Vishal Anand Kanvaty)
EXPLANATION
Vishal explains NPCI’s AI‑driven fraud detection system, which aims to keep false‑positive rates low while providing transparent, user‑facing explanations for declined transactions, thereby building trust in the payment ecosystem.
EVIDENCE
He notes the priority of minimizing false positives, the development of a language model that can explain why a transaction was declined, and that this transparency aligns with RBI’s responsible-AI framework, helping maintain trust in the payment system [286-294] and [295-301].
MAJOR DISCUSSION POINT
NPCI’s AI for fraud detection prioritising low false‑positives and transparent explanations (Vishal Anand Kanvaty)
M
Moderator
3 arguments132 words per minute132 words59 seconds
Argument 1
Responsible deployment outweighs speed of AI adoption
EXPLANATION
The moderator stresses that while AI can accelerate innovation, the priority must be on deploying it responsibly rather than merely adopting it quickly. Speed without responsibility could undermine trust and safety.
EVIDENCE
He notes that the real differentiator is not how quickly AI is adopted but how responsibly it is deployed, emphasizing the need for responsible AI over rapid adoption [3-4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on responsible deployment over speed mirrors the closing remarks on moving from principles to practice [S20].
MAJOR DISCUSSION POINT
Responsible deployment outweighs speed of AI adoption (Moderator)
Argument 2
Trust, transparency and accountability are foundational for AI in corporate India
EXPLANATION
The moderator frames trust, transparency, and accountability as non‑optional, foundational elements that must underpin AI initiatives in Indian enterprises.
EVIDENCE
He declares that trust, transparency and accountability are no longer optional and are foundational for the discussion on responsible AI [5-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Foundational importance of trust, transparency and accountability is reflected in the EU AI Act transparency focus [S22] and roundtable concerns about trust breakdown [S26].
MAJOR DISCUSSION POINT
Foundational role of trust, transparency and accountability (Moderator)
Argument 3
The session aims to advance safe and trusted AI in the corporate landscape
EXPLANATION
The moderator sets the purpose of the session as focusing on advancing safe, trusted AI practices within corporations.
EVIDENCE
He states that the conversation will center on advancing safe and trusted AI in the corporate landscape [7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session’s goal aligns with the overall theme of advancing safe, trusted AI in the responsible AI discussions [S20][S23].
MAJOR DISCUSSION POINT
Advancing safe and trusted AI in corporate sector (Moderator)
Agreements
Agreement Points
Transparency and provenance of AI‑generated content must be embedded in products and made openly verifiable.
Speakers: Andy Parsons, Prativa Mohapatra, Vishal Anand Kanvaty, Moderator, Sarika Guliani
C2PA content credentials as an open, interoperable standard (Andy Parsons) Adobe’s “ART” philosophy with nutrition‑label style provenance in Firefly (Prativa Mohapatra) NPCI’s transparent explanations for declined transactions (Vishal Anand Kanvaty) Trust, transparency and accountability are foundational (Moderator) Commitment beyond compliance, embedding human values (Sarika Guliani)
All speakers stress that responsible AI requires concrete, transparent provenance mechanisms-whether via open standards like C2PA, Adobe’s built-in nutrition labels, or transaction-level explanations-so that users can see how content or decisions are generated and trust the system [5-6][61-66][74-76][209-210][293-294][379-382].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy trends emphasize mandatory disclosure of AI-generated media, as seen in Google’s upcoming political-ad rules requiring clear labeling of synthetic content [S61] and broader calls for algorithmic transparency in public-interest frameworks [S62]; NPCI’s own transparency-by-design approach for its language models reinforces this direction [S49].
Open, standards‑based infrastructure and reusable frameworks are essential for scaling responsible AI across industries.
Speakers: Andy Parsons, Prativa Mohapatra, Amol Deshpande, Vishal Anand Kanvaty
Need for industry‑wide, standards‑based infrastructure (Andy Parsons) Risk of divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra) Industry bodies help cascade standards and templates to MSMEs (Amol Deshpande) RBI framework and transparent AI models as a reusable foundation (Vishal Anand Kanvaty)
The panel concurs that responsible AI cannot rely on proprietary solutions; it must be built on open, cross-industry standards and reusable frameworks that can be adopted by both large firms and MSMEs, with industry bodies playing a key dissemination role [66-70][297-304][332-340][344-347][292-301].
POLICY CONTEXT (KNOWLEDGE BASE)
International bodies promote voluntary, consensus-driven standards (e.g., the Agent Standards Initiative) to foster interoperable, responsible AI ecosystems [S43]; the AI Standards Hub and multistakeholder dialogues stress the need for open technical standards that remain adaptable to regulatory needs [S48][S51].
Regulatory frameworks are a catalyst and necessary safeguard for responsible AI, but should be balanced and proportionate.
Speakers: Andy Parsons, Vishal Anand Kanvaty, Dr. Satya Ramaswamy, Sarika Guliani
EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons) Regulation is essential to prevent AI “berserk” behaviour (Vishal Anand Kanvaty) Global regulatory compliance coexists with innovation (Dr. Satya Ramaswamy) Regulation should be balanced, avoiding overly heavy‑handed approaches (Sarika Guliani)
All agree that regulation is indispensable-acting as a catalyst, ensuring safety, and providing a level playing field-while emphasizing the need for proportionate rules that do not stifle innovation [25-27][370-376][351-364][379-382].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry perspectives acknowledge that well-designed regulation can shift firms from reactive to proactive AI governance, providing clarity and urgency for responsible practices [S42]; however, scholars warn against over-regulation and advocate proportionate, context-sensitive rules that complement existing laws [S53][S45].
Human‑in‑the‑loop oversight and guardrails are critical, especially for safety‑critical applications.
Speakers: Dr. Satya Ramaswamy, Amol Deshpande
Human‑in‑the‑loop oversight for safety‑critical aviation AI (Dr. Satya Ramaswamy) Enterprises need scalable, safe AI environments with built‑in guardrails (Amol Deshpande)
Both speakers highlight that AI systems must include real-time human oversight and robust guardrails to ensure safety, whether in aviation or broader enterprise contexts [360-362][180-182].
POLICY CONTEXT (KNOWLEDGE BASE)
Aviation safety standards embed human-in-the-loop controls and regulatory compliance, illustrating the necessity of oversight for high-risk AI systems [S49]; similar principles are echoed in broader AI governance discussions emphasizing user control and human accountability [S55][S56][S54].
Balancing rapid AI innovation with regulatory compliance and user experience is essential.
Speakers: Shantheri Mallaya, Dr. Satya Ramaswamy, Vishal Anand Kanvaty
Balancing AI‑driven innovation with regulation and user experience (Shantheri Mallaya) Global regulatory compliance coexists with innovation; safety does not constrain Indian innovation (Dr. Satya Ramaswamy) Low false‑positive rates and transparent explanations balance fraud detection with user trust (Vishal Anand Kanvaty)
The moderator and panelists agree that AI deployment must simultaneously pursue speed, compliance, and a high-quality user experience, using mechanisms such as transparent explanations and safety guardrails [245-249][351-364][286-294].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses stress the need to align fast-moving AI development with compliance mechanisms that do not hinder user experience, advocating incentive models that build trust while preserving innovation speed [S45][S46][S53].
Similar Viewpoints
Both stress that the priority is responsible AI deployment rather than merely rapid adoption, framing responsibility as a strategic imperative [3-4][33-34][5-6].
Speakers: Andy Parsons, Moderator
Responsible deployment outweighs speed of AI adoption (Moderator) Principles‑to‑practice imperative (Andy Parsons)
Both highlight the danger that responsible AI becomes a luxury for large firms and argue that industry bodies must provide open frameworks to enable MSMEs [297-304][332-340][344-347].
Speakers: Prativa Mohapatra, Amol Deshpande
Risk of divide between large and small enterprises; need for free, accessible frameworks (Prativa Mohapatra) Industry bodies help cascade standards and templates to MSMEs (Amol Deshpande)
Both see regulation as indispensable for safety and trust, even in highly regulated sectors like aviation and payments [351-364][370-376].
Speakers: Vishal Anand Kanvaty, Dr. Satya Ramaswamy
Regulation is essential to prevent AI “berserk” behaviour (Vishal Anand Kanvaty) Global regulatory compliance coexists with innovation; safety‑critical aviation demands compliance (Dr. Satya Ramaswamy)
Unexpected Consensus
Both a payment‑system leader (NPCI) and an airline (Air India) emphasize that AI safety must be achieved without compromising user experience, using transparent explanations and human oversight.
Speakers: Vishal Anand Kanvaty, Dr. Satya Ramaswamy
NPCI’s AI for fraud detection prioritising low false‑positives and transparent explanations (Vishal Anand Kanvaty) Air India’s generative‑AI virtual assistant with safety guardrails, continuous monitoring and human feedback (Dr. Satya Ramaswamy)
Despite operating in very different domains, both speakers converge on a model where AI safety, transparency, and user-centric design are jointly pursued, an alignment not explicitly anticipated at the start of the session [257-270][286-294].
POLICY CONTEXT (KNOWLEDGE BASE)
NPCI’s implementation of transparent small language models and Air India’s adherence to safety-critical, human-in-the-loop standards exemplify sector-specific applications of responsible AI that prioritize user experience alongside safety [S49].
An engineer (Andy Parsons) and a senior policy‑focused moderator both frame responsible AI as a strategic business opportunity rather than a compliance burden.
Speakers: Andy Parsons, Moderator
Embedding responsible AI as a leadership and operating discipline and opportunity (Andy Parsons) Trust, transparency and accountability are foundational for corporate AI (Moderator)
It is notable that a technical leader and the session moderator share a business-oriented view of responsible AI, treating it as a growth driver rather than a mere regulatory checkbox [32-33][5-6].
POLICY CONTEXT (KNOWLEDGE BASE)
Andy Parsons highlighted how emerging regulations can act as catalysts for proactive AI adoption, turning compliance into a competitive advantage, a view echoed by industry leaders who see responsible AI as a market differentiator [S42][S41].
Overall Assessment

The panel exhibits strong consensus on four core pillars: (1) embedding transparent provenance through open standards; (2) building open, reusable frameworks with industry‑body support; (3) viewing regulation as a necessary, balanced catalyst; and (4) ensuring human‑in‑the‑loop safety guardrails while balancing innovation and user experience.

High consensus across technical, business, and policy perspectives, indicating a unified direction for responsible AI implementation in India’s corporate sector. This alignment suggests that forthcoming initiatives are likely to prioritize open standards, collaborative governance, and proportionate regulation, facilitating scalable and trustworthy AI adoption.

Differences
Different Viewpoints
Extent and nature of regulation for AI
Speakers: Vishal Anand Kanvaty, Sarika Guliani, Andy Parsons
Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty) Regulation should be balanced, avoiding overly heavy‑handed approaches (Sarika Guliani) EU AI Act, India IT rules, and other regulations drive responsible AI (Andy Parsons)
Vishal argues that mandatory regulation is required to embed safeguards and prevent harmful AI outcomes [370-376]. Sarika counters that regulation must be proportionate and avoid stifling innovation, advocating a light-touch or balanced approach [379-382]. Andy frames regulation as a catalyst that pushes good practices rather than a punitive burden, citing the EU AI Act, California law and India’s IT rules as drivers for responsible AI [25-27][106-108]. These positions reveal a clear disagreement on how strong and prescriptive AI regulation should be.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI regulation range from calls for comprehensive safeguards to arguments for limited, sector-specific rules, reflecting divergent industry attitudes toward regulatory scope and the need for balanced policy design [S41][S42][S53].
Universal open standards versus industry‑specific, flexible frameworks
Speakers: Andy Parsons, Amol Deshpande, Prativa Mohapatra
C2PA content credentials as an open, interoperable standard (Andy Parsons) Need for industry‑wide, standards‑based infrastructure (Andy Parsons) One size doesn’t fit all… need templates per industry (Amol Deshpande) Risk of a divide… need free, accessible frameworks (Prativa Mohapatra)
Andy promotes a single, open, cross-industry standard (C2PA) that any organization can adopt, emphasizing non-proprietary, interoperable infrastructure [61-66][66-70]. Amol stresses that a “one size fits all” model is unrealistic and that each sector requires its own templates and guardrails, advocating a “bring-your-own-AI” approach [168-180]. Prativa warns that without free, open frameworks large enterprises will outpace MSMEs, underscoring the need for accessible standards to avoid a divide [297-304]. The speakers therefore disagree on whether a universal open standard can serve all sectors or whether tailored, industry-specific solutions are necessary.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between universal, open standards and adaptable, industry-specific frameworks is a recurring theme in AI governance, with initiatives like the Agent Standards Initiative advocating open, consensus-based standards while acknowledging the need for flexibility in implementation [S43][S48][S58].
Primary driver for responsible AI adoption – industry bodies versus regulatory mandates
Speakers: Amol Deshpande, Vishal Anand Kanvaty, Sarika Guliani
Awareness → action → demonstration cycle; role of industry bodies in disseminating frameworks (Amol Deshpande) Regulation is essential to prevent AI “berserk” behavior and ensure ecosystem safety (Vishal Anand Kanvaty) Regulation should be balanced, avoiding overly heavy‑handed approaches (Sarika Guliani)
Amol emphasizes that the ecosystem should first become aware, then act, and finally demonstrate responsible AI, with industry associations (e.g., FICCI) playing a key role in cascading standards and templates to the broader market [332-340]. Vishal argues that regulation is indispensable to keep AI from behaving dangerously and must be embedded in systems [370-376]. Sarika, while acknowledging the need for regulation, calls for a proportionate, balanced approach that does not over-regulate, suggesting that industry bodies can complement but not replace regulation [379-382]. The tension lies in whether industry-led self-governance or statutory regulation should be the main engine for responsible AI.
Unexpected Differences
Open‑standard advocacy versus internal proprietary governance approaches
Speakers: Andy Parsons, Prativa Mohapatra
C2PA content credentials as an open, interoperable standard (Andy Parsons) Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra)
Andy strongly advocates for an industry-wide, free, open standard (C2PA) that any organization can adopt, emphasizing cross-industry interoperability [61-66][66-70]. Prativa, while supporting responsible AI, focuses on Adobe’s internal ART framework embedded within its own products, without explicitly championing an external open standard. This subtle divergence-external open standards versus internal proprietary governance-was not anticipated given the overall consensus on the need for transparency.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate pits open-standard advocates, who promote interoperable, community-driven specifications, against firms favoring proprietary governance models; this mirrors broader discussions on open ecosystems versus closed incumbents in technology history [S43][S44][S58].
Overall Assessment

The panelists uniformly agree that responsible AI, transparency, and accountability are essential for India’s digital future. However, they diverge on three main fronts: (1) how prescriptive regulation should be, ranging from mandatory safeguards to balanced, light‑touch frameworks; (2) whether a single open standard can satisfy all sectors or whether industry‑specific, flexible solutions are required; (3) the relative weight of industry bodies versus statutory regulation in driving adoption. These disagreements are moderate rather than polarising, reflecting differing strategic preferences rather than fundamental opposition.

Moderate disagreement – the differing views on regulatory intensity, standardisation strategy, and governance mechanisms could lead to fragmented implementation unless a coordinated consensus is reached. The implications are that policy makers and industry leaders must negotiate a hybrid model that blends baseline regulatory requirements with adaptable standards and strong industry‑body participation to avoid silos and ensure inclusive, trustworthy AI deployment.

Partial Agreements
All four speakers share the goal of achieving transparency and accountability in AI systems. Andy pushes for a global open standard (C2PA) that tags content with provenance [61-66]. Prativa describes internal product‑level governance (the ART philosophy) that embeds traceability directly into Adobe tools [197-210]. Satya highlights the necessity of a human‑in‑the‑loop safety mechanism in aviation AI [360-362]. Vishal focuses on transaction‑level transparency, providing users with explanations for AI‑driven decisions [292-294]. While the end‑goal of trustworthy AI is common, the speakers diverge on the mechanisms—global standards, internal product design, operational human oversight, or user‑facing explanations.
Speakers: Andy Parsons, Prativa Mohapatra, Satya Ramaswamy, Vishal Anand Kanvaty
C2PA content credentials as an open, interoperable standard (Andy Parsons) Adobe’s “ART” (Accountability, Responsibility, Transparency) embedded in Firefly and Acrobat (Prativa Mohapatra) Human‑in‑the‑loop oversight for safety‑critical AI (Satya Ramaswamy) Transparent explanations for declined transactions (Vishal Anand Kanvaty)
Takeaways
Key takeaways
Responsible AI must move from high‑level principles to provable, operational practice within enterprises. Transparency and provenance of AI‑generated content are essential; open, interoperable standards such as C2PA enable this at scale. Effective AI governance requires coordinated people, process, technology, and industry‑body layers – not a single checklist. Regulatory developments (EU AI Act, India IT rules, state‑level AI laws) are viewed as catalysts that should coexist with innovation. Sector‑specific implementations illustrate practical approaches: Air India’s guarded generative‑AI assistant, NPCI’s fraud‑detection model with transparent explanations, RPG Group’s flexible, scalable governance across diverse units. There is a risk of a divide between large enterprises and MSMEs; open, free frameworks and industry‑wide dissemination are needed to ensure inclusive adoption.
Resolutions and action items
FICCI pledged to continue the dialogue and translate insights into concrete actions for the Indian ecosystem. Adobe highlighted its ART (Accountability, Responsibility, Transparency) methodology and will continue embedding it in product pipelines such as Firefly and Acrobat. Air India committed to maintain continuous monitoring and safety guardrails for its generative‑AI virtual assistant, leveraging partner technologies for risk mitigation. NPCI will expand its transparent AI‑driven fraud‑explanation service and align it with emerging regulatory frameworks. Industry bodies (e.g., C2PA, FICCI, sector associations) agreed to promote open standards and share governance templates to help MSMEs adopt responsible AI.
Unresolved issues
How to harmonise global AI regulations (EU AI Act, OECD, UNESCO) with India’s emerging policies and the diverse needs of different sectors. The precise balance between industry‑led self‑regulation and mandatory regulatory intervention remains unsettled. Effective mechanisms for consumer awareness of provenance symbols and UI design for transparency are still under development. Specific approaches for integrating human‑in‑the‑loop oversight in high‑volume payment fraud detection were mentioned but not detailed. Scalable, low‑cost governance frameworks that MSMEs can realistically implement without extensive legal teams were not fully resolved.
Suggested compromises
Adopt a hybrid model where open, industry‑driven standards provide the baseline, complemented by proportionate regulatory requirements to ensure safety without stifling innovation. Implement safety guardrails that are adjustable – tighter for high‑risk contexts (aviation) and lighter for consumer‑facing services, balancing risk and user convenience. Encourage large enterprises to create reusable, open‑source governance templates that can be cascaded to smaller firms via industry bodies. Regulators to act as catalysts, offering guidance and frameworks while allowing flexibility for companies to innovate within those boundaries.
Thought Provoking Comments
2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity.
Frames the timeline as a decisive turning point, moving responsible AI from a nice‑to‑have to a business imperative, which sets a forward‑looking urgency for the whole panel.
Established the central theme of the session and prompted other speakers to discuss concrete ways to meet that 2026 deadline, leading to deeper talks on standards, compliance and operationalisation.
Speaker: Andy Parsons
The question is no longer ‘should we be responsible with AI?’ but ‘can your systems actually prove that you have been responsible with AI?’
Shifts the debate from philosophical agreement to measurable proof, introducing the concept of ‘provable practice’ that challenges participants to think about auditability and evidence.
Triggered a focus on provenance, metadata and standards (C2PA) and caused panelists like Prativa and Amol to reference how their organisations embed traceability into products.
Speaker: Andy Parsons
We built an open, cross‑industry standard – the C2PA content credentials – that embeds provenance directly into media files, so anyone can verify who made it, with what model, and when.
Introduces a concrete, industry‑wide solution that is non‑proprietary, highlighting collaboration over competition and providing a tangible tool for accountability.
Guided the discussion toward the importance of open standards, with later speakers (e.g., Amol and Prativa) echoing the need for interoperable frameworks and citing the C2PA as a model for other sectors.
Speaker: Andy Parsons
One size doesn’t fit all – we need a ‘bring your own AI’ approach, with orchestration across all AI layers and people as the most critical stakeholder.
Challenges the notion of a single, monolithic AI governance model, emphasizing flexibility, modularity, and the human factor in responsible AI deployment.
Shifted the conversation from generic principles to practical implementation strategies, prompting Prativa to discuss product‑specific safeguards and Satya to illustrate how Air India balances flexibility with safety.
Speaker: Amol Deshpande
Our AI governance philosophy is ART – Accountability, Responsibility, Transparency – and we embed it into every product through hundreds of validation steps.
Provides a memorable framework (ART) that simplifies complex governance concepts and demonstrates how Adobe operationalises them, making the abstract tangible.
Reinforced Andy’s provable practice theme, gave the panel a concrete example (Firefly’s nutrition labels), and encouraged other speakers to share analogous mechanisms in their domains.
Speaker: Prativa Mohapatra
Our generative AI virtual assistant has handled 13.5 million queries with a 97 % autonomous success rate, and we even use generative AI to monitor its own performance for safety.
Offers a real‑world, high‑scale case study that illustrates both the benefits and the safety challenges of AI, and introduces the novel idea of AI‑in‑the‑loop monitoring.
Moved the discussion from theory to operational reality, prompting follow‑up questions about risk management, prompting Vishal to discuss transparency in payments, and reinforcing the need for robust guardrails.
Speaker: Dr. Satya Ramaswamy
We built a small language model that can explain why a transaction was declined, giving customers transparent reasons while keeping false‑positive rates low.
Shows how transparency can be delivered at massive scale in a critical financial context, linking technical design (explainability) with consumer trust.
Introduced the payments perspective, expanding the conversation beyond media to financial services, and highlighted the practical trade‑offs between accuracy and user experience.
Speaker: Vishal Anand Kanwat
Responsibility is no longer a compliance checklist; it is a commitment to shared human values – we choose what we create, not just what we can create.
Elevates the discussion to a philosophical level, reminding participants that ethical intent underpins technical measures, and framing responsible AI as a value‑driven choice.
Served as a concluding synthesis, reinforcing earlier points about standards, governance, and human‑centric design, and set the tone for future collaborative actions beyond the session.
Speaker: Sarika Guliani
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that moved the dialogue from abstract principles to concrete, measurable practices. Andy Parsons’ framing of 2026 as the deadline for provable responsible AI and his introduction of the C2PA standard set the agenda, prompting panelists to showcase how their organisations translate those ideas into product‑level safeguards (Prativa’s ART framework, Satya’s airline AI assistant, Vishal’s transparent payment explanations). Amol’s ‘bring your own AI’ and emphasis on people added nuance, steering the conversation toward flexible, human‑centric governance. Each of these insights sparked new sub‑topics—standards, auditability, scalability, and the balance between regulation and innovation—thereby deepening the analysis and shaping a cohesive narrative that blended technical solutions with ethical imperatives.

Follow-up Questions
What are the implementation costs and day‑to‑day operational expenses of adopting responsible AI practices?
Understanding financial implications is crucial for enterprises to plan and justify responsible AI investments.
Speaker: Andy Parsons
How can organizations demonstrably prove that their AI systems are responsible and compliant?
A measurable, auditable proof of responsibility is needed to move from principles to provable practice.
Speaker: Andy Parsons
How can consumer awareness of content‑provenance symbols (e.g., C2PA badge) be increased, and what UI designs are most effective?
Early consumer awareness is limited; effective UI can drive trust and adoption of provenance standards.
Speaker: Andy Parsons
What business case can be built for content provenance to make it financially compelling for enterprises?
Enterprises need clear ROI or value‑proposition arguments to invest in provenance infrastructure.
Speaker: Andy Parsons
How can standards adoption be improved given that many social‑media platforms strip metadata and provenance information?
Metadata stripping undermines transparency; research is needed on platform policies and technical solutions.
Speaker: Andy Parsons
What approaches allow embedding safety controls (the “safety knob”) in generative AI without degrading user experience?
Balancing safety with convenience is critical for customer‑facing AI services like virtual assistants.
Speaker: Dr. Satya Ramaswamy
How can prompt‑firewall and centralized control mechanisms be standardized across industries?
Standardized prompt controls could help prevent jailbreaks and misuse, but industry‑wide norms are lacking.
Speaker: Dr. Satya Ramaswamy
How can responsible‑AI frameworks be made accessible and affordable for MSMEs?
SMEs lack resources for extensive governance; scalable, low‑cost frameworks are needed to avoid a divide.
Speaker: Prativa Mohapatra
What role should industry bodies play in disseminating responsible‑AI templates and best practices to diverse sectors?
Industry bodies can cascade standards, but mechanisms for effective knowledge transfer require study.
Speaker: Amol Deshpande
How can global best practices (EU AI Act, UNESCO, OECD, etc.) be harmonized with India’s emerging regulatory landscape (DPDP Act, IT rules, etc.)?
Alignment is needed to avoid conflicting obligations and to create a coherent national AI governance model.
Speaker: Shantheri Mallaya
Is industry‑led governance realistically possible for AI at scale, or is regulatory intervention inevitable?
Determining the balance between self‑regulation and mandatory rules is essential for sustainable AI ecosystems.
Speaker: Vishal Anand Kanvaty
What metrics and governance models ensure fairness, accountability, and transparency in AI‑driven fraud detection for payment systems?
Payments require precise, unbiased AI; research is needed on appropriate performance and fairness metrics.
Speaker: Vishal Anand Kanvaty
How can AI transparency be integrated into legacy systems across sectors such as aviation, payments, and creative tools?
Legacy environments pose technical challenges for embedding provenance and auditability.
Speaker: Multiple (Andy Parsons, Dr. Satya Ramaswamy, Prativa Mohapatra)
What impact does the lack of consumer‑facing provenance symbols have on trust, and how can this impact be measured?
Empirical evidence is needed to justify investments in visible provenance cues.
Speaker: Andy Parsons
What barriers exist to global adoption of open standards like C2PA, and how can they be overcome?
Understanding technical, legal, and market obstacles is key to widespread standard uptake.
Speaker: Andy Parsons
How can AI governance frameworks be tailored for sector‑specific needs while maintaining interoperability?
Sector diversity requires flexible yet compatible governance models.
Speaker: Amol Deshpande
What are the implications of AI‑generated misinformation in a multilingual, culturally diverse market like India?
Misinformation risk is amplified by language and cultural variety; targeted research is needed.
Speaker: Andy Parsons
How can legal and compliance teams be upskilled efficiently to handle AI governance responsibilities?
Rapid skill development is essential for enterprises to meet emerging AI regulations.
Speaker: Prativa Mohapatra
What is the optimal balance between AI automation and human‑in‑the‑loop oversight for safety‑critical domains?
Ensuring safety while leveraging AI efficiency requires clear guidelines for human intervention.
Speaker: Dr. Satya Ramaswamy
How can the effectiveness of AI transparency measures be evaluated empirically across different industries?
Metrics and studies are needed to assess whether transparency initiatives actually build trust and reduce risk.
Speaker: General (multiple participants)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.