Responsible AI in India Leadership Ethics & Global Impact part1_2
20 Feb 2026 18:00h - 19:00h
Session at a glance
Summary
This discussion focused on translating responsible AI principles into practical enterprise implementation across various industries in India. The session, presented by Adobe in association with FICCI, examined how organizations can move beyond theoretical commitments to demonstrable responsible AI practices as regulatory frameworks like the EU AI Act and India’s new IT rules take effect in 2026.
Andy Parsons from Adobe introduced the Content Authenticity Initiative and the underlying C2PA standard, which provide transparency for AI-generated content through open standards that act like “nutrition labels” for digital media. He emphasized that responsible AI must be built into systems from the ground up rather than added as an afterthought, highlighting the shift from asking whether to be responsible with AI to proving that organizations have been responsible.
The panel discussion revealed diverse approaches across industries. Air India’s Dr. Satya Ramaswamy described their global-first generative AI virtual assistant that handles 40,000 daily customer queries while maintaining strict safety protocols and human oversight capabilities. NPCI’s Vishal Kanvaty explained their fraud detection systems that prioritize minimizing false positives while maintaining transparency, allowing customers to understand why transactions are declined through AI-powered explanations.
Adobe’s Prativa Mohapatra outlined their “ART” framework – Accountability, Responsibility, and Transparency – embedded in products like Firefly and Acrobat Assistant, ensuring enterprise-grade AI tools provide traceable, licensed content. RPG Group’s Amol Deshpande emphasized the need for scalable governance frameworks that can accommodate diverse business units while providing appropriate guardrails for different AI applications.
The discussion highlighted challenges including uneven adoption, limited consumer awareness, and the risk of responsible AI becoming a luxury for large enterprises while smaller organizations struggle with implementation costs. Panelists agreed that industry leaders must create accessible frameworks and standards to democratize responsible AI practices. The conversation concluded with consensus that while industry self-regulation is valuable, regulatory intervention is inevitable and necessary to ensure AI systems serve human values and societal benefit at scale.
Keypoints
Major Discussion Points:
– Transition from AI principles to practical implementation: The discussion emphasized moving beyond theoretical responsible AI commitments to demonstrable, measurable practices that can prove compliance and accountability, especially with upcoming regulations like the EU AI Act taking effect in 2026.
– Industry-specific approaches to responsible AI governance: Panelists shared how different sectors (aviation, payments, conglomerates, creative tools) implement responsible AI differently based on their unique risk profiles, regulatory requirements, and operational contexts, highlighting that “one size doesn’t fit all.”
– Content authenticity and transparency standards: Adobe’s Content Authenticity Initiative and C2PA standards were presented as a concrete example of responsible AI implementation, focusing on content provenance, transparency, and “nutrition labels” for digital content to combat misinformation and synthetic content risks.
– Balancing innovation with safety and compliance: Multiple speakers addressed the challenge of maintaining rapid AI adoption and innovation while ensuring proper guardrails, risk management, and regulatory compliance, particularly in high-stakes industries like aviation and financial services.
– Democratizing responsible AI across enterprise sizes: The panel discussed the risk of responsible AI becoming a “luxury” for large enterprises only, emphasizing the need for industry leaders to create frameworks and standards that smaller organizations and MSMEs can adopt and implement effectively.
Overall Purpose:
The discussion aimed to provide practical guidance for translating responsible AI principles into actionable enterprise strategies, moving beyond theoretical commitments to concrete implementation practices that ensure accountability, transparency, and compliance across different industries and organization sizes.
Overall Tone:
The tone was professional, collaborative, and pragmatically optimistic throughout. Speakers maintained a solution-oriented approach, sharing real-world examples and acknowledging challenges while emphasizing collective responsibility. The discussion remained constructive and forward-looking, with panelists building on each other’s insights rather than debating opposing viewpoints. The moderator kept the pace brisk but allowed for substantive exchanges, and the closing remarks reinforced the collaborative spirit with commitments to continue the dialogue beyond the session.
Speakers
Speakers from the provided list:
– Moderator – Session moderator for the Responsible AI discussion
– Andy Parsons – Global Head for Content Authenticity at Adobe, runs the Content Authenticity Initiative at Adobe
– Shantheri Mallaya – Editor at Economic Times, panel moderator
– Dr. Satya Ramaswamy – Chief Digital and Technology Officer at Air India Limited
– Prativa Mohapatra – Vice President and Managing Director of Adobe India
– Amol Deshpande – Group Chief Digital Officer and Head of Innovation at RPG Group
– Vishal Anand Kanvaty – Chief Technology Officer, National Payments Corporation of India (NPCI)
– Sarika Guliani – Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI
Additional speakers:
None – all speakers mentioned in the transcript were included in the provided speakers names list.
Full session report
This comprehensive discussion on responsible AI implementation in corporate India, presented by Adobe in association with FICCI as part of the AI Impact Summit, marked a pivotal moment in the transition from theoretical principles to practical enterprise deployment. Moderated by Shantheri Mallaya from Economic Times, the session brought together industry leaders from diverse sectors to examine how organisations can move beyond aspirational commitments to demonstrable responsible AI practices, particularly as regulatory frameworks prepare to take effect globally in 2026.
Setting the Context: From Principles to Provable Practice
Andy Parsons, Adobe’s Global Head for Content Authenticity, established the foundational premise that 2026 represents a critical inflection point for responsible AI. With the EU AI Act’s enforcement provisions, California’s first AI law, and India’s new IT rules on synthetically generated information (SGI) taking effect, responsible AI will transition from being “a slide in a deck” to becoming a core compliance strategy and business opportunity. This shift fundamentally changes the enterprise question from “should we be responsible with AI?” to “can your systems actually prove that you have been responsible with AI?”
Parsons, who described himself as “a mere engineer at Adobe” and felt “unqualified” to talk about policy, emphasised that this transformation requires moving beyond theoretical frameworks to working code and products that demonstrate responsibility through transparency, accountability, and inclusivity. He positioned Adobe’s Content Authenticity Initiative, built on the C2PA standard, as a concrete example of this approach, describing it as creating “nutrition labels” for digital content—a metaphor that Prime Minister Modi had mentioned the previous day. Just as consumers have the right to know what ingredients are in their food, Parsons argued that people deserve transparency about how digital content is created, what AI models were used, and whether an image is a genuine photograph or AI-generated content.
The C2PA standard is built on open standards developed with partners including Microsoft, BBC, OpenAI, Sony, and others, creating content credentials that provide transparency about digital content creation. However, Parsons acknowledged significant challenges: “adoption is uneven,” consumer awareness is “very early,” and the business case has been “challenging.” As he noted, “doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money,” though this is changing as compliance requirements create business imperatives for transparency.
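The provenance model described above—each asset carrying a record of the tool that produced it and the “ingredient” assets it was derived from—can be sketched in a few lines of Python. The manifest structure and field names below are hypothetical simplifications for illustration only; real C2PA manifests use a richer, cryptographically signed schema defined in the C2PA specification.

```python
# Simplified illustration of the provenance idea behind content
# credentials: each asset carries a manifest naming the tool that made it
# and the "ingredient" assets it was derived from. Field names here are
# invented simplifications, not the real C2PA manifest schema.

def provenance_chain(manifest):
    """Walk a nested manifest and return the lineage of tools, newest first."""
    chain = []
    current = manifest
    while current is not None:
        chain.append({
            "tool": current.get("generator", "unknown"),
            "ai_generated": current.get("ai_generated", False),
        })
        # Follow the first ingredient; a real asset may have several,
        # forming a full genealogy tree rather than a single chain.
        ingredients = current.get("ingredients", [])
        current = ingredients[0] if ingredients else None
    return chain

# Example: an AI-generated image later edited in a desktop tool.
manifest = {
    "generator": "PhotoEditor 2026",
    "ai_generated": False,
    "ingredients": [
        {"generator": "ImageModel v2", "ai_generated": True, "ingredients": []}
    ],
}

for step in provenance_chain(manifest):
    label = "AI-generated" if step["ai_generated"] else "edited/captured"
    print(f"{step['tool']}: {label}")
```

Walking the chain from the final asset back to its origin is what makes the “genealogy tree” idea concrete: a consumer or compliance system can see that the edited image ultimately originated from an AI model.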
Industry-Specific Approaches to Responsible AI Implementation
The panel discussion revealed how different sectors approach responsible AI governance based on their unique risk profiles, regulatory requirements, and operational contexts. This diversity underscored the principle that “one size doesn’t fit all” when implementing responsible AI frameworks.
Aviation: Safety-Critical AI with Human Oversight
Dr. Satya Ramaswamy from Air India provided compelling insights into implementing AI in safety-critical environments. Air India operates 300 aircraft, carries over 100,000 customers daily, and has “a few hundred airplanes on order.” Their generative AI virtual assistant, called “A.G,” was developed starting in November 2022 and launched in May 2023 as the global airline industry’s first such system. Over 2.5 years, it has handled 13.5 million queries, currently processing approximately 40,000 customer queries daily at just 1% of the cost of traditional contact centres. The system maintains a 97% autonomous success rate, with only 3% of queries requiring escalation to human agents.
The aviation approach demonstrates sophisticated risk management through embedded safety procedures and continuous monitoring. Dr. Ramaswamy explained how they balance the “safety dial”—too much safety creates customer inconvenience, whilst insufficient safeguards risk system failures or inappropriate responses. Their solution involves using AI to monitor AI performance, combined with customer feedback mechanisms and human oversight capabilities. He mentioned using “prompt firewalls where we can centralize all these controls” as part of their safety architecture. Importantly, the system has never provided an inappropriate response over its operational period, demonstrating that robust safety frameworks can coexist with high performance.
The aviation industry’s regulatory complexity—operating across multiple jurisdictions with different aviation authorities—provides a model for managing diverse compliance requirements without constraining innovation. Dr. Ramaswamy noted that Air India’s global-first AI implementation emerged from India whilst maintaining compliance with international aviation regulations, proving that regulatory frameworks can catalyse rather than constrain innovation.
Financial Infrastructure: Balancing Accuracy with User Impact
Vishal Anand Kanvaty from the National Payments Corporation of India (NPCI) offered insights into implementing AI in high-volume, high-stakes financial systems. NPCI’s approach to fraud detection reveals a counterintuitive but crucial principle: starting with lower accuracy whilst prioritising the minimisation of false positives. This strategy ensures that genuine transactions aren’t incorrectly flagged as fraudulent, which could severely impact user trust and system adoption. As Kanvaty explained, if UPI transactions were all getting declined, they have safeguards including limiting the percentage of transactions that can be declined.
NPCI’s implementation demonstrates the importance of transparency in AI decision-making. They’ve developed small language models that allow customers to understand why transactions are declined, providing explanations such as “you normally don’t send this transaction” or “this is the first time you’re scanning this QR code.” This transparency builds trust whilst maintaining security, showing how responsible AI can enhance rather than compromise user experience.
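The decline explanations described above can be illustrated with a minimal rule-based sketch. NPCI’s actual system uses small language models and its internals are not public; the feature flags and wording below are hypothetical stand-ins for the kinds of signals a fraud system might surface.

```python
# Minimal rule-based sketch of transaction-decline explanations in the
# spirit described above. The feature flags and reason strings are
# hypothetical; a production system would derive them from model outputs.

def explain_decline(features):
    """Return human-readable reasons for a declined transaction."""
    reasons = []
    if features.get("first_time_payee"):
        reasons.append("this is the first time you are paying this recipient")
    if features.get("new_qr_code"):
        reasons.append("this is the first time you are scanning this QR code")
    if features.get("amount_above_typical"):
        reasons.append("the amount is much higher than you normally send")
    # Fall back to a generic explanation rather than exposing model internals.
    return reasons or ["the transaction did not match your usual pattern"]

print(explain_decline({"new_qr_code": True, "amount_above_typical": True}))
```

The design choice worth noting is the fallback: surfacing only interpretable, customer-safe reasons preserves transparency without revealing fraud-model details that bad actors could exploit.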
The payments infrastructure perspective highlighted the necessity of regulatory frameworks, with Kanvaty arguing that industry self-governance alone is insufficient because “AI can go berserk” and have widespread systemic impacts. However, he emphasised that regulations must be developed collaboratively with industry to ensure they’re practical and effective.
Conglomerate Complexity: Orchestrated Governance Across Diverse Business Units
Amol Deshpande from RPG Group addressed the unique challenges of implementing responsible AI across diverse business portfolios spanning infrastructure, healthcare, IT, agriculture, and manufacturing. His concept of “bring your own AI” reflects the reality that different business functions require different AI solutions, making centralised, uniform approaches impractical.
Deshpande’s framework emphasises three critical elements: providing scalable playgrounds for business units to operate with agility, investing heavily in people development and awareness, and establishing process and governance frameworks that provide guardrails without stifling innovation. This approach recognises that responsible AI in conglomerates requires orchestration across all AI layers rather than isolated centre-of-excellence approaches.
The RPG experience demonstrates how large enterprises can balance centralised compliance with decentralised innovation needs. Their approach involves creating templates and frameworks that can be adapted across different industries and functions, then sharing these learnings through industry bodies to benefit smaller organisations that lack similar resources.
Creative Technology: Embedding Responsibility in Product Design
Prativa Mohapatra from Adobe India outlined how responsible AI principles can be embedded directly into product development through their “ART” framework—Accountability, Responsibility, and Transparency. This approach goes beyond compliance to make responsible AI a core product philosophy that guides development decisions from inception.
Adobe’s Firefly generative AI tool exemplifies this approach by using only licensed training data and embedding content credentials in all generated content. This ensures that enterprises using Firefly won’t face liability issues from unauthorised content use. Similarly, Acrobat Assistant applies the same trust principles that have made PDF a universally accepted format, allowing users to work with authenticated sources whilst maintaining full traceability.
Mohapatra highlighted real-world implications of AI misuse, citing Supreme Court concerns about lawyers using fictitious case references generated by AI. She emphasised that enterprises must simultaneously address business strategy, ethical strategies, and regulatory compliance when implementing AI solutions. Missing any of these three elements leaves organisations unprepared for the future regulatory landscape. She also highlighted the need for organisational restructuring, noting that legal and compliance teams must evolve to handle AI-specific guidelines across multiple jurisdictions.
Addressing the Digital Divide in Responsible AI
A significant theme throughout the discussion was the risk of responsible AI becoming a “luxury” available only to large enterprises whilst smaller organisations struggle with implementation costs and complexity. Prativa Mohapatra articulated this challenge clearly, noting that whilst large enterprises can restructure teams, hire additional legal expertise, and invest in comprehensive AI governance frameworks, MSMEs lack these resources. The transformation from digital transformation to AI transformation requires significant organisational changes that smaller businesses cannot easily accommodate.
The panellists agreed that large enterprises and technology creators bear responsibility for developing frameworks and standards that smaller organisations can adopt. Adobe’s approach of making C2PA standards completely free and open exemplifies this principle—an independent creator in India can access the same content authenticity capabilities as a Fortune 500 enterprise at zero cost.
Industry bodies like FICCI play a crucial role in this democratisation process by facilitating knowledge dissemination and creating domain-specific templates that MSMEs can access. Amol Deshpande emphasised that these frameworks must be tailored to different industries and functions, recognising that a manufacturing MSME faces different AI challenges than a healthcare startup.
The Role of Regulation and Standards
The discussion revealed nuanced perspectives on the relationship between industry self-governance and regulatory intervention. Whilst all panellists agreed that regulation is inevitable and necessary, they viewed it as a catalyst for good practices rather than a constraint on innovation.
Andy Parsons positioned regulation as helping enterprises move from reactive to proactive responsible AI adoption. The upcoming regulatory landscape provides clarity and urgency that can accelerate the adoption of responsible practices, though he acknowledged his limitations in discussing policy matters as an engineer.
Dr. Satya Ramaswamy’s aviation perspective demonstrated how multiple regulatory frameworks can coexist without constraining innovation. Air India operates under various aviation authorities globally whilst maintaining its innovative edge, suggesting that well-designed regulation can provide structure without stifling creativity.
However, Vishal Kanvaty argued most directly for regulatory necessity, stating that industry self-governance alone is insufficient given AI’s potential for widespread systemic impact. His experience with financial infrastructure informed this perspective on the need for external oversight.
The discussion highlighted the importance of collaborative regulation development, where industry expertise informs regulatory frameworks to ensure they’re both effective and practical. This approach can help avoid the pitfalls of either overly restrictive regulations that stifle innovation or insufficient oversight that fails to address genuine risks.
Technical Implementation Challenges and Organisational Transformation
The panellists identified several concrete technical and operational challenges in implementing responsible AI systems. Content authenticity faces significant adoption hurdles, including social media platforms that strip metadata when content is uploaded, removing the transparency information that content credentials provide. Consumer awareness remains low, with many people unfamiliar with content authenticity symbols and their significance.
The transition to responsible AI requires significant organisational changes that go beyond technology implementation. Prativa Mohapatra emphasised that enterprises must restructure legal and compliance teams to handle AI-specific guidelines across multiple jurisdictions. This involves not just hiring additional expertise but fundamentally rethinking how these teams operate and integrate with product development and business strategy.
People development emerged as a critical success factor, with Amol Deshpande noting that enterprises must invest significantly in building AI awareness and skills across their value chains. This goes beyond technical training to include ethical reasoning, risk assessment, and decision-making capabilities that enable employees to work effectively with AI systems whilst maintaining responsible practices.
Future Outlook and Continuing Challenges
The discussion concluded with Sarika Guliani from FICCI emphasising that “responsibility is not anymore a compliance check” but rather “a commitment of the technology” to shared human values that should guide technological development. Despite the tight time constraints noted by moderator Shantheri Mallaya, the session demonstrated remarkable consensus among industry leaders on fundamental principles and approaches.
Several unresolved challenges emerged from the discussion. The technical challenge of maintaining content authenticity across platforms that strip metadata remains unsolved, as does the broader question of how to harmonise multiple international regulatory frameworks whilst maintaining domestic innovation capabilities. Consumer awareness and adoption of transparency standards continue to lag behind technical capabilities, creating a gap between what’s possible and what’s practically effective.
This alignment suggests that the field is maturing beyond theoretical debates toward practical implementation, with 2026 representing a crucial milestone where responsible AI transitions from aspiration to operational requirement. The session’s commitment to continuing dialogue through FICCI and other industry bodies reflects recognition that responsible AI implementation is an ongoing process requiring sustained collaboration across sectors, organisation sizes, and stakeholder groups. As India positions itself as a global leader in digital innovation, the approaches developed and refined through these discussions could influence responsible AI practices worldwide, demonstrating how emerging economies can lead in establishing ethical frameworks for transformative technologies.
Session transcript
I’d like to welcome you all to this session titled Responsible AI from Principles to Practice in Corporate India presented by Adobe in association with FICCI. India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and productivity. But the real differentiator, is it about how quickly do we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite Andy Parsons, our Global Head for Content Authenticity at Adobe.
Andy, over to you.
Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.
I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for all of you in this room. I’m going to talk about that in a minute. This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.
And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week and certainly will continue to change this year, from should we be responsible with AI? I think that debate is well settled at this point. But can your systems actually prove that you have been responsible with AI, and how do you go about doing that? And for all of us, what does it cost in terms of implementation and day-to-day usage? So we want to position responsible AI today as a leadership and operating discipline rather than a regulatory obligation, although in 2026, I think it will be both.
So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across a cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Pratibha in a moment.
But it’s made the trust problem absolutely impossible to address. So I think every enterprise in this room now produces or consumes AI-generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that. Hundreds of millions of people consuming digital content every day.
In this environment, synthetic content and AI-generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is: if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to have a leadership role in the C2PA, which hopefully many of you have heard about this week, which is the Coalition for Content Provenance and Authenticity. And our piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing, totally free.
So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, digital experience products, but had to be baked into the tools at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others. And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have encountered a content credential, and it will provide transparent context about a piece of media, whether it’s video, audio, or image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross-industry coalition that includes all the companies I mentioned, Meta, camera manufacturers like Sony and Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others. And the goal is an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company.
It should be standards-based. It should not be proprietary, but available to everyone. And this shared philosophy I think is especially important here in India. And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website. So what are the principles that the Content Authenticity Initiative and the C2PA reflect? Transparency. Provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used. Simple ideas like knowing that a photograph is actually a photograph and not generated.
These are simple ideas. I’d say we’re well overdue having this ability, but we need it now more than ever. Accountability. You can trace those AI models, understand what was used, how it was used, and thereby understand the provenance if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children. No one’s going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.
Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India can apply the same kind of provenance at the same zero cost as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI transparency and responsible AI will present you, and I’m sure you’ll hear more about that with our esteemed panel. But adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here. We’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it.
Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are phones and cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I have often said that doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money. I’m not sure if that’s true. But it is critically important. And now we’re seeing that change as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.
What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. A responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference that we’re here to talk about today, the difference between principle and practice. I think you need cross-industry infrastructure. Techniques you use for responsible AI should be interoperable, open, and standardized. And I think you have a long track record here in India of mobilizing things like the UPI payment infrastructure, which no single bank or government agency could have done, but which required massive-scale cooperation for openness, standards, and most importantly, interoperability.
And last, enterprises, like many of yours in this room, that are willing and excited to go first, that really look at transparency and AI responsibility as an opportunity, not a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again. And in 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things. And that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries, because we have representatives from all of them on our excellent panel today.
So we’re fortunate to have leaders from Air India, NPCI, the RPG Group, and Adobe, each of whom is navigating and translating the sort of remarks that I’ve made in different ways for different outcomes, and providing, I would say, exemplary ways to find their pathway through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantari Malaya, editor at the Economic Times, who covers these infrastructure and societal breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me. And let’s get on to our panel. Shantari.
Thank you so much, Andy. That was fantastic. It set the context very rightly for the discussion coming up. So a warm welcome once again from my end. My name is Shantari Malaya, I’m editor at the Economic Times, welcoming you all to the panel, right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI at a very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI across industry, infrastructure, and policy perspectives, it really becomes important to know what some premier leaders in the country are thinking about this. So this dialogue will examine some parts of responsible AI and how it’s really going to shape, and reshape, enterprise strategy.
So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanwati, Chief Technology Officer, National Payments Corporation of India, NPCI. We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe India. This is a fantastic lineup and we shall get some very sharp insights from our panelists over the next half hour or so. So at the very outset, if you really see, building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether it’s fairness, accountability, transparency, privacy, or inclusivity, are realistically going to be translated into enterprise strategy frameworks, and how we are going to go about it.
Right. So, Amol, let me call you into the discussion. Warm welcome. I’m keeping a tight watch on the timer right behind us. As we know, this is a summit of scale and we really need to help the organizers clock good time. So, Amol, very quickly, as I invite you into this discussion: as part of the RPG Group, you represent enterprises that are deploying AI at scale, and many more that are just finding their footing as well. So there’s a huge spectrum and diversity in the kind of organizations that you are representing. Two things for you very quickly. One is, in a large multi-business group such as the RPG Group, how are you preventing responsible AI from becoming a mere centralized compliance exercise, or, on the flip side, a fragmented, business-unit-wise checklist?
So there are two risks that can happen, right? In a group, in a conglomerate: overly centralized or fragmented. So how are you really looking at the balance here? And how do you really see your role in an industry body as well? All yours.
Thank you, Shantari. I’m very happy to be here. Thank you for having me with the, you know, esteemed panelists here. You ask a very pertinent question. You know, it’s a to-be-or-not-to-be kind of scenario when it comes to AI, but that not-to-be is not really a choice. Andy did mention responsible AI, and I would take a little stab at peeling back and looking at where responsible AI comes from when it comes to industries. It comes across all five layers of AI when we are looking at it. When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using it for, the responsibility needs to be there at every layer.
It’s not one or the other. It has to be an orchestration of all the things. So far, AI in its very nascent forms had been a thing of centers of excellence, trying use cases and seeing what happens, but now it has come to scale. And when it comes to enterprises, and manufacturing enterprises, of which we have a significantly higher share as a consumer of AI technologies, there is a very clear-cut view on how it is to be done. So you need to provide the playground for the enterprise to operate and function with agility. The other part is about people. People are a very, very important stakeholder in the whole thing.
We are moving from generative AI to AI/ML, more complex scenarios, and agentic AI. So people are a very, very important aspect of it. The choice still remains with humans. Awareness is important, and enterprises like us spend a significant amount of time and effort in building those skill sets amongst the value chain of all the people who are doing it. Last but not least is the process and governance part which comes with it. It’s more a set of guiding principles which need to be given so that it gives an opportunity. It’s more about, you know, if one can say, a bring-your-own-AI kind of scenario in every function.
You cannot provide one solution. One size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment protected with guardrails is a key thing for us. Orchestration and getting to scale, that’s something which is there. But those templates are being exercised and practiced within the enterprises, practiced in a very diverse group like RPG ourselves, and then that can be deployed across multiple businesses.
Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me quickly bring in Prativa here. Welcome, Prativa. So Adobe has consistently positioned responsible AI as a product philosophy and a trust commitment. You may just have to switch that on. So Adobe has constantly spoken about responsible AI as a fairly large commitment. So how do you look at all these principles manifesting or panning out in terms of operationalizing them among all your product teams, and sending them out as a strong positioning internally? And at the same time, as someone who’s led a lot of industry conversations, where do you really see enterprises struggling to get these things right?
So all yours here.
Okay, thank you. So I think Andy set the context. And since we are here not to learn about the principles, but the practices, I think everybody should go back with certain practices. So the first practice of AI governance, which we practice, is ART, which is Accountability, Responsibility, and Transparency. So if every person goes back to their organizations and talks about ART, which is our philosophy, that’s practicing philosophy number one. And we actually have been doing this for our own products for a while now. And of course, we have been in the business of content for a very, very long time. And now the same content is becoming the currency which everybody’s debating. So our principles have been there for a while.
But how is it actualized? Let me say how it’s translated into our products. And by the way, it’s in our products, it’s in our methodologies. Every new product that we have goes through a very strong, secure methodology with hundreds of steps inside it. So the principles are embedded into how we create stuff. But a couple of examples. Firefly, which is our Gen AI tool, actually embeds what Andy said, those Content Credentials, the nutrition labels. Anything that you generate out of this product will have that nutrition label. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law, you will not be getting into any liability issues.
Because how you do it is by what you feed in; AI is all about the input and the output. So the input has to be something which will not land you in trouble. You cannot take somebody else’s data. So here, everything is licensed. So it goes into the models, and what you create, the output which comes, then you have to test that output. With that output, will we be accountable? Will we be responsible in showing the transparency of how this was created? So I think that loop has to be created in using any AI. Firefly is an example. Let me talk about Acrobat, which everybody has. I’m sure 100% of you have PDF files on your phones or on your machines.
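The loop Prativa describes, licensed inputs going in and labeled, verifiable outputs coming out, is essentially what a content credential provides. As a purely editorial illustration (this is not the real C2PA format, which is a far richer, cryptographically signed manifest embedded in the file itself), a minimal "nutrition label" could be sketched like this:

```python
import hashlib

def attach_credential(asset_bytes: bytes, producer: str, ai_generated: bool) -> dict:
    """Build a simplified provenance manifest (a 'nutrition label') for an asset.
    Illustrative only: real Content Credentials are signed C2PA manifests."""
    return {
        "producer": producer,
        "ai_generated": ai_generated,
        "content_hash": hashlib.sha256(asset_bytes).hexdigest(),
    }

def verify_credential(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the asset still matches the hash recorded in its manifest."""
    return manifest["content_hash"] == hashlib.sha256(asset_bytes).hexdigest()

asset = b"...rendered image bytes..."
manifest = attach_credential(asset, producer="example-genai-tool", ai_generated=True)
```

An untouched asset verifies against its manifest, while any edit to the bytes breaks the match; that tamper-evidence is the property that lets a viewer trust, or distrust, the label.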
So Acrobat has this new feature called Acrobat Assistant. It is agentic, and we have so many chatbots in the market. But when you come to an assistant like Acrobat Assistant, it is following the same principles with which PDF used to be created. So everybody is confident when you’re using PDF. So today, you would have read in the papers recently, the Supreme Court was very worried that there were certain lawyers whose petitions had references to cases which do not exist, or had certain laws stated which are fictitious. So imagine somebody’s created certain content using some sources which were not authentic. Now, if you use Acrobat-like products for that, you feed the data or you feed files from your own machine.
So you’re confident that what comes out of it, you can go back to. So wherever there is this usage of high-stakes output, enterprise-grade, you have to look at this input-output process and follow the philosophies within it. And I think every enterprise doing that today really has to, as Amol talked about, look at people, process, technology. I’m sure every organization today has a legal team, has a compliance team. But these teams have to re-equip and re-design themselves to talk about AI compliance. Enterprises do business strategy, they have ethical strategies, and then they have regulatory compliance. In anything that you do in AI, ensure that you tick all three.
If you miss any one, you might not be ready for the future. So that’s how I see it.
Absolutely. So I guess the thread of most of what AI now entails is: are we not moving fast enough, or are we moving so fast that we’re not really able to own the operational consequences of what we’re setting out to do? Great point on that, Prativa. I’ll circle back to you, time permitting. Let’s see how best we can get back. Dr. Satya, calling you in here. So aviation: volumes, landscape, scale, I mean, you name it, it’s all there. So how are we really looking at balancing AI-driven innovation, where you’re looking at regulation, accountability, operational efficiency? At the same time, you cannot really compromise on user and customer experience.
How do these things really fall in place in terms of vision and metrics?
Thank you, Shantari. Since the audience is international, a real quick introduction about Air India. Air India is India’s national flag carrier. We operate about 300 aircraft, and we carry more than 100,000 customers a day. And we have a few hundred airplanes on order. So once they are delivered, we will be one of the biggest airlines in the world, of the size of one of the large three American carriers. So we are building it up to an airline of scale, and it brings about very interesting challenges that we talked about. So let me illustrate the way we handle it with one of our own examples in generative AI. So in May of 2023, we launched the global airline industry’s very first generative AI virtual assistant out of India.
So it was a global first in the whole airline industry. Today, it has handled about 13.5 million queries so far from customers, about 40,000 queries a day, and it operates at about one-hundredth of the per-query cost of a contact center. And if you look at the customer preferences over the period of the last two and a half years we have been operating this, facing all the challenges you mentioned: from a customer preference perspective, 50% of the contact volume goes to the contact center, they want to talk to a human agent. The remaining 50% of the contact volume comes to AI.g, out of which it handles 97% of the queries autonomously; only 3% are escalated further to the agent. So it’s a pretty high success rate. And we faced this challenge from day one. We started working on it in November 2022, when the Azure OpenAI services became available, and the whole approach to responsibility and safety has evolved over a period of time. If you dial the safety knob too much, then it is an inconvenience to the customer; we practically cannot answer any question, because customers are always changing the way they ask certain things, and we have to be very flexible. And clearly, generative AI takes us a large step towards that.
At the same time, we don’t want any jailbreak to happen. We don’t want prompt injection to happen. We don’t want any inappropriate thing to happen. So we are watching the whole performance of the Gen AI virtual assistant, AI.g as we call it, all the time. So we use, in fact, generative AI to watch the performance of the generative AI chatbot, and we have also given a voice to the customer, right? So at the end of the day, when we send a response, we also ask the customer, you know, did it answer your question? And we also allow them to give their reactions, right? Is it appropriate, inappropriate? And thankfully, over the last two and a half years, it has not answered one single question in an inappropriate fashion, because we have embedded all the safety procedures deep into the way that we handle it.
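The pattern Dr. Satya describes, one model watching another plus an escalation path to a human agent, can be sketched as a simple output gate. Everything here, the blocklist markers, the 0.8 confidence cutoff, the routing labels, is hypothetical, purely to show the shape of such a guardrail:

```python
# Crude, hypothetical markers of prompt-injection artefacts in a draft answer.
BLOCKLIST = {"ignore previous instructions", "system prompt"}

def moderate(draft_answer: str, confidence: float):
    """Gate a generated answer before it reaches the customer:
    block obvious injection artefacts, escalate low-confidence answers
    to a human agent, and send everything else through."""
    lowered = draft_answer.lower()
    if any(marker in lowered for marker in BLOCKLIST):
        return ("blocked", None)
    if confidence < 0.8:  # hypothetical cutoff for human escalation
        return ("escalate_to_human", None)
    return ("send", draft_answer)
```

In a real deployment this gate would itself be another model call plus a prompt firewall, but the control flow, send, escalate, or block, stays the same.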
But now, as the technologies are maturing, we have, for example, interesting technologies in terms of prompt firewalls, where we can centralize all these controls, and obviously we work with great partners like Adobe, who do their diligence in the way they have deployed some of these technologies, giving full indemnity to us in the event of a problem. That gives a lot of confidence in the way we manage the risk. So it’s about managing the risk of something happening that is not within the bounds, versus the convenience of the customer, and we handle it in a variety of ways, like I talked about just now.
Excellent. And given the kind of scale you’re operating at, I think every day is a new day. Yes, it is. We face challenges. There is something brand new every day. Absolutely well stated. Thank you, Dr. Satya. Vishal, may I call you into the discussion? We’re waiting to hear from you. Again, what do I say? NPCI, you know, the largest digital payments infrastructure platform. You kind of call the shots, for want of a better coinage, in terms of how the payments systems in this country move. So two quick questions here, or rather I’ll phrase them as one, so that we can get a comprehensive view from you.
How are you really looking at AI in terms of being inclusive, in trying to ensure fairness, when it comes to two parts? One is how India can play an important part in creating responsible AI by design for a national digital infrastructure platform such as yours. And B, given the volume, scale, and size, fraud also becomes an unfortunate part of the entire discussion. So how are we looking at AI being fair and at the same time proactive and detective when it comes to fraud? What are the aspects that you look at keenly here?
I think we had to start slowly and ensure that even if the accuracy is a little lower, the false positives, which is a genuine transaction being tagged as a fraud, should not be very, very high. I think those were the first principles on which we started. But over a period of time, once we had more data, once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So I think these were the fundamental principles, and then once we started with the success, we were able to understand the customers better, their patterns better. And that gave us a lot of insights into fine-tuning the models and taking it forward.
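The first principle Vishal states, accept lower accuracy but keep false positives low, amounts to choosing a decision threshold under a false-positive-rate budget. A toy sketch (the scores and the budget are invented for illustration; real fraud systems tune this on large held-out datasets):

```python
def pick_threshold(scores_genuine, scores_fraud, max_fpr):
    """Pick the lowest fraud-score threshold whose false-positive rate
    (share of genuine transactions flagged as fraud) stays within max_fpr."""
    candidates = sorted(set(scores_genuine) | set(scores_fraud), reverse=True)
    best = None
    for t in candidates:
        fpr = sum(s >= t for s in scores_genuine) / len(scores_genuine)
        if fpr <= max_fpr:
            best = t  # lowering t catches more fraud while FPR stays in budget
        else:
            break
    return best

# Toy scores: genuine transactions score low, fraudulent ones score high.
genuine = [0.05, 0.10, 0.20, 0.30]
fraud = [0.75, 0.85, 0.95]
threshold = pick_threshold(genuine, fraud, max_fpr=0.0)
```

With a zero false-positive budget, the sketch settles on the lowest threshold that still flags no genuine transaction; relaxing the budget lets the threshold drop and recall rise, which mirrors the "start strict, then improve with more data" progression described above.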
Absolutely. So coming to the first question that you asked, obviously the governance principles are core to it. But I would like to call out two things. One is transparency. If a customer has a transaction that has failed, they should know why it failed. And today we have built a small language model where you can go and actually chat and ask what happened to this transaction, why was it declined. And even if it was declined due to a fraudulent transaction, due to suspicious activity, we can actually tell them today: this is what we see, you normally don’t send this kind of transaction, or you don’t ever scan a QR and this is the first time you’re doing so, and this is the reason why we have declined it. So this level of transparency, and ensuring there is an answer (obviously we can’t have an army of people sitting and answering these questions, but building systems to answer those questions), is very, very important. And I think we have a beautiful framework; the RBI has also given a framework, and on responsible AI the MeitY document is fairly comprehensive. So I think all the principles have to be adopted; there is absolutely no choice for us. And I don’t see it as a challenge at all, because in our experience it’s been very, very helpful in ensuring the trust in the payment system is not compromised.
Absolutely. And also the fact that, as you said, given the scale you’re operating at, I’m itching to ask you some things about the human in the loop there, but maybe I’ll pick your brains offline. That’s a discussion for another time. So Prativa, curious to know here: responsible AI, while it remains there in letter and spirit, do you think it is at risk of getting relegated to something of a large-enterprise luxury? Is it able to cut across and come down the line to aspiring businesses, growing businesses, MSMEs? Is it able to cut through the noise? What’s the responsibility of industry leaders and the larger enterprises in harmonizing the framework that can define responsible AI? How would you look at this?
Is it at risk?
Absolutely, yes. I think we stand at that time when, among the creators of AI technology, the divide between the big guys and the small guys might just become very stark. And coming down to the users of AI, which is the big enterprises and the MSMEs who are in a big rush to make profit, to do something, that divide can happen there too. Hence, it is the responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It’s a tongue twister, but I think the responsibility is very, very big right now. Again, take Adobe’s example: while the entire AI big bang started happening after November 2022, so early 2023-2024, our models were there, and our Content Authenticity Initiative dates from 2019.
So, I think that’s a big thing. I think that large enterprises who create technologies are absolutely responsible. And those frameworks now being taken up by many more is again absolutely an act of responsibility back to the business. And so the creators of these technologies have to come together and keep on creating these methods and methodologies for others to adopt. Now, for the users of these enterprise-grade AI technologies, it’s very hard. I mean, I think 10 years back we had digital transformation. Now we are having AI transformation. So the big companies have to quickly create a new org structure, have to create the legal teams, which, by the way, had just mulled over the digital guidelines of various continents and countries, and now have to go through the AI guidelines of countries.
So you have to infuse more people into that, into legal teams. Small organizations cannot do that. So for the people, process, technology changes that are required to adopt this, the big guys can maneuver, shift people: oh, we will take out people from here, put them there. The MSMEs don’t have that luxury. So I guess the creators have to create frameworks so the right technology is created. The users, the big guys, have to quickly share the methodology. And then the other stakeholders, like the service providers, also quickly have to adapt; I come from this industry where we went from custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?
Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since AI, as we have been hearing, is a civilizational change, similar to electricity, and it will change everything, because of the impact at a society level on each one of us, the governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem. Absolutely.
Very rightly stated about the fact that there is a larger collective responsibility on the bigger players in trying to define the standards. I think it’s very critical. Amol, if I may ask you: like Prativa said, there is an accountability. MSMEs are growing and there is also policy that supports that growth at this point of time, right? So in their hurry to really scale and innovate, they often forget what guardrails and consequences they have to face when it comes to their AI policies, strategies, and implementations. So what’s the role of the ecosystem, the industry, industry bodies, and the entire ecosystem at large, in helping responsible AI move in letter and spirit?
Shantari, I think the first step towards being responsible towards anything is awareness, right? So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for that. That’s the first thing. Second comes the action part of it: awareness, action, and then you demonstrate it through your products, services, or whatever you are trying to create, and generate that kind of impact. How does that percolate? I echo the sentiment Prativa mentioned here, in that the big players have to come up with those frameworks. Those frameworks need to get translated, and industry partnership is a very key thing here, through the industry bodies, right?
Where the learnings have to be disseminated. Second, it’s more of a demand-and-supply kind of thing. So if the supply is there with the right guardrails and responsible aspects as part of the framework, then naturally the suppliers start aligning to it. For a business like ours, where we deal in everything from infrastructure to healthcare, and IT to agriculture and tyres, it is a very diverse element, and there are different kinds of templates which we need. So organizations like us have the responsibility of creating a framework which will be fair to us as well as to our customers and partners, and those learnings can be shared across the value chain, where MSMEs may not have access to that kind of information, and that too domain-specific.
Mind you, this would change. It’s not like this one guardrail construct will work for everybody; it would vary from industry to industry, function to function, and that kind of cascading through industry bodies like FICCI and others is, I think, very, very critical. Absolutely. Thank
you for that, Amol. Dr. Satya, as we know now, there are enough global regulations, rules, and recommendations that have come in. You have the EU AI Act, you have UNESCO’s recommendations, you have the OECD principles, and so on and so forth, and India is also inching towards developing its own strategies, policies, and approaches at this point of time. So the real leadership question that remains is: how are we looking at marrying global best practices with the diversity, the scale, the fire in the belly that India has at this point of time? We are really gearing up to go, but how are we looking at it? And besides, of course, we have a lot of domestic industry-wise regulations as well, we have regulators, we have even the DPDP Act; so many things have come in. How are we going to marry all of this and create harmony? Absolutely, I
think, taking Air India: we are an international airline, so we operate in many countries. For example, we go to North America, the US, where the Federal Aviation Administration is the key regulator; then we go to Europe, all places in Europe; then obviously we operate in India, where the DGCA is the regulator, and they are doing a great job overseeing this industry; and likewise in other parts of the world. So by nature we are geared to looking at the regulation in all parts of the world, and to being in compliance. Our customers are international, and when we operate in these international geographies, we have to comply with the appropriate regulations. And again, you know, aviation is a very safety-critical industry, right?
So what we do has a direct impact on the safety of the customers that we carry. And the notions are well embedded in the industry because of this, because it’s highly regulated. For example, many of these planes can practically land themselves, right? I was in a simulator last week for an Airbus A320, landing that plane at San Francisco Airport. And as we were coming in, the plane was set at seven miles from touchdown, and my trainer pilot gave me the control. And so I could run the plane practically on autopilot all the way. At the same time, there is a red button on the joystick, so at any moment I feel that the airplane is not doing the right thing,
that the autopilot is not doing the right thing, I can quickly cancel and take back control, right? So this concept is well embedded in the airline industry. We know that we need to obey the regulations and do the right thing that is safe for the customer, but we also help the human in the loop take control if at any moment we feel the safety is at risk. So, bottom line, we comply with all the regulations, and it doesn’t in any way constrain Indian innovation. For example, like I mentioned, we launched the global airline industry’s first generative AI virtual agent out of India, and we have not had any challenge with any of the regulations, because we comply with all of them and we work with partners who approach it in the same spirit, like Adobe.
So, Vishal, taking a thread from what Dr. Satya said, I’ll close with you here. Is industry-led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in some other related industries, who said it’s very difficult to answer this, but, yeah, self-regulation may be a way forward given the scale we are operating at. I’d like to know your thoughts.
Yeah, I think the regulations are definitely required, especially because AI can go berserk. And, you know, like the example I gave you on the transactions: today, all the UPI transactions could get declined. And that’s where we have a check where we say, this is the only percentage that I can decline, even if I have to let go of the other transactions, right? So those safeguards are very much required. And when this has to be across the ecosystem, I think the regulations are mandatory. And obviously it has to be consulted on, and we have to work with everyone. But it’s important. While all of us realize
it’s a great opportunity and the innovation can really scale up, I think regulation is one thing that we have to really take as part of our initiatives, embed into our systems, and then take forward; otherwise the chances of this becoming a challenge for us are really high.
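The safeguard Vishal mentioned, declining no more than a fixed percentage of transactions even if some borderline ones must be let through, can be sketched as a rolling decline-rate cap. The window size and ratio below are hypothetical; the real NPCI safeguards are not public:

```python
from collections import deque

class DeclineRateCap:
    """Safety valve: never let automated screening decline more than a fixed
    share of recent transactions, even at the cost of letting borderline
    transactions through. Parameters are illustrative only."""

    def __init__(self, window: int, max_decline_ratio: float):
        self.recent = deque(maxlen=window)  # 1 = declined, 0 = approved
        self.max_ratio = max_decline_ratio

    def allow_decline(self) -> bool:
        # Would one more decline push the recent decline share over budget?
        # (Conservative at cold start, when there is little history.)
        declined = sum(self.recent)
        return (declined + 1) / (len(self.recent) + 1) <= self.max_ratio

    def record(self, declined: bool) -> None:
        self.recent.append(1 if declined else 0)

cap = DeclineRateCap(window=100, max_decline_ratio=0.05)
for _ in range(95):
    cap.record(False)  # 95 approved transactions build up history
```

After 95 approvals the cap still permits a decline, but once declines approach 5% of the window it starts refusing them, which is exactly the "let go of the other transactions" trade-off described above.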
Fair enough. I think in an economy that is maturing, regulatory intervention is an inevitability that we must welcome at some level. So, great discussion. This was fantastic; I’m itching to ask you more, but I think we’ll have to bring this discussion to a close. Thank you so much. Let’s put our hands together for our esteemed panelists. This was really nice. May I now invite Ms. Sarika Guliani, who is the Senior Director, Head of AI Technology, Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI. Thank you so much, Sarika, please.
First of all, what an insightful session. I would say that if I start capturing the thoughts, 2 minutes and 36 seconds would not do justice to them. But overall, if I talk about it: starting with Andy mentioning the initiatives Adobe was able to take through, and how you are responsibly developing the content; Prativa talking about ART, which is really interesting, whether you talk about accountability, responsibility, and transparency; and of course Amol mentioning all the five layers and how the responsible development of AI needs to be done. Dr. Satya, no second thoughts to it, and that again goes for NPCI as well: the kind of work which the national carrier of India is doing, or which NPCI is handling, has to be a balance of responsible AI and efficiency, and of the action which can actually be taken, and the agency itself. So we left it on the note of what regulation is required, and that sentence would require another session of its own, because there would be people who will talk about light-touch regulation versus balanced regulation. As part of FICCI, and from the discussions we were hearing here, we feel that responsibility is no longer a compliance check which is supposed to be there; it is a commitment that the technology we develop should carry shared human values. The decisions we take now, and not merely the words we are discussing here, are going to define our future; what we choose to create is what will define us. That is very important. So the choice comes out of the whole thing you have heard the panelists talking about, whether from the input side to the output side; a very good example was given by taking it through the whole process. So we simply feel that, whatever the layer, it has to be developed in
a way keeping people in mind and the theme of the summit people planet and progress should be kept in mind while doing any technological innovation which is keeping the principles of responsible AI into the mind.
That is something we strongly feel and support. With that, I would like to thank our esteemed panelists. Thank you, Andy, for joining us today; Shantheri, for moderating and capturing it so well within time; and of course our panelists: Dr. Satya Ramaswamy, Chief Digital and Technology Officer, Air India; Mr. Amol Deshpande, Chief Digital Officer and Head of Innovation, RPG Group; Mr. Vishal Kanvaty, CTO, NPCI; and Ms. Prativa Mohapatra, Managing Director, Adobe. Thanks also to our lovely audience, who helped us sail through this. As part of FICCI, we are thankful that we could hold this joint session with Adobe, the Adobe team, Nita and Nanya, who worked with my team, and the people in the background who delivered it.
So thank you all for joining us. We don't end the discussion here; we only end the session here. FICCI is committed to taking this dialogue further into action with the support of all the players, and we look forward to your joining us then. Thank you.
Andy Parsons
Speech speed
190 words per minute
Speech length
2010 words
Speech time
632 seconds
Responsible AI must be demonstrable, not just aspirational
Explanation
Parsons stresses that organizations need to move beyond statements of responsibility and provide concrete proof that their AI systems are being managed responsibly.
Evidence
“But can your systems actually prove that you have been responsible with AI, and how do you go about doing that?” [7]. “The responsible AI conversation has matured, and now we have to move it to pragmatic implementation.” [3].
Major discussion point
Shift from AI principles to provable practice
Topics
Artificial intelligence | The enabling environment for digital development
Open, cross‑industry standards (C2PA) enable trustworthy content
Explanation
Parsons describes the C2PA content credentials as a free, open standard that provides provenance information for media, allowing any creator to embed trust signals.
Evidence
“Five years later, there is an open standard called the C2PA content credentials.” [46]. “Our standard is open and free.” [55]. “It should not be proprietary, but available to everyone.” [56].
Major discussion point
Open standards and content provenance as a trust model
Topics
Data governance | Artificial intelligence
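The "nutrition label" idea behind C2PA content credentials can be pictured as a small provenance manifest bound to an asset. The sketch below is purely illustrative and is not the real C2PA format (actual credentials are cryptographically signed JUMBF manifests embedded in the file); the field names and helper functions here are assumptions chosen only to mirror the concepts Parsons describes.

```python
import hashlib


def make_content_credential(asset_bytes: bytes, generator: str, ai_generated: bool) -> dict:
    """Build a toy provenance manifest (a 'nutrition label') for an asset.

    Illustrative only: real C2PA credentials are signed binary manifests
    embedded in the media file, not a loose JSON-style dict.
    """
    return {
        "claim_generator": generator,  # which tool produced the asset
        "assertions": [
            {"label": "c2pa.ai_generated", "value": ai_generated},
        ],
        # Hash binds the label to this exact content.
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }


def verify_content_credential(asset_bytes: bytes, credential: dict) -> bool:
    """Check that the manifest still matches the asset (detects edits)."""
    return credential.get("asset_sha256") == hashlib.sha256(asset_bytes).hexdigest()


image = b"...pixel data..."
cred = make_content_credential(image, "ExampleGenAITool/1.0", ai_generated=True)
intact = verify_content_credential(image, cred)          # True: label matches asset
tampered = verify_content_credential(image + b"x", cred)  # False: asset was altered
```

The same binding also explains the metadata-stripping problem raised later: when a platform discards the credential on upload, the asset survives but the trust signal does not.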
Regulatory landscape as a catalyst for responsible AI
Explanation
Parsons notes that the EU AI Act, California law, and India’s new IT rules are coming into force, pushing enterprises to embed responsibility into compliance strategies.
Evidence
“The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California.” [70]. “And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.” [73].
Major discussion point
Regulation as catalyst and its alignment with industry practice
Topics
The enabling environment for digital development | Artificial intelligence
Operational challenges: uneven adoption and metadata loss
Explanation
Parsons highlights that many platforms strip provenance metadata and that consumer awareness of AI transparency is still very low, creating hurdles for responsible deployment.
Evidence
“But adoption is uneven.” [101]. “Consumer awareness is still very early.” [102]. “Many social media platforms strip metadata and remove that transparency when content is uploaded.” [103]. “And because consumer awareness is early, user interfaces are also quite early.” [104].
Major discussion point
Operational challenges and implementation realities
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Shantheri Mallaya
Speech speed
159 words per minute
Speech length
1631 words
Speech time
611 seconds
Responsible AI as a strategic discussion point
Explanation
Mallaya frames the session as a deep dive into how responsible AI principles will shape enterprise strategy and calls for collective industry action.
Evidence
“And to take this discussion forward, we are looking at responsible AI.” [2]. “So this dialogue will examine some parts of responsible AI and how it’s really going to shape, reshape enterprise strategy.” [13].
Major discussion point
Shift from AI principles to provable practice
Topics
Artificial intelligence | The digital economy
Global regulatory context drives responsible AI
Explanation
Mallaya references the EU AI Act, UNESCO recommendations, OECD rules, and India’s emerging policies as the backdrop for responsible AI adoption.
Evidence
“You have the EU AI Act, you have UNESCO’s recommendations, you have the OECD rules, I mean the principles, so on and so forth, and India is also kind of inching towards developing its own strategies, policies, and approaches at this point of time.” [71]. “One is when you’re looking at how India can play an important part in creating a responsible AI by design for a digital, national digital infrastructure platform such as yours.” [72].
Major discussion point
Regulation as catalyst and its alignment with industry practice
Topics
The enabling environment for digital development | Artificial intelligence
Amol Deshpande
Speech speed
181 words per minute
Speech length
759 words
Speech time
251 seconds
Responsibility must be orchestrated across all AI layers
Explanation
Deshpande argues that responsible AI cannot be a single function; it needs to be embedded throughout the five layers of AI development and governance.
Evidence
“When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using for, the responsibility needs to be there at every layer.” [16]. “It has to be an orchestration of all the things.” [17]. “It comes across all those five layers of the AI when we are looking at it.” [19].
Major discussion point
Shift from AI principles to provable practice
Topics
Artificial intelligence | Data governance
Bring‑your‑own‑AI with guardrails for scalable safety
Explanation
Deshpande promotes a model where each function can adopt its own AI solutions, provided they are bounded by robust guardrails and safety controls.
Evidence
“So when you are dealing with such scenarios, a scalable, safe environment with protected with guardrails is a key thing for us.” [111]. “It’s more about, you know, if one can say it will be more of a bring your own AI kind of a scenario in every function.” [114].
Major discussion point
Operational challenges and implementation realities
Topics
Artificial intelligence | Capacity development
Ecosystem steps: awareness, action, demonstration
Explanation
Deshpande outlines three concrete steps the ecosystem must follow—raising awareness, taking action, and demonstrating responsible AI through products and services.
Evidence
“First, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for that.” [158]. “Second is comes the action part of it, awareness, action, and then you demonstrate it through your product services or whatever you are trying to create and generate that kind of impact.” [161]. “The awareness is important and enterprises like us spend a significant amount of time and effort in building that skill sets amongst the value chain of all the people who are doing it.” [162].
Major discussion point
Ecosystem role in supporting MSMEs and broader adoption
Topics
Capacity development | The enabling environment for digital development
RPG Group templates enable industry‑wide learning
Explanation
Deshpande describes how the RPG Group creates domain‑specific AI templates and shares learnings across the value chain, helping other firms adopt responsible practices.
Evidence
“But those templates are being exercised, practiced within the enterprises, practice it in a very diverse group like RPC at ourselves.” [150]. “Where the learnings have to be disseminated into this.” [154].
Major discussion point
Industry‑specific implementations demonstrating responsible AI
Topics
Artificial intelligence | The digital economy
Prativa Mohapatra
Speech speed
156 words per minute
Speech length
1126 words
Speech time
432 seconds
Embedding ART (Accountability, Responsibility, Transparency) in products
Explanation
Mohapatra explains that Adobe’s internal framework, ART, is woven into product lifecycles to ensure every AI output carries accountability, responsibility, and transparency.
Evidence
“Which is accountability, responsibility, and transparency.” [23]. “So the first practice of AI governance, which we practice, is art.” [92].
Major discussion point
Shift from AI principles to provable practice
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Firefly embeds provenance “nutrition labels” via C2PA
Explanation
Mohapatra notes that Adobe Firefly automatically attaches C2PA‑based provenance information—described as nutrition labels—to every generated asset.
Evidence
“Firefly, which is our Gen AI tool, actually embeds what Andy said, those content traditions.” [58]. “We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks.” [60]. “Provenance information travels with assets when you make them.” [62]. “It is anything that you have being generated out of this product will have that nutrition level.” [63].
Major discussion point
Open standards and content provenance as a trust model
Topics
Data governance | Artificial intelligence
Enterprise redesign and resource constraints for MSMEs
Explanation
Mohapatra highlights that large enterprises must overhaul legal and compliance structures to meet AI guidelines, while smaller firms lack the capacity to build full frameworks.
Evidence
“I’m sure every organization today has a legal team, has a compliance team.” [42]. “So small organizations cannot do that.” [121]. “The big companies have to quickly create a new org structure, have to create the legal teams… now have to go through the AI guidelines of countries.” [122]. “The MSMEs don’t have that luxury.” [123].
Major discussion point
Operational challenges and implementation realities
Topics
Capacity development | The enabling environment for digital development
Adobe Acrobat Assistant follows same responsible AI principles
Explanation
Mohapatra points out that the Acrobat Assistant inherits the same ART principles, ensuring that user‑facing tools embed responsible AI safeguards.
Evidence
“Acrobat has this new feature called Acrobat Assistant.” [145]. “But when you come to an assistant like Acrobat Assistant, it is following the same principles that PDF was used to be created.” [146].
Major discussion point
Industry‑specific implementations demonstrating responsible AI
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Dr. Satya Ramaswamy
Speech speed
183 words per minute
Speech length
1035 words
Speech time
338 seconds
Regulatory compliance does not hinder innovation
Explanation
Ramaswamy asserts that adhering to all relevant regulations gives confidence and does not constrain Indian innovation, especially in the airline sector.
Evidence
“So bottom line, you know, we comply with all the regulations, and it doesn’t in any way constrain Indian innovation.” [75]. “That gives a lot of confidence in the way that we manage the risk.” [76].
Major discussion point
Regulation as catalyst and its alignment with industry practice
Topics
The enabling environment for digital development | Artificial intelligence
Air India generative‑AI virtual assistant with safety knobs
Explanation
Ramaswamy describes the AI virtual assistant deployed for Air India, which handles 97 % of queries autonomously while allowing human‑in‑the‑loop oversight and adjustable safety settings.
Evidence
“So in May of 2023, we launched the global airline industry’s very first generative AI virtual assistant out of India.” [132]. “It handles 97 % of the queries autonomously only 3 % are escalated further to the agent so pretty high success rate… if you dial the safety knob too much then it is an inconvenience to the customer.” [133].
Major discussion point
Industry‑specific implementations demonstrating responsible AI
Topics
Artificial intelligence | The digital economy
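The adjustable "safety knob" with human-in-the-loop oversight that Ramaswamy describes can be sketched as a confidence-threshold router: queries the assistant is unsure about are escalated to a human agent, and turning the knob up escalates more. The threshold values and names below are hypothetical, for illustration only, not Air India's actual mechanism.

```python
from dataclasses import dataclass


@dataclass
class AssistantReply:
    answer: str
    confidence: float  # model's self-assessed confidence, in [0, 1]


def route(reply: AssistantReply, safety_knob: float) -> str:
    """Route a virtual-assistant reply.

    A higher safety_knob escalates more queries to human agents
    (safer, but less convenient for the customer), mirroring the
    trade-off described in the session.
    """
    if reply.confidence >= safety_knob:
        return "autonomous"          # assistant answers directly
    return "escalate_to_agent"       # human-in-the-loop takes over


replies = [AssistantReply("...", c) for c in (0.99, 0.95, 0.40)]
decisions = [route(r, safety_knob=0.9) for r in replies]
# With this knob setting, two queries are handled autonomously and one
# is escalated, echoing the 97%/3% autonomous-vs-escalated split.
```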
Vishal Anand Kanvaty
Speech speed
184 words per minute
Speech length
582 words
Speech time
189 seconds
Governance and transparency are core foundations
Explanation
Kanvaty emphasizes that transparency is a fundamental governance principle required for responsible AI.
Evidence
“One is a transparency.” [38]. “I think there are obviously the governance principles are core to it.” [41].
Major discussion point
Shift from AI principles to provable practice
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Mandatory regulation needed to prevent AI misuse
Explanation
He argues that regulations are essential, especially to curb AI‑driven fraud and ensure systems do not “go berserk”.
Evidence
“Yeah, I think definitely the regulations are required, especially because AI can go berserk.” [86]. “And when this has to be across the ecosystem, I think, you know, the regulations are mandatory.” [88].
Major discussion point
Regulation as catalyst and its alignment with industry practice
Topics
The enabling environment for digital development | Building confidence and security in the use of ICTs
Balancing false‑positive/false‑negative trade‑offs
Explanation
Kanvaty notes that while accuracy can be relaxed, false positives (legitimate transactions flagged as fraud) must remain very low to protect users.
Evidence
“I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high.” [90].
Major discussion point
Operational challenges and implementation realities
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
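Kanvaty's trade-off, accepting somewhat lower overall accuracy as long as the false-positive rate stays very low, can be made concrete with a score threshold. The toy fraud scores below are invented for illustration: raising the threshold lets some fraud slip through, but fewer genuine transactions get blocked.

```python
def confusion(scores, labels, threshold):
    """Count outcomes when transactions scoring >= threshold are flagged as fraud.

    labels: 1 = actual fraud, 0 = genuine transaction.
    """
    tp = fp = tn = fn = 0
    for score, is_fraud in zip(scores, labels):
        flagged = score >= threshold
        if flagged and is_fraud:
            tp += 1
        elif flagged and not is_fraud:
            fp += 1  # genuine transaction blocked: the costly error
        elif not flagged and is_fraud:
            fn += 1  # missed fraud
        else:
            tn += 1
    return tp, fp, tn, fn


scores = [0.95, 0.65, 0.60, 0.20, 0.10]  # hypothetical fraud scores
labels = [1,    1,    0,    0,    0]     # ground truth

# Aggressive threshold: catches all the fraud, but wrongly blocks a
# genuine customer (the 0.60 transaction).
tp, fp, tn, fn = confusion(scores, labels, threshold=0.5)   # fp == 1, fn == 0

# Conservative threshold: no genuine customer is blocked, at the cost
# of missing one fraud (the 0.65 transaction).
tp, fp, tn, fn = confusion(scores, labels, threshold=0.7)   # fp == 0, fn == 1
```

This also motivates the pairing with explanations: when a transaction is declined, the customer can be told why, rather than just being blocked.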
Sarika Guliani
Speech speed
142 words per minute
Speech length
590 words
Speech time
249 seconds
Shared human‑centred standards are essential across AI layers
Explanation
Guliani stresses that responsible AI must be built on common human values and standards that span the entire AI lifecycle.
Evidence
“responsibility is not anymore a compliance check which is supposed to be there it’s a commitment of the technology that we should develop it which has a shared human values that is something what decisions we take now not in terms of the words what we are discussing here is not going to define our future…” [69].
Major discussion point
Open standards and content provenance as a trust model
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Regulation should be balanced, not a checklist
Explanation
She argues that regulation must act as a framework that guides responsible AI rather than a rigid, prescriptive list.
Evidence
“…what regulation is required while that’s a sentence would require another session on this because they would have a people who will talk about a light touch regulation versus a balanced regulation…” [69].
Major discussion point
Regulation as catalyst and its alignment with industry practice
Topics
The enabling environment for digital development | Artificial intelligence
Collective industry commitment beyond compliance
Explanation
Guliani calls for a coordinated effort where responsibility is driven by shared values, not merely by meeting regulatory checkboxes.
Evidence
“…responsibility is not anymore a compliance check which is supposed to be there it’s a commitment of the technology that we should develop it which has a shared human values… the choice comes out from the whole thing you have heard the panelist talking about whether it was from the input side to the output side… we simply feel it whether it’s any layer it has to be developed in a way keeping people in mind and the theme of the summit people planet and progress should be kept in mind while doing any technological innovation…” [69].
Major discussion point
Ecosystem role in supporting MSMEs and broader adoption
Topics
Capacity development | Human rights and the ethical dimensions of the information society
Moderator
Speech speed
132 words per minute
Speech length
132 words
Speech time
59 seconds
Trust, transparency and accountability are non‑optional
Explanation
The moderator frames the session by stating that these three pillars are essential foundations for any responsible AI effort.
Evidence
“Trust, transparency and accountability are no longer optional.” [35].
Major discussion point
Governance and transparency as foundational
Topics
Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs
Agreements
Agreement points
Transition from AI principles to practical implementation
Speakers
– Andy Parsons
– Amol Deshpande
– Prativa Mohapatra
– Sarika Guliani
– Moderator
Arguments
2026 will mark the shift from AI responsibility being a slide in a deck to actual compliance strategy and opportunity due to regulatory enforcement
The question has changed from “should we be responsible with AI?” to “can your systems prove you have been responsible with AI?”
Responsible AI conversation has matured and now requires pragmatic implementation with demonstrable practices rather than just principles on websites
Moving from generative AI to more complex scenarios and agentic AI requires orchestration across all AI layers, not just center of excellence approaches
Enterprises must address business strategy, ethical strategies, and regulatory compliance simultaneously when implementing AI solutions
Responsible AI development should be viewed as a commitment to shared human values rather than just a compliance checkbox
Trust, transparency and accountability are foundational requirements for responsible AI deployment, not optional features
Summary
All speakers agree that the industry has moved beyond theoretical discussions about responsible AI and must now focus on concrete, demonstrable implementation with proper governance frameworks
Topics
Artificial intelligence | The enabling environment for digital development
Need for transparency and accountability in AI systems
Speakers
– Andy Parsons
– Prativa Mohapatra
– Dr. Satya Ramaswamy
– Vishal Anand Kanvaty
Arguments
Adobe’s Content Authenticity Initiative provides transparency for AI-generated content through C2PA content credentials, acting as “nutrition labels” for digital content
Firefly embeds content credentials and uses only licensed input data to ensure enterprises avoid liability issues when generating content
Acrobat Assistant follows the same trust principles as PDF creation, allowing users to work with authenticated sources and maintain accountability
Balancing AI safety controls with customer convenience requires continuous monitoring and allowing customer feedback on AI system performance
NPCI provides transparency by allowing customers to understand why transactions fail through small language models that explain decision-making processes
Summary
Speakers consistently emphasize the importance of transparency mechanisms that allow users to understand AI decision-making processes and content provenance
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Importance of addressing the digital divide in AI adoption
Speakers
– Andy Parsons
– Amol Deshpande
– Prativa Mohapatra
Arguments
Open standards should be available to independent creators at zero cost, same as Fortune 500 enterprises, ensuring inclusivity in AI adoption
Industry bodies and partnerships are crucial for disseminating responsible AI learnings and creating domain-specific templates that MSMEs can access
Large enterprises creating AI technologies have responsibility to develop frameworks that smaller organizations can adopt, preventing a stark divide between big and small players
Summary
All speakers recognize the risk of creating a divide between large enterprises and smaller organizations in AI adoption, emphasizing the need for accessible frameworks and standards
Topics
Artificial intelligence | Closing all digital divides | Capacity development
Regulatory frameworks as catalysts rather than constraints
Speakers
– Andy Parsons
– Dr. Satya Ramaswamy
– Vishal Anand Kanvaty
Arguments
Regulation serves as a catalyst for good practices rather than a constraint, helping enterprises move from reactive to proactive responsible AI adoption
International airlines must comply with multiple regulatory frameworks across different countries while maintaining innovation capabilities
Industry-led governance alone is insufficient; regulatory intervention is necessary because AI can have widespread systemic impacts
Summary
Speakers view regulation as a positive force that enables innovation within structured frameworks rather than as a limitation on development
Topics
Artificial intelligence | The enabling environment for digital development
Similar viewpoints
Both speakers emphasize that responsible AI requires fundamental architectural changes in both technology design and organizational structure, not superficial additions
Speakers
– Andy Parsons
– Prativa Mohapatra
Arguments
Content transparency must be baked into tools at their core rather than grafted on as features, requiring open standards and cross-industry collaboration
Organizations need to restructure legal and compliance teams to handle AI-specific guidelines across multiple jurisdictions and create new organizational frameworks
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Both speakers demonstrate practical approaches to balancing AI efficiency with safety, prioritizing user experience while maintaining robust safeguards
Speakers
– Dr. Satya Ramaswamy
– Vishal Anand Kanvaty
Arguments
Air India’s generative AI virtual assistant handles 40,000 queries daily at 1% the cost of contact centers while maintaining 97% autonomous success rate through embedded safety procedures
NPCI prioritizes low false positives over high accuracy initially, ensuring genuine transactions aren’t incorrectly flagged as fraudulent
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Social and economic development
Both speakers recognize the special responsibility of large organizations to create scalable frameworks that can be adapted across different contexts and organization sizes
Speakers
– Amol Deshpande
– Prativa Mohapatra
Arguments
Large conglomerates need to balance centralized compliance with decentralized business unit needs by providing scalable, safe environments with guardrails
Large enterprises creating AI technologies have responsibility to develop frameworks that smaller organizations can adopt, preventing a stark divide between big and small players
Topics
Artificial intelligence | The enabling environment for digital development | Closing all digital divides
Unexpected consensus
Human-in-the-loop as essential safety mechanism
Speakers
– Andy Parsons
– Dr. Satya Ramaswamy
– Vishal Anand Kanvaty
Arguments
The question has changed from “should we be responsible with AI?” to “can your systems prove you have been responsible with AI?”
Aviation industry’s safety-critical nature requires human-in-the-loop controls, allowing pilots to override automated systems when safety is at risk
NPCI prioritizes low false positives over high accuracy initially, ensuring genuine transactions aren’t incorrectly flagged as fraudulent
Explanation
Despite coming from very different industries (content creation, aviation, and payments), all speakers converged on the importance of maintaining human oversight and control mechanisms in AI systems, suggesting this is a universal principle across sectors
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Open standards as foundation for responsible AI
Speakers
– Andy Parsons
– Amol Deshpande
– Prativa Mohapatra
Arguments
Content transparency must be baked into tools at their core rather than grafted on as features, requiring open standards and cross-industry collaboration
Industry bodies and partnerships are crucial for disseminating responsible AI learnings and creating domain-specific templates that MSMEs can access
Large enterprises creating AI technologies have responsibility to develop frameworks that smaller organizations can adopt, preventing a stark divide between big and small players
Explanation
Unexpectedly, speakers from both technology providers and enterprise users agreed on the critical importance of open, non-proprietary standards, suggesting a shift away from competitive advantage through proprietary AI governance approaches
Topics
Artificial intelligence | The enabling environment for digital development | Closing all digital divides
Overall assessment
Summary
The speakers demonstrated remarkable consensus across multiple dimensions of responsible AI implementation, including the need for practical implementation over theoretical principles, transparency mechanisms, inclusive access to AI governance frameworks, and the positive role of regulation. There was also unexpected agreement on human oversight mechanisms and open standards approaches across different industries.
Consensus level
High level of consensus with significant implications for the responsible AI landscape. The agreement suggests that industry leaders are aligned on fundamental principles and ready to move toward coordinated implementation. This consensus could accelerate the development of industry-wide standards and collaborative approaches to AI governance, particularly important as regulatory frameworks emerge globally in 2026.
Differences
Different viewpoints
Industry self-regulation versus regulatory intervention necessity
Speakers
– Vishal Anand Kanvaty
– Dr. Satya Ramaswamy
Arguments
Industry-led governance alone is insufficient; regulatory intervention is necessary because AI can have widespread systemic impacts
International airlines must comply with multiple regulatory frameworks across different countries while maintaining innovation capabilities
Summary
Vishal argues that regulations are mandatory because AI can have systemic impacts (‘AI can go berserk’), while Dr. Satya suggests that existing regulatory compliance doesn’t constrain innovation and can work effectively within current frameworks
Topics
Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs
Approach to AI safety controls and risk management
Speakers
– Dr. Satya Ramaswamy
– Vishal Anand Kanvaty
Arguments
Balancing AI safety controls with customer convenience requires continuous monitoring and allowing customer feedback on AI system performance
NPCI prioritizes low false positives over high accuracy initially, ensuring genuine transactions aren’t incorrectly flagged as fraudulent
Summary
Dr. Satya focuses on balancing safety with convenience through monitoring and feedback, while Vishal emphasizes starting with conservative approaches that prioritize avoiding false positives over achieving high accuracy
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The digital economy
Unexpected differences
Timeline urgency for responsible AI implementation
Speakers
– Andy Parsons
– Other panelists
Arguments
2026 will mark the shift from AI responsibility being a slide in a deck to actual compliance strategy and opportunity due to regulatory enforcement
Various implementation approaches without specific timeline emphasis
Explanation
While Andy emphasizes 2026 as a critical deadline driven by regulatory enforcement, other panelists discuss implementation without the same sense of regulatory urgency, suggesting different perspectives on the timeline pressure for responsible AI adoption
Topics
Artificial intelligence | The enabling environment for digital development
Overall assessment
Summary
The discussion revealed relatively low levels of fundamental disagreement, with most speakers aligned on the need for responsible AI implementation. The main disagreements centered on regulatory approaches and risk management strategies rather than core principles.
Disagreement level
Low to moderate disagreement level. Speakers generally agreed on fundamental principles but differed on implementation approaches, regulatory necessity, and risk management strategies. This suggests a maturing field where basic concepts are accepted but operational details remain contested. The implications are positive for the field as it indicates consensus on core values while allowing for diverse implementation approaches tailored to different sectors and organizational needs.
Partial agreements
Partial agreements
All speakers agree that responsible AI must move from principles to practice, but they disagree on implementation approaches – Andy emphasizes open standards and cross-industry collaboration, Prativa focuses on embedding responsibility in product design, while Amol advocates for orchestrated frameworks across business units
Speakers
– Andy Parsons
– Prativa Mohapatra
– Amol Deshpande
Arguments
Responsible AI conversation has matured and now requires pragmatic implementation with demonstrable practices rather than just principles on websites
Enterprises must address business strategy, ethical strategies, and regulatory compliance simultaneously when implementing AI solutions
Moving from generative AI to more complex scenarios and agentic AI requires orchestration across all AI layers, not just center of excellence approaches
Topics
Artificial intelligence | The enabling environment for digital development
Both agree that large enterprises must help smaller organizations access responsible AI frameworks, but Prativa emphasizes the responsibility of technology creators to develop accessible frameworks, while Amol focuses on industry bodies and partnerships as the mechanism for knowledge dissemination
Speakers
– Prativa Mohapatra
– Amol Deshpande
Arguments
Large enterprises creating AI technologies have responsibility to develop frameworks that smaller organizations can adopt, preventing a stark divide between big and small players
Industry bodies and partnerships are crucial for disseminating responsible AI learnings and creating domain-specific templates that MSMEs can access
Topics
Artificial intelligence | Closing all digital divides | Capacity development
Takeaways
Key takeaways
2026 marks a critical transition point where responsible AI shifts from principles to mandatory compliance and demonstrable practices due to regulatory enforcement (EU AI Act, California laws, India’s IT rules)
Responsible AI requires implementation across all layers of AI systems with proper orchestration, not just isolated center of excellence approaches
Content authenticity and transparency must be built into AI tools at their core, using open standards like C2PA content credentials that act as ‘nutrition labels’ for digital content
Large enterprises have a collective responsibility to create frameworks and standards that smaller organizations and MSMEs can adopt to prevent a digital divide in responsible AI access
Successful enterprise AI implementation requires balancing three elements: business strategy, ethical strategies, and regulatory compliance simultaneously
Industry-led self-governance alone is insufficient – regulatory intervention is necessary due to AI’s potential for widespread systemic impact
Practical implementation challenges include uneven adoption, limited consumer awareness, platform metadata stripping, and the need for continuous human oversight
Cross-industry collaboration and open standards are essential for creating interoperable, scalable responsible AI infrastructure
Resolutions and action items
FICCI committed to continuing the dialogue and translating discussions into actionable initiatives with industry support
Adobe’s ART framework (Accountability, Responsibility, Transparency) was presented as a practical philosophy for organizations to implement
Industry bodies like FICCI should facilitate knowledge dissemination and create domain-specific responsible AI templates for different sectors
Large technology creators must develop open frameworks and standards that can be adopted across the ecosystem
Organizations need to restructure legal and compliance teams to handle AI-specific guidelines and create new organizational frameworks
Enterprises should implement input-output validation processes for AI systems, ensuring both data sources and outputs meet responsibility standards
Unresolved issues
The specific balance between light-touch regulation versus comprehensive regulatory frameworks remains undefined and requires further discussion
How to effectively scale responsible AI practices from large enterprises to MSMEs without creating prohibitive barriers to innovation
The challenge of maintaining innovation speed while implementing comprehensive responsible AI governance across diverse industry sectors
Consumer awareness and adoption of content authenticity standards remains low, with unclear timelines for widespread recognition
The business case for AI transparency and provenance continues to be challenging, particularly for smaller organizations
How to harmonize multiple international regulatory frameworks while maintaining domestic innovation capabilities
The technical challenge of preventing social media platforms from stripping metadata that enables content transparency
Suggested compromises
Start with lower AI accuracy but prioritize reducing false positives to build trust gradually, as demonstrated by NPCI’s approach to fraud detection
Implement scalable environments with guardrails that allow ‘bring your own AI’ flexibility while maintaining safety standards
Balance AI safety controls with customer convenience through continuous monitoring and customer feedback mechanisms
Use AI systems to monitor other AI systems while maintaining human oversight and control mechanisms
Adopt phased implementation approaches that allow for learning and adjustment rather than immediate full-scale deployment
Create industry-specific templates and frameworks rather than one-size-fits-all solutions to accommodate diverse business needs
Leverage existing trust infrastructures (like PDF for documents) as models for building trust in new AI applications
Thought provoking comments
2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity… The question for everyone in this room has changed from ‘should we be responsible with AI?’ to ‘can your systems actually prove that you have been responsible with AI, and how do you go about doing that?’
Speaker
Andy Parsons
Reason
This comment reframes the entire discussion from theoretical principles to practical implementation. It shifts the focus from whether organizations should adopt responsible AI to how they can demonstrate and prove their responsibility – a much more concrete and actionable challenge.
Impact
This set the foundational tone for the entire panel discussion, moving away from abstract principles to practical implementation. All subsequent panelists referenced this shift from ‘principles to practice’ and focused on concrete examples and operational challenges rather than theoretical frameworks.
We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food… you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.
Speaker
Andy Parsons
Reason
This analogy brilliantly simplifies a complex technical concept by comparing AI content transparency to something universally understood – food nutrition labels. It makes the abstract concept of content provenance tangible and relatable.
Impact
This metaphor became a recurring reference point throughout the discussion, with other panelists building on the concept of transparency and ‘knowing what’s in’ AI-generated content. It helped ground the technical discussion in everyday consumer experience.
It’s more about, you know, if one can say it will be more of a bring your own AI kind of a scenario in every function. You cannot provide one solution. One size doesn’t fit all.
Speaker
Amol Deshpande
Reason
This introduces a paradigm shift in how enterprises should think about AI deployment – from centralized, uniform solutions to distributed, function-specific approaches. The ‘bring your own AI’ concept challenges traditional IT governance models.
Impact
This comment shifted the discussion toward the practical challenges of governance in decentralized AI adoption. It influenced subsequent speakers to address how to maintain responsible AI principles across diverse, distributed implementations rather than through centralized control.
So if every person goes back to their organizations and talks about ART, which is our philosophy, that’s practicing philosophy number one. And we actually have been doing this for our own products for a while now… which is accountability, responsibility, and transparency.
Speaker
Prativa Mohapatra
Reason
The clever use of ‘ART’ as an acronym makes responsible AI principles memorable and actionable. It transforms abstract concepts into a simple framework that attendees can immediately implement in their organizations.
Impact
This provided a concrete takeaway that other panelists and the moderator referenced. It demonstrated how complex principles can be distilled into practical, memorable frameworks that drive organizational behavior.
I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high. I think that was the first principles on which we started.
Speaker
Vishal Anand Kanvaty
Reason
This reveals a crucial insight about implementing AI in high-stakes environments – the counterintuitive approach of accepting lower accuracy to minimize false positives. It shows how responsible AI sometimes means making trade-offs that prioritize user experience over technical metrics.
Impact
This comment introduced the critical concept of balancing technical performance with user impact, influencing the discussion toward practical trade-offs in AI implementation rather than pursuing maximum technical accuracy at all costs.
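The trade-off Vishal describes can be made concrete with a minimal threshold sketch. The scores and labels below are invented for illustration only (this is not NPCI’s model or data): raising the decision threshold flags fewer genuine transactions as fraud, lowering the false-positive rate, at the cost of catching less fraud overall.

```python
# Illustrative only: hypothetical fraud scores, not a real fraud model.
# label 1 = fraud, label 0 = genuine transaction.

def confusion_rates(scores, labels, threshold):
    """Return (false_positive_rate, true_positive_rate) at a threshold."""
    flagged = [s >= threshold for s in scores]
    fp = sum(1 for f, y in zip(flagged, labels) if f and y == 0)
    tp = sum(1 for f, y in zip(flagged, labels) if f and y == 1)
    return fp / labels.count(0), tp / labels.count(1)

scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20, 0.10, 0.05]
labels = [1,    1,    0,    1,    0,    0,    1,    0,    0,    0]

# Aggressive flagging: catches more fraud, but blocks genuine users too.
fpr_low, tpr_low = confusion_rates(scores, labels, 0.5)

# Conservative flagging: fewer genuine transactions tagged as fraud.
fpr_high, tpr_high = confusion_rates(scores, labels, 0.85)

print(fpr_low, tpr_low)    # higher false-positive rate, higher recall
print(fpr_high, tpr_high)  # near-zero false positives, lower recall
```

On this toy data the conservative threshold flags no genuine transactions at all, while the aggressive one blocks a third of them; starting conservative and tightening recall later mirrors the “build trust first” sequencing described in the session.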
So the creators of these technologies have to come together and keep on creating this method and methodology for others to adopt… So I guess it is: creators have to create frameworks so the right technology is created. The users, the big guys, have to quickly tell the methodology.
Speaker
Prativa Mohapatra
Reason
This identifies a critical ecosystem responsibility – that large technology creators have an obligation to develop frameworks that smaller organizations can adopt. It addresses the democratization challenge of responsible AI beyond just the technology itself.
Impact
This comment elevated the discussion from individual organizational responsibility to ecosystem-wide responsibility, prompting other panelists to consider their roles in supporting smaller organizations and the broader industry ecosystem.
Is industry-led governance realistically possible, or is regulatory intervention an inevitability?
Speaker
Shantheri Mallaya
Reason
This question cuts to the heart of a fundamental tension in AI governance – whether the industry can self-regulate effectively or whether external regulation is necessary. It forces panelists to take a position on a critical policy question.
Impact
This question prompted the most direct policy discussion of the session, with panelists having to articulate their views on the role of regulation versus self-governance, leading to nuanced responses about the necessity and inevitability of regulatory frameworks.
Overall assessment
These key comments fundamentally shaped the discussion by moving it from theoretical principles to practical implementation challenges. Andy Parsons’ opening reframing set the tone for the entire session, while the ‘nutrition labels’ analogy provided an accessible framework for understanding complex technical concepts. The panelists’ contributions built on each other progressively – from Amol’s ‘bring your own AI’ concept highlighting decentralization challenges, to Prativa’s ‘ART’ framework providing actionable principles, to Vishal’s insights on balancing accuracy with user impact. The discussion evolved from individual organizational challenges to ecosystem-wide responsibilities, culminating in fundamental questions about governance models. These comments created a coherent narrative arc that took the audience from understanding the ‘why’ of responsible AI to grappling with the ‘how’ of implementation across different scales and contexts.
Follow-up questions
How can systems actually prove that you have been responsible with AI, and what does it cost in terms of implementation and day-to-day usage?
Speaker
Andy Parsons
Explanation
This represents a shift from theoretical principles to practical implementation and measurement of responsible AI practices, which is crucial for enterprise adoption and compliance.
How to balance the safety dial in AI systems – if you dial the safety knob too much, it becomes inconvenient to customers, but you don’t want jailbreaks or prompt injection to happen?
Speaker
Dr. Satya Ramaswamy
Explanation
This addresses the practical challenge of finding the right balance between AI safety measures and user experience in real-world applications.
How to ensure AI compliance teams are properly structured and resourced across legal, business strategy, ethical strategies, and regulatory compliance?
Speaker
Prativa Mohapatra
Explanation
This highlights the organizational restructuring needed for enterprises to properly implement responsible AI practices across multiple domains.
How can industry frameworks and learnings be effectively disseminated to MSMEs who may not have access to the same resources as large enterprises?
Speaker
Amol Deshpande
Explanation
This addresses the equity challenge in responsible AI adoption, ensuring smaller businesses aren’t left behind due to resource constraints.
What specific regulatory framework would work best – light touch regulation versus balanced regulation?
Speaker
Sarika Guliani
Explanation
This was identified as requiring another dedicated session to properly explore the optimal regulatory approach for responsible AI.
How to create domain-specific and function-specific responsible AI templates that can work across different industries?
Speaker
Amol Deshpande
Explanation
This addresses the need for customized approaches to responsible AI implementation across diverse business sectors and use cases.
How to improve consumer awareness and user interfaces for AI transparency features like content credentials?
Speaker
Andy Parsons
Explanation
This addresses the challenge of making AI transparency tools more accessible and understandable to end users.
How to prevent social media platforms from stripping metadata and removing transparency when AI-generated content is uploaded?
Speaker
Andy Parsons
Explanation
This highlights a technical and policy challenge in maintaining content provenance across different platforms and systems.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.