Responsible AI in India Leadership Ethics & Global Impact
20 Feb 2026 18:00h - 19:00h
Summary
The session examined how Indian corporations can move responsible AI from abstract principles to provable practice, emphasizing that trust, transparency and accountability are now foundational ( [1][34] ). Andy Parsons argued that responsible AI must become an operational discipline rather than a mere compliance slide, noting a shift toward “provable practice” ( [33][34] ). He warned of a trust crisis caused by the massive scale of generative AI and said enterprises need to prove how content was created, by which models and tools ( [38-44] ). Parsons introduced the Content Authenticity Initiative and the C2PA open standard, which embeds provenance metadata directly into media files and is backed by a cross-industry coalition including Adobe, Microsoft, BBC and others ( [55-62][66-68] ). He stressed that open, interoperable, non-proprietary standards must be implemented in working code, a point especially relevant for India’s huge digital population ( [70-74] ).
Prativa Mohapatra explained Adobe’s “ART” (accountability, responsibility, transparency) philosophy, describing how provenance checks are baked into products such as Firefly and Acrobat Assistant so that inputs are licensed and outputs can be audited ( [196-204][208-220][224-228] ). She added that coordinated legal, compliance and ethical teams are essential, and that neglecting any pillar threatens future readiness ( [235-239] ). Amol Deshpande highlighted that responsible AI must be orchestrated across all five AI layers, involve people, processes and guardrails, and cannot be a one-size-fits-all solution; instead organisations should offer a “bring-your-own-AI” framework ( [162-170][176-181][187-190] ). Vishal Anand Kanwati described NPCI’s transparent transaction-decline explanations via a language model and affirmed that governance principles such as transparency are non-negotiable for trust in payment systems ( [287-293][295-298] ). Satya Ramaswamy shared Air India’s generative-AI virtual assistant that handles millions of queries, with safety “knobs” and continuous human-in-the-loop monitoring to satisfy global aviation regulations ( [258-262][261-264] ). He argued that complying with diverse international regulations does not hinder innovation, citing the airline’s ability to launch the industry’s first AI assistant while remaining within regulatory bounds ( [341-345][350-354] ).
The panel debated whether industry-led governance can replace regulation; Amol and Vishal stressed the need for standards, awareness and industry partnerships, while both agreed that regulatory frameworks are essential to prevent AI misuse at scale ( [322-329][360-366] ). Sarika Guliani concluded that responsible AI is a commitment beyond compliance, requiring shared human values, cross-sector collaboration and alignment with the “people, planet, progress” agenda, and announced that FICCI will continue to drive the dialogue into action ( [370-376][382-383] ). The discussion underscored that responsible AI must be embedded in products, governed by open standards, and supported by both industry initiatives and regulatory oversight to realize its potential in India’s digital future.
Keypoints
Major discussion points
– From principles to provable practice – the need for concrete standards and transparency
Andy emphasized that responsible AI must move beyond slide-deck principles to “provable practice” and that “you need standards, not just principles” [34-35][109-112]. He presented the C2PA open standard as a concrete example of an interoperable, non-proprietary framework that can embed provenance information directly into content [62-70].
– Content provenance (C2PA) as a concrete case study for responsible AI
The Coalition for Content Provenance and Authenticity (C2PA) provides “content credentials” that travel with media, enabling users to see the full genealogy of an asset: what model created it, which tools were used, and so on [55-66]. The initiative rests on three pillars – transparency, accountability and inclusivity – likened to “nutrition labels” for digital content [75-86].
– Enterprise-level implementation challenges and the “ART” governance model
Amol described responsible AI as an “orchestration of all layers” – technology, people, process and governance – and warned against a one-size-fits-all approach, stressing the need for guardrails and scalable templates [162-166][170-181]. Prativa echoed this with Adobe’s “ART” (Accountability, Responsibility, Transparency) framework, citing product-level examples such as Firefly’s built-in provenance and Acrobat Assistant’s safe-by-design workflow [196-210][221-228].
– Regulation as both catalyst and requirement, balanced with industry-led standards
Andy framed regulation (EU AI Act, US state laws, India’s IT rules) as a “catalyst for good practices” [107-108], while Vishal highlighted the necessity of transparency in transaction decisions and referenced the RBI’s responsible-AI guidelines [286-293]. Satya explained how Air India complies with multiple global aviation regulators while still innovating with a generative-AI virtual assistant [341-354].
– Ecosystem collaboration to bridge large enterprises and MSMEs
The panel repeatedly stressed that industry bodies (FICCI, C2PA, etc.) must disseminate frameworks so smaller players can adopt them. Amol called for “awareness → action → demonstration” and for industry partnerships to cascade guardrails downstream [322-336]. Prativa warned that without such shared standards, a stark divide will emerge between “big guys” and “MSMEs” [291-300].
Overall purpose / goal of the discussion
The session aimed to move the conversation on responsible AI in India from abstract principles to actionable, enterprise-level practices. By showcasing Adobe’s C2PA model, sharing governance approaches from Air India, RPG Group, and NPCI, and debating the interplay of regulation and industry standards, the participants sought to equip Indian corporates with concrete tools, frameworks, and collaborative pathways for deploying trustworthy, inclusive AI at scale.
Overall tone and its evolution
– The opening remarks were formal and aspirational, stressing the urgency of responsible AI [4-6].
– Andy’s presentation adopted an optimistic, solution-focused tone, highlighting a successful open-standard initiative [58-66].
– The panel discussion shifted to a pragmatic and candid tone, acknowledging real-world challenges (uneven adoption, cost, governance complexity) [90-101][162-181].
– As the conversation progressed, the tone became collaborative and constructive, with participants emphasizing shared responsibility, ecosystem support, and the need for balanced regulation [322-336][341-354].
– The closing remarks returned to a hopeful, call-to-action tone, urging continued dialogue and industry commitment [370-384].
Overall, the tone remained constructive throughout, moving from high-level inspiration to grounded, actionable discussion and ending with a collective commitment to advance responsible AI in India.
Speakers
– Announcer – Event announcer/moderator
– Vishal Anand Kanwati – Chief Technology Officer, National Payments Corporation of India (NPCI) – expertise in payments infrastructure and AI-driven fraud detection [S4][S5]
– Sarika Guliani – Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI – expertise in AI policy and industry collaboration [S6][S7]
– Dr. Satya Ramaswamy – Chief Digital and Technology Officer, Air India Limited – expertise in aviation AI applications and safety-critical systems [S8][S9][S10]
– Shantari Malaya – Editor, Economic Times – expertise in technology journalism and AI policy coverage [S11][S12]
– Prativa Mohapatra – Vice President and Managing Director, Adobe India – expertise in responsible AI product development and content authenticity [S13][S14]
– Andy Parsons – Global Head for Content Authenticity, Adobe (runs the Content Authenticity Initiative) – expertise in content provenance and AI transparency [S15][S16]
– Amol Deshpande – Chief Digital Officer and Head of Innovation, RPG Group – expertise in enterprise AI strategy and governance [S18][S19]
Additional speakers:
– None
Opening & Context – Adobe, in partnership with FICCI, opened the session on “Responsible AI from Principles to Practice in Corporate India” [1-2]. The moderator emphasized that India’s current digital moment demands not only rapid AI adoption but responsible deployment, with trust, transparency and accountability described as “foundational” rather than optional [4-6].
Andy Parsons – From Principles to Proven Practice
Parsons framed the central challenge: responsible AI must move from a slide-deck concept to an auditable discipline [33-35]. He warned that 2026 will be the year responsible AI becomes both a duty and an innovation opportunity [21-22] and that organisations will soon be asked not whether they are responsible, but whether they can prove it [31-32]. He highlighted the need to consider implementation cost and day-to-day operational overhead when adopting responsible-AI practices [384-386].
The regulatory backdrop he outlined included the EU AI Act, California law, and India’s new IT rules on synthetically generated information (SGI) [387-389]. He positioned regulation as a catalyst for good practice rather than a barrier [107-108].
Parsons described the trust crisis created by the massive scale of generative AI, noting that enterprises now produce or consume AI-generated content at “extraordinary” volumes [44-45] and that the crisis is “real … happening every day to our children” [390-392]. In India’s “world’s largest digital population” [47-50], synthetic media and misinformation are operational risks for businesses [51-52]. Without the ability to demonstrate what was made, how, and by which models, companies cannot meet corporate responsibility obligations [53-55].
To illustrate a concrete solution, Parsons introduced the Coalition for Content Provenance and Authenticity (C2PA). This cross-industry body, including Adobe, Microsoft, BBC, Sony, Qualcomm and others, has created an open, free, non-proprietary standard that embeds “content credentials” directly into media files [396-398]. The C2PA badge is already visible on LinkedIn posts, signalling provenance to viewers [393-395]. Its three pillars – transparency, accountability and inclusivity – are likened to “nutrition labels” for digital content, providing provenance information such as the generating model, tools used and camera metadata [75-86][70-74].
Parsons acknowledged practical challenges: many social-media platforms strip metadata, undermining provenance [90-98]; consumer awareness of the C2PA symbol remains low [92-95]; and the business case for provenance is challenging because it does not directly generate revenue [100-101].
Panel Introduction – Shantari Malaya – The moderator introduced the panelists (Andy Parsons, Amol Deshpande, Prativa Mohapatra, Satya Ramaswamy, Vishal Anand Kanwati).
Amol Deshpande – Orchestrating Responsible AI
Deshpande argued that responsible AI must be orchestrated across the five layers of the AI lifecycle as understood by the panel and cannot be reduced to a single checklist [162-166]. He stressed the importance of people, processes and guardrails, describing a “bring-your-own-AI” model where each function can adopt suitable templates while the enterprise provides common guardrails [176-183][187-190]. He warned against a “one-size-fits-all” solution, insisting that scalable, sector-specific templates are needed for enterprises ranging from manufacturing to services [180-183][186-188].
Prativa Mohapatra – Adobe’s ART Framework & Product Embedding
Mohapatra explained Adobe’s internal ART (Accountability, Responsibility, Transparency) governance model [196-198]. She said the first pillar of Adobe’s AI governance is “ART”: accountability, responsibility and transparency. Every new Adobe product follows a rigorous, multi-step methodology that embeds provenance at the core. For example, Firefly, Adobe’s generative-AI tool, automatically attaches a “nutrition-label” style provenance tag to every output, guaranteeing that inputs are licensed and that the resulting content can be audited for compliance [208-212][214-220]. Similarly, the Acrobat Assistant inherits the trusted PDF workflow, allowing users to trace the origin of any generated document and ensuring that high-stakes outputs are traceable and legally sound [224-228]. She emphasized that legal and compliance teams must be integrated into AI governance, otherwise an organisation may fall short of future regulatory and risk requirements [235-239].
Satya Ramaswamy – Air India’s Generative-AI Virtual Assistant
Ramaswamy shared Air India’s experience with a generative-AI virtual assistant launched in May 2023, which has handled over 13.5 million queries with a 97% autonomous resolution rate [258-262]. The system employs “safety knobs” that can be dialled to balance user convenience against the risk of inappropriate responses; generative-AI models monitor the assistant’s own performance, and customers are prompted to rate each answer’s appropriateness [260-264]. The airline works with partners such as Adobe to obtain indemnity against failures [263-264] and complies with multiple international aviation regulators (EU, US FAA, Indian DGCA) without letting compliance constrain Indian innovation [341-345][350-354].
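The “safety knob” idea can be illustrated with a minimal sketch. This is not Air India’s implementation; the function name, score ranges, and escalation marker are assumptions used only to show how a single tunable threshold can trade autonomous answering against human review.

```python
# Illustrative sketch (not Air India's actual system): a "safety knob" as a
# tunable threshold that trades convenience against risk.
def route_reply(candidate_reply: str, risk_score: float, safety_knob: float) -> str:
    """
    risk_score:  0.0 (clearly benign) .. 1.0 (clearly inappropriate),
                 e.g. from a moderation model scoring the draft reply.
    safety_knob: 0.0 (most permissive) .. 1.0 (most conservative).
    A higher knob setting lowers the risk tolerated before escalating to a human.
    """
    tolerance = 1.0 - safety_knob
    if risk_score <= tolerance:
        return candidate_reply            # answered autonomously
    return "ESCALATE_TO_HUMAN_AGENT"      # human-in-the-loop fallback

print(route_reply("Your flight departs at 09:40.", risk_score=0.1, safety_knob=0.5))
print(route_reply("some risky draft reply", risk_score=0.8, safety_knob=0.5))
```

Turning the knob up shrinks the autonomous region, which is the trade-off described above: fewer risky answers, but more queries routed to humans.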
Vishal Anand Kanwati – NPCI’s Transparent Fraud-Detection
Kanwati illustrated how transparency can be operationalised in payments. NPCI has built a small language model that explains, in real time, why a transaction was declined, giving consumers clear, understandable reasons for fraud-related decisions [287-291]. He linked this practice to the RBI’s responsible-AI guidelines, stating that “the principles have to be adopted – there is absolutely no choice for us” [295-298]. For him, such transparency is essential to maintain trust in the nation’s digital payments ecosystem [286-293].
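NPCI’s real system uses a small language model; as a much simpler stand-in, a template-based sketch shows the shape of the problem, mapping internal decline codes to plain-language reasons a consumer can understand. All codes and wording below are invented for illustration.

```python
# Hypothetical mapping of a fraud engine's internal decline codes to
# consumer-readable reasons. Codes and phrasing are invented; the real
# NPCI system generates explanations with a small language model.
DECLINE_REASONS = {
    "VELOCITY": "too many transactions were attempted in a short window",
    "DEVICE_MISMATCH": "the transaction came from an unrecognised device",
    "LIMIT": "the amount exceeds your configured limit",
}

def explain_decline(codes: list[str]) -> str:
    """Render a transparent, human-readable explanation for a declined transaction."""
    reasons = [DECLINE_REASONS.get(code, "an unspecified risk check") for code in codes]
    return "Your transaction was declined because " + " and ".join(reasons) + "."

print(explain_decline(["VELOCITY", "LIMIT"]))
```

The point of the design, whether template- or model-based, is the transparency principle described above: every automated decline carries a reason the end user can act on.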
Discussion on MSMEs & Ecosystem – Both Amol and Prativa stressed that the first step for MSMEs is awareness of responsibility, followed by actionable frameworks disseminated through bodies such as FICCI [322-326][328-336]. Amol warned that large enterprises must create reusable compliance frameworks because smaller firms lack dedicated legal or AI-ethics teams [304-307]; Prativa echoed that without shared standards a stark divide will emerge between “big guys” and “MSMEs” [291-300]. The panel agreed that industry consortia should cascade templates and best-practice guidance to lower-resource organisations [328-336][316-321].
Global vs Indian Regulatory Alignment – Satya noted that complying with EU, US and Indian DGCA regulations does not stifle Indian innovation [341-345]. Vishal argued that mandatory safeguards are essential to prevent AI from “going berserk” in critical financial systems [322-327][360-366].
Regulation vs Self-Governance – A broad consensus emerged that regulation is inevitable and can act as a catalyst for good practice, but “principles alone are insufficient” – concrete, interoperable standards are required [107-108][109-112][328-336][360-366]. Tension remained between Andy’s advocacy for a universal open standard (C2PA) [62-66][70-71] and Amol’s view that industry-specific templates are necessary [180-183][328-336].
Closing – Sarika Guliani – Guliani framed responsible AI as a value-driven commitment that goes beyond a compliance checkbox, linking it to the broader “people, planet, progress” agenda [370-376]. She thanked the panelists and announced that FICCI will continue to facilitate dialogue and drive collaborative actions to translate the discussed principles into concrete industry initiatives [382-383].
Key take-aways
– Shift from abstract AI principles to provable, operational practice across people, process, technology and governance [31-35][162-170].
– Importance of open, interoperable, non-proprietary standards such as C2PA content credentials for building trust in AI-generated media [75-86][70-74][396-398][393-395].
– Adobe’s ART framework shows how accountability, responsibility and transparency can be baked into product lifecycles (Firefly, Acrobat Assistant) [196-210][221-228].
– Continuous human-in-the-loop monitoring and adjustable safety guardrails are critical for high-risk deployments (Air India, NPCI) [260-264][287-291].
– Regulation is viewed as a catalyst, not a constraint, and must be complemented by industry standards to avoid fragmented compliance [107-108][328-336][360-366].
– Ongoing challenges include metadata preservation, consumer awareness of provenance symbols, and the resource gap for MSMEs [90-98][304-307].
– Sector-specific implementations provide practical road-maps for responsible AI at scale.
Unresolved issues – Raising widespread consumer awareness of provenance symbols; providing affordable, reusable compliance toolkits for MSMEs; balancing “light-touch” regulation with mandatory safeguards; and designing detailed human-in-the-loop processes for safety-critical AI systems. The panel suggested a combined approach: baseline regulatory safeguards, open-standard adoption (e.g., C2PA), and industry-led dissemination of sector-specific templates to ensure both interoperability and flexibility [328-336][360-366][322-329].
Thought-provoking remarks – Andy’s 2026 prediction; his challenge to prove responsible AI; the description of C2PA credentials as an open, free, cross-industry standard; Amol’s “one size doesn’t fit all” reminder; Prativa’s “nutrition-label” analogy; Satya’s use of generative AI to monitor its own assistant; Vishal’s language model that explains transaction declines; and the consensus that regulation can be a catalyst rather than a hindrance [21-22][31-32][58-59][62-66][70-71][180-183][186-188][78-82][260-262][287-291][107-108][341-345].
The panel left with a shared commitment to embed open standards, sector-specific guardrails, and regulatory compliance into AI products, ensuring that responsible AI becomes a practical, measurable capability across India’s corporate ecosystem.
Welcome to this session titled, Responsible AI from Principles to Practice in Corporate India, presented by Adobe in association with FICCI. India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and productivity. But the real differentiator, is it about how quickly do we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite a guest. Andy Parsons, our Global Head for Content Authenticity at Adobe.
Andy, over to you.
Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.
I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for innovation for all of you in this room. This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.
And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week and certainly will continue to change this year, from should we be responsible with AI? I think that debate is well settled at this point, but can your systems actually prove that you have been responsible with AI, and how do you go about doing that? And for all of us, what does it cost in terms of implementation and day-to-day usage? So we want to position responsible AI today as a leadership and operating must and discipline rather than a regulatory obligation, although in 2026, I think it will be both.
So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across a cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Pratibha in a moment.
But it’s made the trust problem absolutely impossible to ignore. So I think every enterprise in this room now produces or consumes AI-generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that.
Hundreds of millions of people consuming digital content every day. In this environment, synthetic content and AI-generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to provide some leadership in the C2PA, which hopefully many of you have heard about this week, which is the Coalition for Content Provenance and Authenticity.
And our sort of piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing totally free. So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, digital experience products, but had to be baked into the tools kind of at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others.
And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have encountered a content credential, and it will provide transparent context about a piece of media, whether it’s a video, audio, or image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross-industry coalition that includes all the companies I mentioned, Meta, camera manufacturers like Sony, Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others.
And the goal is for an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company. It should be standards-based. It should not be proprietary, but available to everyone. And this shared philosophy, I think, is especially important here in India. And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website. So what are the principles that the Content Authenticity Initiative and the C2PA reflect? Transparency: provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used.
Simple ideas like knowing that a photograph is actually a photograph and not generated. These are simple ideas. I’d say we’re well overdue having this ability, but we need it now more than ever. Accountability: you can trace those AI models, understand what was used, how it was used, and thereby understand the provenance if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children. No one’s going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.
Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India can apply the same kind of provenance at the same zero cost as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI transparency and responsible AI will present you, and I’m sure you’ll hear more about that with our esteemed panel. But adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here. We’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it.
Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are phones and cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I’ve often said that doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money. But it is critically important. And now we’re seeing that change as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.
What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. Responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference that we’re here to talk about today, the difference between principle and practice. I think you need cross-industry infrastructure. Techniques you use for responsible AI should be interoperable, open, and standardized. And I think you have a long track record here in India of mobilizing things like UPI payment infrastructure that not a single bank could do or a single government agency, but required massive scale cooperation for openness, standards, and most importantly, interoperability.
And last, enterprises, like many of yours in this room, that are willing and excited to go first, that really look at transparency and AI responsibility as an opportunity, not a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again, and in 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things, and that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries because we have representatives from all of them on our excellent panel today.
So we’re fortunate to have leaders from Air India, NPCI, RPG Group, and Adobe, each of whom is navigating and translating the sort of remarks that I’ve made in different ways for different outcomes and providing, I would say, exemplary ways to find their pathway through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantari Malaya, editor at the Economic Times, who covers these infrastructure and societal sort of breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me. And let’s get on to our panel. Shantari.
Thank you so much, Andy. That was fantastic. It set the context very rightly for the next discussion coming up. So a warm welcome once again from my end. My name is Shantari Malaya. I’m editor at Economic Times. Welcoming you all to the panel, right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI at a very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI, industry, infrastructure, policy perspectives, it really becomes important to know what some premium leaders in the country are thinking about this. So this dialogue will examine some parts of responsible AI and how it’s really going to shape, reshape enterprise strategy.
So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanwati, Chief Technology Officer, National Payments Corporation of India, NPCI. We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe India. This is a fantastic lineup and we shall get some very sharp insights from our panelists over the next half hour or so. So at the very outset, if you really see, building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether it’s fairness, accountability, transparency, privacy and inclusivity, are really going to actually realistically be translated into enterprise strategy frameworks and how we are going to go about it.
Right. So Amol, let me call you into the discussion. Warm welcome. I’m keeping a tight watch on the timer right behind us; as we know, this is a summit of scale and we really need to help the organizers clock good time. So Amol, very quickly, as part of the RPG Group you represent enterprises that are deploying AI at scale, and many more that are just finding their footing as well. So there’s a huge spectrum and diversity in the kind of organizations you represent. Two things for you very quickly. One is, in a house of multiple businesses such as the RPG Group, how are you preventing responsible AI from becoming either a mere centralized compliance exercise or, on the flip side, a fragmented, business-unit-wise checklist?
So there are two risks that can happen in a group, in a conglomerate: over-centralized or fragmented. So how are you looking at the balance here? And how do you see your role in an industry body as well? So, all yours.
Thank you, Shantari. I’m very happy to be here, and thank you for having me with these esteemed panelists. You ask a very pertinent question. It’s a to-be-or-not-to-be kind of scenario when it comes to AI, except that “not to be” is not really a choice. You mentioned responsible AI, and I would take a little stab at peeling that back and looking at where responsible AI comes from in industry. It comes across all five layers of the AI stack. When any enterprise is deploying AI and being responsible for its usage, whatever you are using it for, the responsibility needs to be there at every layer.
It’s not one or the other; it has to be an orchestration of all of them. So far, AI in its very nascent forms has been a thing of centres of excellence, trying use cases and seeing what happens, but now it has come to scale. And when it comes to enterprises, and manufacturing enterprises in particular, where we have a significantly higher share as consumers of AI technologies, there is a very clear-cut view on how it is to be done. So you need to provide the playground for the enterprise; it has to operate and function with agility. The other part is about people. People are a very, very important stakeholder in the whole thing.
We are moving from AI/ML to generative AI and more complex agentic AI scenarios. So people are a very, very important aspect of it; the choice still remains with humans. Awareness is important, and enterprises like us spend a significant amount of time and effort building those skill sets across the value chain of all the people involved. Last but not least is the process and governance part that comes with it. It’s more a set of guiding principles that need to be given so that it creates opportunity. If one can put it that way, it will be more of a bring-your-own-AI kind of scenario in every function.
You cannot provide one solution; one size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment protected with guardrails is the key thing for us: orchestration, and getting to scale. Those templates are being exercised and practised within a very diverse group like RPG ourselves, and then they can be deployed at multiple levels.
Absolutely. So as you said, one size doesn’t fit all, and I liked your coinage of bring-your-own-AI. So let me quickly bring in Prativa here. Welcome, Prativa. Adobe has consistently positioned responsible AI as a product philosophy and a trust commitment. You may just have to switch that on. So Adobe has constantly spoken about responsible AI as a fairly large commitment. How do you see these principles manifesting, in terms of operationalizing them across all your product teams and sending that out as a strong positioning internally? And at the same time, as someone who’s led a lot of industry conversations, where do you see enterprises struggling to get these things right?
So all yours here.
Okay, thank you. So I think Andy set the context, and since we are here not to learn about principles but practices, everybody should go back with certain practices. The first practice of AI governance that we follow is ART: accountability, responsibility and transparency. So if every person goes back to their organization and talks about ART, which is our philosophy, that’s practice number one. And we have actually been doing this for our own products for a while now. Of course, we have been in the business of content for a very, very long time, and now that same content is becoming the currency everybody is debating. So our principles have been there for a while; the question is how they are actualized.
And let me say how it’s translated into our products. It’s in our products and in our methodologies: every new product we have goes through a very strong, secure methodology with hundreds of steps inside it, so the principles are embedded into how we create things. But a couple of examples. Firefly, which is our Gen AI tool, actually embeds what Andy described: Content Credentials, like a nutrition label. Anything generated out of this product will carry that nutrition label. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law or getting into any liability issues. Because AI is all about the input and the output. The input has to be something that will not land you in trouble; you cannot take somebody else’s data, so here everything is licensed. It goes into the models, and then you have to test the output that comes out: with that output, will we be accountable? Will we be responsible in showing the transparency of how it was created? So that loop has to be created in using any AI. Firefly is one example. Let me talk about Acrobat, which everybody has; I’m sure 100% of you have PDF files on your phones or your machines. So Acrobat has this new feature called Acrobat Assistant.
It is agentic, and there are so many chatbots in the market, but when you come to an assistant like Acrobat Assistant, it follows the same principles with which PDF was created, so everybody is confident when using it. You would have read in the papers recently that the Supreme Court was very worried that certain lawyers had filed petitions referencing cases that do not exist, or stating laws that are fictitious. So imagine somebody has created content using sources that were not authentic. Now, if you use Acrobat-like products for that, you feed the data, or files from your own machine.
So you’re confident that whatever comes out of it, you can trace back. So wherever there is high-stakes, enterprise-grade output, you have to look at this input-output process and follow the philosophies within it. Amol already talked about people, process, technology. I’m sure every organization today has a legal team and a compliance team, but these teams have to be re-equipped to talk about AI compliance. Enterprises do business strategy, they have ethical strategies, and then they have regulatory compliance. In anything that you do in AI, ensure that you tick all three. If you miss any one, you might not be ready for the future.
So that’s how I see it.
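The licensed-input, audited-output loop described above can be sketched in code. This is an illustrative sketch only, not Adobe’s implementation; the `Asset`, `AuditRecord` and `generate` names, and the trivial stand-in model, are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Asset:
    """A hypothetical input asset with a licence flag."""
    name: str
    licensed: bool


@dataclass
class AuditRecord:
    """Records which inputs produced which output, for later transparency."""
    inputs: list
    output: str


def generate(assets, prompt, model=lambda p: f"generated:{p}", audit_log=None):
    """Gate generation on licensed inputs and log the output for auditing.

    Rejects any unlicensed input before the model runs (the input side of
    the loop), then records what was produced (the output side).
    """
    unlicensed = [a.name for a in assets if not a.licensed]
    if unlicensed:
        raise ValueError(f"unlicensed inputs rejected: {unlicensed}")
    output = model(prompt)
    if audit_log is not None:
        audit_log.append(AuditRecord(inputs=[a.name for a in assets], output=output))
    return output
```

The point of the sketch is the shape of the loop: nothing unlicensed goes in, and everything that comes out is traceable to its inputs.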
Absolutely. So I guess the thread of most of what AI now entails is: are we not moving fast enough, or are we moving so fast that we’re not really able to own the operational consequences of what we set out to do? Great point on that, Prativa; I’ll circle back to you, time permitting. Dr. Satya, calling you in here. Aviation volumes, landscape, scale, you name it, it’s all there. So how are you balancing AI-driven innovation with regulation, accountability and operational efficiency, when at the same time you cannot compromise on user and customer experience?
How do these things really fall in place in terms of vision and metrics?
Thank you, Shantari. Since the audience is international, a real quick introduction to Air India. Air India is India’s national flag carrier. We operate about 300 aircraft and carry more than 100,000 customers a day, and we have a few hundred airplanes on order. Once they are delivered, we will be one of the biggest airlines in the world, comparable in size to one of the three large American carriers. So we are building an airline of scale, and that brings very interesting challenges of the kind we talked about. Let me illustrate how we handle it with one of our own examples in generative AI. In May of 2023, we launched the global airline industry’s very first generative AI virtual assistant, out of India.
So it was a global first in the whole airline industry. It has handled about 13.5 million queries from customers so far, about 40,000 queries a day, and it operates at roughly one-hundredth of the per-query cost of a contact centre. If you look at customer preferences over the last two and a half years of operating this, facing all the challenges you mentioned: 50% of the contact volume goes to the contact centre, where customers want to talk to a human agent; the remaining 50% comes to AI.g, which handles 97% of those queries autonomously, with only 3% escalated further to an agent. So it is a pretty high success rate. And we faced this challenge from day one. We started working on it in November 2022, when the Azure OpenAI services became available, and our whole approach to responsibility and safety has evolved over time. If you dial the safety knob too high, it becomes an inconvenience to the customer; we practically cannot answer any question, because customers keep changing the way they ask things, and we have to be very flexible. Generative AI takes us a large step towards that.
At the same time, we don’t want any jailbreak to happen, we don’t want prompt injection to happen, and we don’t want anything inappropriate to happen. So we are watching the performance of the virtual assistant, AI.g as we call it, all the time. In fact, we use generative AI to watch the performance of the generative AI chatbot. We have also given a voice to the customer: when we send a response, we ask the customer whether it answered their question and allow them to react to whether it was appropriate or inappropriate. Thankfully, over the last two and a half years it has not answered a single question inappropriately, because we have embedded all the safety procedures deep into the way we handle it. And now, as the technologies mature, we have interesting options such as prompt firewalls, where we can centralize all these controls, and we obviously work with great partners like Adobe, who do their diligence in the way they have deployed some of these technologies, giving us full indemnity in the event of a problem.
That gives a lot of confidence in the way we manage the risk. So it’s about managing the risk of something out of bounds happening versus the convenience of the customer, and we handle it in a variety of ways, as I just described.
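The “safety knob” trade-off described above, between blocking too much and answering too freely, can be illustrated with a minimal routing sketch. The function and parameter names are hypothetical, not Air India’s system; in practice the risk score would come from a moderation model, and the feedback log would feed continuous human-in-the-loop monitoring.

```python
def route_query(query: str, risk_score: float, safety_knob: float = 0.7) -> dict:
    """Answer autonomously when estimated risk is below the safety knob;
    otherwise escalate to a human agent. Lowering the knob makes routing
    more conservative (more escalations, fewer autonomous answers)."""
    if risk_score >= safety_knob:
        return {"route": "human_agent", "query": query}
    return {"route": "assistant", "answer": f"auto-reply to: {query}"}


def record_feedback(log: list, query: str, helpful: bool) -> None:
    """Customer thumbs-up/down on a response, used to monitor the
    assistant's behaviour over time."""
    log.append({"query": query, "helpful": helpful})
```

The design choice the sketch captures is that the knob is a single tunable threshold: it trades customer convenience against the chance of an out-of-bounds answer, exactly the tension the panelist describes.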
Excellent. And given the kind of scale you’re operating at, I think every day is a new day.
Yes, it is. We face challenges. There is something brand new every day.
Absolutely, well stated. Thank you, Dr. Satya. Vishal, may I call you into the discussion? We’re waiting to hear from you. NPCI runs the largest digital payments infrastructure platforms, and you effectively call the shots, for want of a better coinage, on how the payment systems in this country move. So two quick questions, or rather let me phrase them as one, so we can get a comprehensive view from you. How are you looking at AI in terms of being inclusive and ensuring fairness, on two fronts?
One is how India can play an important part in creating responsible AI by design for a national digital infrastructure platform such as yours. And two, given the volume, scale and size, fraud also becomes an unfortunate part of the discussion. So how are you looking at AI being fair and at the same time proactive in detecting fraud? What are the aspects you watch most keenly here?
I think we had to start slowly and accept that accuracy could be a little lower, but the false positives, that is, a genuine transaction being tagged as fraud, should not be very high. That was the first principle on which we started. Over a period of time, once we had more data and once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So that was the fundamental principle, and once we started with this success, we were able to understand customers and their patterns better, which gave us a lot of insight into fine-tuning the models and taking it forward.
Absolutely. So coming to the first question you asked, obviously the governance principles are core to it, but I would like to call out two things. One is transparency. If a customer has a transaction that has failed, they should know why it failed. Today we have built a small language model where you can go and actually chat and ask what happened to a transaction and why it was declined. Even if it was declined due to suspicious activity, we can tell the customer: you don’t normally send this kind of transaction, or you have never scanned a QR code before and this is the first time you’re doing so, and that is why we declined it. This level of transparency, and ensuring the customer gets an answer, is very, very important. Obviously we can’t have an army of people sitting and answering these questions, so building systems to answer them is what matters. And I think we have a beautiful framework: the RBI has also given a framework for responsible AI, and the MeitY document is fairly comprehensive. So all the principles have to be adopted; there is absolutely no choice for us. And I don’t see it as a challenge at all, because in our experience it has been very helpful in ensuring that trust in the payment system is not compromised.
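The decline-transparency idea described above can be sketched as a reason-code-to-plain-language mapping. NPCI’s actual system is a small language model, but the shape of the guarantee is the same: every decline maps to an explanation a customer can read. All names and reason codes below are hypothetical.

```python
# Hypothetical reason codes mapped to customer-facing explanations.
DECLINE_EXPLANATIONS = {
    "first_time_qr": (
        "You have never scanned a QR code before, and this first-time "
        "pattern looked suspicious, so the payment was held."
    ),
    "unusual_payee": (
        "You do not normally send money to this payee, so the "
        "transaction was flagged for your safety."
    ),
    "limit_exceeded": "The amount exceeds your configured transaction limit.",
}


def explain_decline(reason_code: str) -> str:
    """Return a plain-language explanation for a declined transaction,
    with a safe generic fallback for unknown reason codes."""
    return DECLINE_EXPLANATIONS.get(
        reason_code,
        "The transaction could not be completed. Please try again later.",
    )
```

A language model would phrase the explanation more flexibly, but the fallback branch illustrates the same non-negotiable: no decline goes unexplained.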
Absolutely, and given the scale you’re operating at, I’m itching to ask you something about the human in the loop there, but maybe I’ll pick your brains offline; that’s a discussion for another time. So Prativa, curious to know: responsible AI, while it remains there in letter and spirit, do you think it risks getting relegated to something of a large-enterprise luxury? Is it able to cut across and come down the line to aspiring businesses, growing businesses, MSMEs? Is it able to cut through the noise? What’s the responsibility of industry leaders and the larger enterprises in harmonizing a framework that can define responsible AI? How would you look at this?
Is it at risk of that?
Absolutely, yes. I think we stand at a time when, among the creators of AI technology, the divide between the big guys and the small guys might just become very stark. Coming down to the users of AI, which is the big enterprises and the MSMEs who are in a big rush to make profit and do something, that divide can happen too. Hence it is the responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It’s a tongue twister, but the responsibility is very, very big right now. Again, to take Adobe’s example: while the entire AI big bang started happening after November 2022, so early 2023-24, our models were there, and our Content Authenticity Initiative dates from 2019.
So I think the large enterprises that create technologies are absolutely responsible, and those frameworks now being taken up by many more is again an act of responsibility back to the business. The creators of these technologies have to come together and keep creating these methods and methodologies for others to adopt. Now, for the users of these enterprise-grade AI technologies, it’s very hard. Ten years back we had digital transformation; now we are having AI transformation. So the big companies have to quickly create a new org structure and build up legal teams which, by the way, had only just mulled over the digital guidelines of various continents and countries, and now have to go through the AI guidelines of those countries.
So you have to infuse more people into those legal teams. Small organizations cannot do that. The people, process and technology changes required to adopt this, the big guys can manoeuvre: they can shift people, take people out from here and put them there. The MSMEs don’t have that luxury. So the creators have to create frameworks so the right technology is built; the users, the big guys, have to quickly establish the methodology; and then the other stakeholders, like the service providers, also have to move quickly. I come from an industry where we went from custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?
Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since AI, as we keep hearing, is a civilizational change, similar to electricity and steam, it will change everything. Because of that impact at the level of society, on each one of us, governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem.
Absolutely. Very rightly stated, the fact that there is a larger collective responsibility on the bigger players in defining the standards; I think that is very critical. Amol, if I may ask you: like Prativa said, there is an accountability. MSMEs are growing, and there is also policy that supports that growth at this point in time. So in their hurry to scale and innovate, they often forget what guardrails and consequences they will face when it comes to their AI policies, strategies and implementations. So what’s the role of the industry, industry bodies and the ecosystem at large in helping responsible AI move in letter and spirit?
Shantari, I think the first step towards being responsible for anything is awareness. So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for it. That’s the first thing. Second comes the action part: awareness, action, and then you demonstrate it through your products and services and generate that kind of impact. How does that cascade down? I echo the sentiment Prativa mentioned here: the big players have to come up with those frameworks. Those frameworks need to get translated, and industry partnership through the industry bodies is a very key thing here.
That is where the learnings have to be disseminated. Second, it’s more of a demand-and-supply kind of thing. If the supply comes with the right guardrails and responsible aspects as part of the framework, then naturally the suppliers start aligning to it. For a business like ours, which deals in everything from infrastructure to healthcare, and IT to agriculture and tyres, it is a very diverse set, and there are different kinds of templates we need. Organizations like us have the responsibility of creating a framework that will be fair to us as well as to our customers and partners, and those learnings can be shared across the value chain, where MSMEs may not have access to that kind of information, and domain-specific information at that.
Mind you, this will change. It’s not as if one guardrail construct will work for everybody; it will vary from industry to industry and function to function, and that kind of cascading through industry bodies like FICCI and others is, I think, very, very critical.
Thank you for that, Amol. Dr. Satya, as we know, there are now plenty of global regulations, rules and recommendations: the EU AI Act, UNESCO’s recommendations, the OECD principles, and so on and so forth. And India is also inching towards developing its own strategies, policies and approaches. So the real leadership question that remains is: how do we marry global best practices with the diversity, the scale, the fire in the belly that India has at this point in time? We are really gearing up to go. And besides, of course, we have a lot of domestic, industry-wise regulation as well; we have regulators, we even have the DPDP Act. We have so many things that have come in. How are we going to marry all of this and create harmony?
Absolutely. Taking Air India as the example: we are an international airline, so we operate in many countries. We go to North America and the US, where the FAA is the key regulator; we go all over Europe; and obviously we operate in India, where the DGCA is the regulator, and they are doing a great job overseeing this industry; and likewise in other parts of the world. So by nature we are geared to looking at the regulation in all parts of the world and being in compliance.
Our customers are international, and when we operate in these international geographies, we have to comply with the appropriate regulations. And again, aviation is a very safety-critical industry, right? What we do has a direct impact on the safety of the customers we carry, and these notions are well embedded in the industry because it is highly regulated. For example, many of these planes can practically land themselves. I was in a simulator last week for an Airbus A320, landing at San Francisco airport. As we were coming in, the plane was set up at seven miles from touchdown, and my trainer pilot gave me the controls.
So I could run the plane practically on autopilot all the way. At the same time, there is a red button on the joystick, so at any moment, if I feel the airplane is not doing the right thing, that the autopilot is not doing the right thing, I can quickly cancel and take back control. This concept is well embedded in the airline industry. We know that we need to obey the regulations and do the right thing that is safe for the customer, but we also let the human in the loop take control the moment we feel safety is at risk. So, bottom line, we comply with all the regulations, and it doesn’t in any way constrain Indian innovation.
For example, as I mentioned, we launched the global airline industry’s first generative AI virtual agent out of India, and we have not had any challenge with any of the regulations, because we comply with all of them and we work with partners who approach it the same way.
So, Vishal, taking a thread from what Dr. Satya said, I’ll close with you here. Is industry-led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in related industries who said it’s very difficult to answer this, but that self-regulation may be a way forward given the scale we are operating at. I’d like to know your thoughts here.
Yeah, I think regulations are definitely required, especially because AI can go berserk. Like the example I gave you on transactions: in theory, all the UPI transactions could get declined. And that’s where we have a check where we say this is the only percentage that I can decline, even if I have to let go of the other transactions. So those safeguards are very much required, and when this has to work across the ecosystem, I think regulations are mandatory. Obviously it has to be consulted on, and we have to work with everyone, but it’s important.
While all of us realize it’s a great opportunity and the innovation can really scale up, I think regulation is something we have to take on as part of our initiatives, embed into our systems and then take forward; otherwise the chances of this becoming a challenge for us are really high.
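The percentage cap on declines described above can be sketched as a guard that bounds the share of traffic a fraud model may block, so a misbehaving model cannot decline everything. This is a minimal sketch under assumed names, not NPCI’s implementation.

```python
class DeclineRateGuard:
    """Safeguard sketch: cap the share of transactions a fraud model may
    decline. Once the cap would be exceeded, flagged transactions pass
    through anyway, so the model cannot 'go berserk' and block all traffic.
    """

    def __init__(self, max_decline_rate: float = 0.02):
        self.max_decline_rate = max_decline_rate
        self.total = 0
        self.declined = 0

    def decide(self, flagged: bool) -> str:
        """Return 'decline' only if the model flagged the transaction AND
        the projected overall decline rate stays within the cap."""
        self.total += 1
        projected = (self.declined + 1) / self.total
        if flagged and projected <= self.max_decline_rate:
            self.declined += 1
            return "decline"
        return "allow"
```

Real systems would track the rate over a sliding window and per channel, but the invariant is the one the panelist states: the decline rate is bounded by design, whatever the model says.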
Fair enough. I think in a maturing economy, regulatory intervention is an inevitability that we must welcome at some level. So, a great discussion; this was fantastic, and I’m itching to ask you more, but I think we’ll have to call this discussion to a close. Thank you so much; let’s put our hands together for our esteemed panelists. This was really nice. May I now invite Ms. Sarika Guliani, Senior Director and Head of AI, Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI. Thank you so much, Sarika, please.
First of all, what an insightful session. I would say that if I start capturing the thoughts, two minutes and thirty-six seconds would not do them justice. But overall: Andy, you mentioned the initiatives Adobe has been able to take through and how you are responsibly developing content. Prativa talked about ART, which is really interesting, covering accountability, responsibility and transparency. Amol mentioned all the five layers and how the responsible development of AI needs to be worked through across each of them. Dr. Satya, no second thoughts there, and the same goes for NPCI: the kind of work the national carrier of India is doing, and that NPCI is handling, has to be a balance of responsible AI, efficiency, and the action actually taken. We left it on the question of what regulation is required; that alone would need another session, because you would have people arguing light-touch regulation versus balanced regulation. As part of FICCI, and from the discussions we heard here, we feel that responsibility is no longer a compliance check; it is a commitment to develop technology that carries shared human values. The decisions we take now, not merely the words we discuss here, will define our future; it is what we choose to create that will define it, so the choice is what matters. You have heard the panelists talk about everything from the input side to the output side, with a very good example taking us through the whole process. So we simply feel that, at whatever layer, technology has to be developed keeping people in mind, and the theme of the summit, people, planet and progress, should be kept in mind in any technological innovation, along with the principles of responsible AI.
That is something we strongly feel and support, and with that I would like to thank our esteemed panelists. Thank you, Andy, of course, for joining us today; Shantari, for moderating it and capturing it well in time; and of course our panelists: Dr. Satya Ramaswamy, Chief Digital and Technology Officer, Air India; Mr. Amol Deshpande, Chief Digital Officer and Head of Innovation, RPG Group; Mr. Vishal Kanwati, CTO, NPCI; and Ms. Prativa Mohapatra, Managing Director, Adobe. And of course my lovely audience, with whom we could sail through this. As part of FICCI, we are thankful that we could have this joint session with Adobe, the team at Adobe, and Nita and Nanya, who worked with my team, and the people in the background who delivered it.
So thank you all for joining us. We don’t end the discussion here; we end the session here. FICCI is committed to taking this dialogue further into action with the support of the players, and we look forward to your joining in at that time. Thank you. Thank you. Thank you.
### From Principles to Practice

A central theme was the need to move beyond abstract principles toward concrete implementation tools, technical standards, and practical governance mechanisms.
“2026 will be the year responsible AI becomes both a duty and an innovation opportunity.”
The knowledge base notes that by 2026 questions of AI responsibility and trust will move from afterthoughts to central concerns, and that AI is expected to reshape management and organisational design in that year, confirming the report’s view of 2026 as a pivotal moment [S104] and [S105].
“India’s new IT rules on Synthetic and Generated Intelligence (SGI) require transparency in AI‑generated content.”
India’s Synthetic and Generated Intelligence (SGI) regulations have been announced, mandating transparency so users can distinguish AI-generated content, matching the report’s description of the SGI rules [S108].
“The regulatory backdrop includes the EU AI Act.”
The EU AI Act is identified in the knowledge base as a key piece of AI regulation, confirming its presence in the regulatory landscape referenced by the speaker [S109].
“Trust, transparency and accountability are foundational for responsible AI deployment in India.”
Other sources stress that trust infrastructure is as critical as technical infrastructure and that accountability, transparency, rule of law and explainability are essential for AI governance, providing additional context to the claim [S59] and [S102].
“Responsible AI must move from a slide‑deck concept to an auditable discipline.”
Discussion of AI governance in 2026 highlights the need for clear accountability mechanisms and auditable practices, adding nuance to the report’s framing of responsible AI as an auditable discipline [S104].
The discussion shows strong convergence among speakers on four pillars: (1) regulation as a catalyst paired with concrete standards; (2) transparency/provenance embedded in AI products; (3) the necessity of industry‑specific, flexible frameworks; (4) operationalising responsible AI through baked‑in product features and capacity building. These alignments cut across AI, data governance, and the enabling environment for digital development, indicating a mature consensus that can drive coordinated policy, standard‑setting and industry collaboration.
High consensus – most speakers echo each other’s positions, suggesting a unified industry stance that can facilitate rapid development of interoperable standards, supportive regulatory frameworks, and ecosystem‑wide capacity initiatives.
The panel largely shares a common vision of responsible AI as essential and sees regulation, standards, and industry collaboration as necessary. However, clear points of contention arise around whether a single open standard can serve all sectors versus the need for industry‑specific templates, the extent to which regulation should drive governance versus industry‑led self‑regulation, and how serious the current trust crisis truly is.
Moderate – while there is broad consensus on goals, the differing views on implementation pathways (universal standards vs sector‑specific frameworks; industry‑led governance vs mandatory regulation; perception of trust risk) could affect the speed and coherence of policy and product roll‑outs. These divergences suggest that coordinated multi‑stakeholder dialogue will be needed to reconcile approaches before large‑scale adoption can proceed smoothly.
The discussion was driven forward by a series of pivotal remarks that moved the dialogue from abstract ethics to concrete, actionable frameworks. Andy Parsons’ framing of a 2026 deadline and the demand for provable responsibility set a sense of urgency and introduced the need for measurable standards, which anchored the rest of the conversation. Subsequent comments—especially the introduction of the C2PA open standard, Amol’s ‘bring‑your‑own‑AI’ flexibility, Prativa’s product‑level provenance labels, Satya’s meta‑AI monitoring, and Vishal’s transaction‑explanation model—provided tangible examples that illustrated how principles can be embedded across industries. These insights prompted participants to explore implementation challenges, the role of regulation as an enabler, and the importance of industry collaboration. Collectively, the highlighted comments shaped the session into a forward‑looking, solution‑oriented exchange, emphasizing that responsible AI is not merely a compliance checkbox but a strategic, technically grounded capability that can be scaled across India’s diverse enterprise landscape.
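The C2PA standard mentioned above works by binding provenance metadata cryptographically to the media itself, so that claims about how content was created can be verified rather than merely asserted. The real standard embeds signed manifests (JUMBF boxes) directly in the media file and uses certificate-based signatures; the sketch below is only a conceptual illustration in Python, not the C2PA format. The shared `SIGNING_KEY`, the HMAC scheme, and all field names are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for the sketch; real C2PA signing uses X.509 certificates.
SIGNING_KEY = b"demo-signing-key"

def attach_provenance(content: bytes, tool: str, model: str) -> dict:
    """Build a provenance manifest binding the content hash to its creation context."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_with": {"tool": tool, "model": model},
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the manifest matches these exact content bytes."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

media = b"example generated image bytes"
m = attach_provenance(media, tool="Firefly", model="image-v1")
print(verify_provenance(media, m))         # True: content and manifest agree
print(verify_provenance(media + b"x", m))  # False: content was altered after signing
```

The point of the design is that any alteration of the bytes invalidates verification, which is what allows downstream consumers of media to prove how content was created, by which tools and models, rather than trusting an unverifiable label.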
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.