Responsible AI in India Leadership Ethics & Global Impact part1_2
20 Feb 2026 18:00h - 19:00h
Responsible AI in India Leadership Ethics & Global Impact part1_2
Summary
The session opened with the moderator emphasizing that responsible AI-grounded in trust, transparency and accountability-is now a foundational requirement for Indian enterprises [1-6]. Andy Parsons of Adobe framed the discussion as a shift from abstract AI principles to “provable practice,” noting that 2026 will see responsible AI become both a regulatory duty and a business opportunity [33-34][20-21]. He described Adobe’s leadership in the Coalition for Content Provenance and Authenticity (C2PA), an open, cross-industry standard that embeds provenance metadata directly into media files, enabling anyone to verify how content was created [54-62]. The C2PA’s core principles-transparency, provenance, accountability and inclusivity-are presented as “nutrition labels” for digital content, allowing users to trace models, tools and data behind each asset [74-80][81-84]. Andy also warned of uneven adoption, metadata stripping by platforms, low consumer awareness and the difficulty of building a profitable business case for provenance, arguing that standards, not merely principles, are needed to move forward [90-99][108-110].
In the panel, Amol Deshpande highlighted that responsible AI must be orchestrated across all five AI layers, involve people, processes and technology, and cannot rely on a single “one-size-fits-all” solution, coining a “bring-your-own-AI” approach [162-166][177-180]. Prativa Mohapatra explained Adobe’s internal “ART” framework (accountability, responsibility, transparency) and gave concrete examples such as Firefly, which tags generated outputs with “nutrition” metadata, and Acrobat Assistant, which ensures traceable, lawful document creation [197-199][209-214][224-228]. She stressed that legal and compliance teams must redesign their workflows to embed AI governance throughout the input-output lifecycle, otherwise enterprises risk falling short of future regulatory expectations [235-238].
Satya Ramaswamy described Air India’s generative-AI virtual assistant that has handled 13.5 million queries with a 97 % autonomous success rate, while continuous safety monitoring and customer feedback loops prevent jailbreaks and inappropriate responses [257-263][264-268]. He noted that partnerships with firms like Adobe provide “prompt firewalls” and indemnities that boost confidence in managing AI risk at airline scale [269-271]. Vishal Anand Kanvaty of NPCI emphasized transparency for declined transactions, using a language model to explain reasons to users, and argued that regulatory safeguards are essential to prevent false-positive fraud decisions and maintain trust in the payments ecosystem [293-298][370-376].
Across the discussion, participants agreed that industry-led standards, cross-sector collaboration and regulatory frameworks are all necessary to translate responsible-AI principles into operational practice, especially for MSMEs that lack internal resources [332-340][379-383]. Sarika Guliani of FICCI reiterated that responsible AI is a commitment to shared human values and that the “people, planet, progress” agenda must guide future innovation, with FICCI pledging to advance the dialogue into concrete action [379-383][389-390]. Overall, the dialogue underscored that moving from principle to practice requires open standards, robust governance, and coordinated regulation to ensure trustworthy AI deployment across India’s diverse enterprise landscape [108-110].
Keypoints
Major discussion points
– From principles to provable practice – The panel framed responsible AI as moving beyond abstract ethics to demonstrable compliance, driven by new regulations such as the EU AI Act, California law and India’s IT rules, and positioning it as both a leadership imperative and a regulatory requirement [30-33][105-110][108-113].
– Open, cross-industry standards for transparency – Adobe highlighted the C2PA (Coalition for Content Provenance and Authenticity) as an open, free standard that embeds provenance metadata directly into media assets; this model is being baked into Adobe products (e.g., Firefly, Acrobat) to give enterprises verifiable “nutrition labels” for AI-generated content [54-66][61-70][209-219].
– Implementation challenges and governance needs – Speakers noted uneven adoption, metadata stripping by platforms, low consumer awareness, and the difficulty of building a business case for provenance. They stressed the necessity of robust governance, guardrails, and a shift from “check-list compliance” to operational frameworks [90-99][105-110][158-166].
– Sector-specific responsible-AI deployments – Real-world examples were shared: Air India’s generative-AI virtual assistant that balances safety knobs, continuous monitoring, and human-in-the-loop escalation [257-270]; NPCI’s transparent fraud-prevention model that explains transaction declines and leverages AI while insisting on regulatory safeguards [286-301][370-376]; and RPG’s “bring-your-own-AI” approach that stresses orchestration across data, people, process and technology layers [162-180][185-190].
Overall purpose / goal
The session aimed to translate high-level responsible-AI principles into concrete, enterprise-ready practices for Indian corporations. By showcasing standards, regulatory trends, and concrete industry pilots, the discussion sought to equip leaders with actionable frameworks and to foster a collaborative ecosystem that can scale responsible AI across sectors.
Overall tone
The conversation began with an optimistic, forward-looking tone, emphasizing opportunity and collaboration. As speakers moved into challenges-such as uneven adoption, regulatory pressure, and implementation costs-the tone became more cautionary yet remained constructive, focusing on solutions and shared responsibility. The closing remarks returned to a hopeful, commitment-driven tone, urging continued dialogue and collective action.
Speakers
– Vishal Anand Kanvaty
– Role/Title: Chief Technology Officer, National Payments Corporation of India (NPCI)
– Area of Expertise: Digital payments, AI-driven fraud detection and responsible AI governance [S1]
– Sarika Guliani
– Role/Title: Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI
– Area of Expertise: AI policy, industry standards, responsible AI implementation [S3]
– Dr. Satya Ramaswamy
– Role/Title: Chief Digital and Technology Officer, Air India Limited
– Area of Expertise: Aviation technology, AI-enabled customer service, safety-critical AI systems [S5]
– Shantheri Mallaya
– Role/Title: Editor, Economic Times (Panel Moderator)
– Area of Expertise: Journalism, technology policy, AI ethics and industry discourse [S8]
– Prativa Mohapatra
– Role/Title: Vice President and Managing Director, Adobe India
– Area of Expertise: Product governance, responsible AI, content authenticity and AI-driven creative tools [S11]
– Andy Parsons
– Role/Title: Global Head for Content Authenticity, Adobe (runs the Content Authenticity Initiative)
– Area of Expertise: Content provenance, AI transparency, standards development (C2PA) [S13]
– Amol Deshpande
– Role/Title: Group Chief Digital Officer and Head of Innovation, RPG Group
– Area of Expertise: Digital transformation, enterprise AI strategy, responsible AI implementation [S15]
– Moderator
– Role/Title: Session Moderator (unnamed)
– Area of Expertise: Event facilitation, AI discussion moderation [S19]
Additional speakers:
– Nita – mentioned in closing remarks; no role or expertise specified in the transcript.
– Nanya – mentioned in closing remarks; no role or expertise specified in the transcript.
The session, presented by Adobe in association with FICCI, opened with moderator Shantari Mallaya (Economic Times) welcoming participants to “Responsible AI from Principles to Practice in Corporate India.” She framed trust, transparency and accountability as “foundational, not optional” for India’s accelerating digital transformation [5-6].
Andy Parsons, Global Head for Content Authenticity at Adobe, set the tone by declaring 2026 the year responsible AI becomes both a regulatory duty and a strategic opportunity. He highlighted that the EU AI Act’s enforcement provisions take effect in August, that California’s first AI law is already in force, and that India’s new IT rules on SGI are being implemented, shifting the business question from “should we be responsible?” to “can you prove you are responsible?” [24-33]. Parsons introduced Adobe’s leadership in the Content Provenance and Authenticity (C2PA) credentials, an open, free, cross-industry standard that embeds provenance metadata directly into media files, enabling anyone to verify a piece of content’s origin, model and tools [55-62]. He described this “nutrition-label” approach as essential for India’s massive digital population, where synthetic content and AI-generated misinformation pose real operational risks. He also warned of challenges: social-media platforms often strip metadata [89-92], consumer awareness of provenance symbols remains low [95-99], and building a profitable business case for provenance remains challenging [108-110]. Consequently, he argued for standards-based infrastructure rather than mere principles, and likened regulation to a catalyst that pushes good practice without being punitive [105-108].
After the opening, Mallaya positioned the panel as a deep dive into translating responsible-AI principles-fairness, accountability, transparency, privacy and inclusivity-into concrete enterprise strategies [144-150].
Amol Deshpande, Chief Digital & Innovation Officer, RPG Group, responded that responsibility must be orchestrated across the five AI layers (data, model, inference, deployment, monitoring) and cannot rely on a single solution. He advocated a “bring-your-own-AI” approach, where each function selects appropriate guardrails while the organisation supplies a scalable, safe environment and governance templates adaptable to diverse business units [162-166][177-184]. He emphasized people as the critical stakeholder, calling for extensive up-skilling to embed human judgement into increasingly complex generative and agentic AI systems [169-176].
Prativa Mohapatra, Vice-President & Managing Director, Adobe India, outlined Adobe’s internal ART (Accountability, Responsibility, Transparency) philosophy and how it is baked into product development pipelines through hundreds of validation steps. Across Adobe’s portfolio-Firefly and the Acrobat Assistant-every AI-generated output carries a content-credential tag that confirms licensing, data compliance and model traceability, thereby shielding enterprises from legal liability and requiring legal and compliance teams to redesign workflows to embed AI governance throughout the input-output lifecycle [209-218][224-232][235-238].
Satya Ramaswamy, Chief Digital and Technology Officer, Air India, illustrated a sector-specific deployment: a generative-AI virtual assistant launched in May 2023 that has handled 13.5 million customer queries with a 97 % autonomous success rate. The system balances a “safety knob” that prevents jailbreaks and inappropriate responses with a seamless user experience, using generative AI both to serve customers and to monitor its own performance. He likened the design to an autopilot/red-button safety-critical analogy, emphasizing human-in-the-loop oversight and “prompt firewalls” provided through Adobe partnerships that bolster risk management without stifling innovation [257-274][332-336].
Vishal Anand Kanwati, CTO, National Payments Corporation of India (NPCI), described AI-driven fraud detection that maintains fairness. NPCI began with a low false-positive threshold and, through data-driven model refinement and industry collaboration, achieved higher accuracy. A small language model now explains to users why a transaction was declined, delivering transparency that builds trust in the payments ecosystem. He stressed that regulatory safeguards are indispensable to prevent AI from “going berserk” and referenced the RBI’s responsible-AI framework as a guiding standard [286-293][298-302][370-376].
Points of Agreement
* All speakers endorsed the need for transparent provenance of AI-generated content – via C2PA credentials (Andy) [55-62], Adobe’s ART-driven content-credential tags (Prativa) [209-218], and NPCI’s transaction-explanation model (Vishal) [286-293].
* They concurred that open, standards-based infrastructure and reusable frameworks are essential for scaling responsible AI, with industry bodies such as FICCI, C2PA and RBI playing pivotal dissemination roles [66-70][297-304][332-340][344-347].
* Regulation was uniformly seen as a catalyst that must coexist with innovation (Andy) [105-108].
* Both Satya and Amol highlighted the critical importance of human-in-the-loop oversight and adjustable guardrails for safety-critical applications [180-182][360-362].
Points of Disagreement
1. Regulation intensity – Vishal argued that mandatory safeguards are essential to prevent harmful AI behaviour [370-376]; Sarika Guliani cautioned that regulation should be balanced and proportionate [379-382]; Andy positioned regulation as a catalyst that encourages good practice without being punitive [105-108].
2. Scope of standards – Andy promoted a single, open C2PA standard as the foundation for provenance [55-62]; Amol counter-argued that “one size does not fit all”, advocating sector-specific templates and a “bring-your-own-AI” model [168-180]; Prativa warned that without free, universally accessible frameworks the divide between large enterprises and MSMEs would widen [297-304].
3. Primary driver of adoption – Amol emphasized an awareness → action → demonstration pathway, with industry bodies disseminating frameworks [332-340]; Vishal insisted that regulation is indispensable for ecosystem safety [370-376]; Sarika stressed that responsible AI is a commitment to shared human values, not merely a compliance checkbox, and should be guided by the “people, planet, progress” agenda [383-389].
Key Take-aways
– Responsible AI must move from high-level principles to provable, operational practice.
– Transparent provenance, enabled by open standards such as C2PA, is a cornerstone for trust.
– Effective governance requires coordinated people, process, technology and industry-body layers, not a simple checklist.
– Emerging regulations (EU AI Act, India’s IT rules, state-level AI laws) act as catalysts that should coexist with innovation.
– Sector-specific pilots-Air India’s AI assistant, NPCI’s fraud-explanation service, RPG’s flexible governance, Adobe’s ART-driven products-demonstrate practical pathways.
– Without open, free frameworks, responsible AI risks becoming a luxury for large firms, leaving MSMEs behind.
Closing Remarks
Sarika Guliani (FICCI) concluded that responsible AI is a commitment to shared human values rather than a mere compliance checkbox, and that the “people, planet, progress” agenda must guide all technological innovation. FICCI pledged to continue the dialogue and translate the insights into concrete actions for the Indian ecosystem [383-389][389-390].
The moderator thanked the panelists and the audience, signalling that the conversation will move from discussion to implementation.
I’d like to welcome you all to this session titled Responsible AI from Principles to Practice in Corporate India presented by Adobe in association with FICCI. India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and productivity. But the real differentiator, is it about how quickly do we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite Andy Parsons, our Global Head for Content Authenticity at Adobe.
Andy, over to you.
Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.
I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity. I’m going to talk about that in a minute. I’m going to talk about that in a minute. I’m going to talk about that in a minute. of you in this room. This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.
And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week and certainly will continue to change this year, from should we be responsible with AI? I think that debate is well settled at this point. But can your systems actually prove that you have been responsible with AI, and how do you go about doing that? And for all of us, what does it cost in terms of implementation and day -to -day usage? So we want to position responsibility, responsible AI today as a leadership and operating must and discipline rather than a regulatory obligation, although in 2026, I think it will be both.
So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across a cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Pratibha in a moment.
But it’s made the trust problem absolutely impossible to address. So I think every enterprise in this room now produces or consumes AI. generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that. Hundreds of millions of people consuming digital content every day.
In this environment, synthetic content and AI -generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to provide some leadership. I’m very privileged to have leadership in the C2PA, which hopefully many of you have. have heard about this week, which is the Coalition for Content Provenance and Authenticity. And our sort of piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing totally free.
So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, digital experience products, but had to be baked into the tools kind of at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others. And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have the C2PA content credentials.
I’ve encountered a content credential, and it will provide transparent context about a piece of media, whether it’s a video, audio, or video. or image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross -industry coalition that includes all the companies I mentioned, meta camera manufacturers like Sony, Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others. And the goal is for an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company.
It should be standards -based. It should not be proprietary, but available to everyone. And this shared philosophy I think is especially important here in India. And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website. So what are the principles that the Content Authenticity Initiative and the CTPA reflect? Transparency. Provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used. Simple ideas like knowing that a photograph is actually a photograph and not generated.
These are simple ideas. I’d say we’re well overdue having this ability, but we need it now more than ever. Accountability. You can trace those AI models, understand what was used, how it was used, and thereby understand the prominence if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children. No one’s going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.
Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India. We can apply the same kind of provenance at the same zero cost. as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI transparency and responsible AI will present you, and I’m sure you’ll hear more about that with our esteemed panel. But adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here. We’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it.
Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are telephones, there are phones, cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I have often said that doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money. I’m not sure if that’s true. I’m not sure if that’s true. I’m not sure if that’s true. But it is critically important. And now we’re seeing that change as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.
What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. Responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference that we’re here to talk about today, the difference between principle and practice. I think you need cross -industry infrastructure. Techniques you use for responsible AI should be interoperable, open, and standardized. And I think you have a long track record here in India of mobilizing things like UPI payment infrastructure that not a single bank could do or a single government agency, but required massive scale cooperation for openness, standards, and most importantly, interoperability.
And last, enterprises. Like many of yours in this room, that are willing and excited to go first that really look at transparency and AI responsibility as an opportunity, not a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again. And in 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things. And that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries, because we have representatives from all of them on our excellent panel today.
So we’re fortunate to have leaders from Air India and PCI, RPCI, and the U .S. Department of Defense, and Adobe. each of whom is navigating and translating the sort of remarks that I’ve made in different ways for different outcomes and providing, I would say, exemplary ways to find their pathway through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantari Malaya, editor at the Economic Times, who covers these infrastructure and societal sort of breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me. And let’s get on to our panel. Shantari.
Thank you so much, Andy. That was fantastic. It set the context very rightly for the next discussion coming up. So a warm welcome once again from my end. My name is Shantari Malaya. I’m editor at Economic Times. Welcoming you all to the panel. Right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI. very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI, industry, infrastructure, policy perspectives, it really becomes important to know what some premium leaders in the country are thinking about this. So this dialogue will examine some parts of responsible AI and how it’s really going to shape, reshape enterprise strategy.
So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanwati, Chief Technology Officer, National Payments Corporation of India, NPCI. We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe at India. This is a fantastic lineup and we shall get some very sharp insights from our panelists over the next half hour or so. So at the very outset, if you really see building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether it’s fairness, accountability, transparency, privacy and inclusivity, is really going to actually realistically be translated into enterprise strategy frameworks and how we are going to go about it.
Right. So, Amol, let me call you into the discussion. Warm welcome. I’m keeping a tight watch on the timer right behind us. As we know, this is a summit of scale and we really need to help the organizers to clock good time. So, Amol, very quickly, as I invite you into this discussion, you represent enterprises at, you know, while you are participating. As part of the RPG group, you also represent enterprises that are deploying AI at scale. and many more that are just finding their footing as well. So there’s a huge spectrum and diversity in the kind of organizations that you are representing. Two things for you very quickly. One is in large multiple, you know, businesses such as the RPG group, how are you really preventing responsible AI from really becoming something like a just mere centralized compliance exercise or something that’s on the other flip side becoming a fragmented business unit wise, you know, checklist.
So there are two risks that can happen, right? In a group, in a conglomerate, large scale or decentralized. So how are you really looking at the balance here? And how do you really see your role in an industry body as well? So all yours.
Thank you, Shantiri. I’m very happy to be here. Thank you for having me with the, you know, esteemed panelists here. You ask a very pertinent question. You know, it’s to be or not to be, kind of a scenario when it comes to AI, but that not to be is not really a choice. and it did mention about the responsible AI and I would take a little stab at peeling and looking at where the responsible AI comes from when it comes to industries. It comes across all those five layers of the AI when we are looking at it. When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using for, the responsibility needs to be there at every layer.
It’s not one or the other. It has to be an orchestration of all the things. So far, AI in its very nascent forms had been a thing of center of excellences, trying use cases and seeing what it is happening, but now it has come to a scale. And when it comes to enterprises and manufacturing enterprises, which we have significantly higher share in terms of as a consumer of AI technologies, there is a very clear -cut view on how it is to be done. So you need to provide the playground for the enterprise, to operate function with agility. The other part is about people. The people is a very, very important stakeholder in the whole thing.
We are moving from generative AI to AI ML, more complex scenarios and agentic AI. So people is a very, very important aspect of it. The choice still remains with human sense. The awareness is important and enterprises like us spend a significant amount of time and effort in building that skill sets amongst the value chain of all the people who are doing it. Last but not least is the process and governance part which comes with it. It’s more of a guiding principles which need to be given so that it gives an opportunity. It’s more about, you know, if one can say it will be more of a bring your own AI kind of a scenario in every function.
You cannot provide one solution. One size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment with protected with guardrails is a key thing for us. Orchestration and getting a scale. That’s something which is there. And that. But those templates are being exercised, practiced within the enterprises, practice it in a very diverse group like RPC at ourselves. And then that can be deployed at multiple
Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me quickly bring in Prativa here. Welcome, Prativa. So Adobe is consistently really positioned responsible as a product philosophy and a trust commitment. You may just have to switch that on. So Adobe is constantly spoken about responsible AI as a very fairly large commitment. So how do you really look at all these principles manifesting or panning out in terms of operationalizing it among all your product teams? And sending it out. as a strong positioning internally? And at the same time, as someone who’s led a lot of industry conversations, where do you really see enterprises struggling to get these things right?
So all yours here.
Okay, thank you. So I think Andy said the context. And since we are here not to learn about the principles, but the practices, I think everybody should go back with certain practices. So the first practice of AI governance, which we practice, is art. Which is accountability, responsibility, and transparency. So if every person goes back to their organizations and talks about art, which is our philosophy, that’s practicing philosophy number one. And we actually have been doing this for our own products for a while now. And of course, we are in the business of content for a very, very long time. And now the same content is becoming the currency which everybody’s debating. So our principles have been there for a while.
But. How it is actualized? and let me say how it’s translated into our products. And by the way, it’s in our products, it’s in our methodologies. Every new product that we have goes through a very strong, secure methodology with hundreds of steps inside it. So there’s principles embedded into how we create stuff. But a couple of examples. Firefly, which is our Gen AI tool, actually embeds what Andy said, those content traditions. Nutritions. It is anything that you have being generated out of this product will have that nutrition level. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law, you will not be getting into any liability issues.
Because how you do it is by feeding. Because AI is all about the input and output. So the input has to be something which will not land you in the trouble. You cannot take somebody else’s data. So here it is everything licensed. So it goes into the models, and what you create, the output which comes, then you have to test that output. With that output, will we be accountable? Will we be responsible in showing the transparency of how this was created? So I think that loop has to be created in using any AI. Firefly is an example. Let me talk about Acrobat, which everybody has. I’m sure 100 % of you have PDF files on your phones or on your machines.
So Acrobat has this new feature called Acrobat Assistant. It is agentic, but we have so many chatbots in the market. But when you come to an assistant like Acrobat Assistant, it is following the same principles that PDF was used to be created. So everybody is confident when you’re using PDF. So today, you would have read in the papers recently, the Supreme Court was very worried. that there were certain lawyers who had the petitions which had reference to cases which do not exist or it had certain laws stated which are fictitious. So imagine somebody’s created certain content using some sources which were not authentic. Now, if you use acrobat kind of products for that, you feed the data or you feed files from your own machine.
So you’re confident that what comes out of it, you can go back. So wherever there is this usage of high -stake output, enterprise -grade, you have to look at this input -output process and follow the philosophies within it. And I think for every enterprise who is doing that today, they really have to already, Amal talked about the people process technology. I’m sure every organization today has a legal team, has a compliance team. But these teams have to re -opt and re -design and re -design to talk about AI compliance. Enterprises… they do business strategy, they have ethical strategies and then they have regulatory compliance. All the three anything that you do in AI, ensure that you tick all the three.
If you miss any one, you might not be ready for the future. So that’s how I see it.
Absolutely. So I guess the thread of most of what AI now entails is about are we not moving fast enough or are we moving so fast that we’re not really able to own the operational consequences of what we’re trying to do as well, setting out to do as well. So great point on that Pratibha. I’ll circle back to you, time permitting. Let’s see how best we can get back. Dr. Satya, calling you in here. So aviation volumes, landscape, scale, I mean you name it, it’s all there. So how are we really looking at balancing AI driven innovation really? Where you’re looking at regulation, you’re looking at accountability, you’re looking at operational efficiency. At the same time, you cannot really compromise on user and customer experience.
How do these things really fall in place in terms of vision and metrics?
Thank you, Shantiri. Since the audience is international, a real quick introduction about Air India. Air India is India’s national flag carrier. We operate about 300 aircraft, and we carry about more than 100 ,000 customers a day. And we have a few hundred airplanes on order. So once they are delivered, we will be one of the biggest airlines in the world on the size of one of the large three American carriers. So we are building it up to an airline of scale, and it brings about very interesting challenges that we talked about. So let me illustrate the way we handle it with one of our own examples in generative AI. So in May of 2023, we launched the global airline industry’s very first generative AI virtual assistant out of India.
So it was a global first in the whole airline industry. Today, it has handled about 13 .5 million queries so far from customers. about 40 ,000 queries a day and it operates at a cost which is 100 per query of a contact center and if you look at the customer preferences over a period of last two and a half years we have been operating this facing all the challenges you mentioned so from a customer preference perspective 50 % of the contact volume goes to the contact center they want to talk to human agent the remaining 50 % of the contact volume comes to A .G out of which it handles 97 % of the queries autonomously only 3 % are escalated further to the agent so pretty high success rate and we faced this challenge from day one we started working on it in November 2022 when the whole Azure OpenA services are available we started working on it and the whole approach to responsibility safety has evolved over a period of time so if you dial the safety knob too much then it is an inconvenience to the customer we practically cannot answer any question because the customers are always changing the way they ask certain thing and we have to be very flexible and clearly Generative AI takes us a large step towards that.
At the same time, we don’t want any jailbreak to happen. We don’t want problem injection to happen. We don’t want any inappropriate thing to happen. So we are watching the whole performance of the Gen A virtual assistant, A .G as we call it, all the time. So we use, in fact, Generative AI to watch the performance of the Generative AI chatbot and also we have given the voice to the customer, right? So at the end of the day, when we send a response, we also ask the customer, you know, did it answer your question? And also allow them to give their reactions, right? Is it appropriate, inappropriate? And thankfully, over the last 20 years, it has not answered one single question in an inappropriate fashion because we have embedded all the safety procedures all deep into the way that we handle it.
But now we are, as the technologies are maturing, for example, we now have… interesting technologies in terms of the prompt firewalls where we can centralize all these controls and obviously work with great partners like Adobe who do their diligence and the way that they have deployed some of these technologies, giving full indemnity to us in the event of a problem. That gives a lot of confidence in the way that we manage the risk. So it’s about managing the risk of something that is not within the bounds happening versus the convenience of the customer and we handle it in a variety of ways like I talked about just now.
Excellent. And given the kind of scale you’re operating at, I think every day is a new day. Yes, it is. We face challenges. There is new, brand new every day. Absolutely well stated. Thank you, Dr. Satya. Vishal, may I kind of call you into the discussion? We’re waiting to hear about you. Again, what do I say? NPCI, you know, largest, you know, digital payments, infrastructure platforms. you kind of call the shots in terms of taking some of those calls, you know, for want of a better coinage, you know, in terms of how the payments systems in this country move. So two quick questions here, or rather I’ll sort of, you know, phrase it into one so that, you know, we can get a comprehensive view from you.
How are you really looking at AI in terms of really, you know, being inclusive, you know, in trying to ensure fairness when it comes to two parts? One is when you’re looking at how India can play an important part in creating a responsible AI by design for a digital, national digital infrastructure platform such as yours. And B, how are we really looking at, given the volume, scale, size, fraud also becomes an unfortunate part of the entire, you know, discussion. So how are we really looking at AI to be, fair at the same time proactive and detective when it comes to looking at fraud? So how, what are the aspects that you look at? keenly here
I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high. I think that was the first principles on which we started. But over a period of time, once we had more data, once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So I think this was the fundamental principles when, you know, I mean, and then once we started with the success, we were able to understand the customers better, you know, their patterns better. And that gave us a lot of insights into, you know, fine tuning the models and taking it forward.
Absolutely. So coming to the first question, you know, that you asked, I think, you know, I think there are obviously the governance principles are core to it. But I think I would like to call out two of the things that is there. One is a transparency. If a customer has a transaction that is failed, I think you should know why it has been failed. and today we build a small language model where you can go and actually chat and say what happened to this transaction why is it declined and you know even if it is declined due to a fraudulent transactions uh you know due to a suspicious activity we can actually tell him today saying that you know this is where we feel you know you normally do you know don’t send this transaction or you don’t scan a qr ever this is the first time you’re doing so this is the reason why we have sort of declined so this level of transparency and ensuring there’s a person you know obviously we can’t you know have the army of people sitting and answering these questions but building systems and answering those questions is very very important and i think we have a beautiful framework rba has also given the framework from a responsible ai meaty document is fairly comprehensive so i think all the principles have to be adopted there is absolutely no choice for us um and and i think i i don’t see it as a challenge at all because in our experience it’s been very very helpful to obviously ensure the trust in the payment system is not compromised
absolutely and also the fact that you know as you said given the scale that you’re operating in I’m also itching to ask you some things but maybe I’ll pick your brains offline in terms of the human in the loop there but yeah that’s a discussion for another time so Pratibha curious to know here so responsible AI while you know in letter and spirit remains there do you think it kind of remains or rather getting relegated to the risk of becoming something of an enterprise large enterprise luxury is it able to cut across and come down the line to you know aspiring businesses growing businesses MSMEs is it able to cut through the noise what’s the responsibility of industry leaders and the larger enterprises in kind of harmonizing the framework that can define responsible AI How would you look at this?
Is it getting risked?
Absolutely, yes. I think we stand in that time when on the creators of the AI technology, the big guys versus the small guys, that divide might just become very stark. Coming down to the users of the AI, which is the big enterprises and the MSMEs who are in a big rush to make profit, do something, so that divide can happen. Hence, it is responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It’s a tongue twister, but I think the responsibility is very, very big right now. Again, that Adobe’s example, while the entire AI big bang started happening after November 2022, so early 2023 -2024, but I think our models were there, or our content authentication initiative was from 2019.
So, I think that’s a big thing. I think that large enterprises who create technologies absolutely are responsible. And those frameworks now being taken up by many more is again an absolutely act of responsibility to back to the business. And so the creators of these technologies have to come together and keep on creating this method and methodology for others to adopt. Now the users of these enterprise AI -grade technologies. It’s very hard. I mean, I think 10 years back we had digital transformation. Now we are having AI transformation. So the big companies have to quickly create a new org structure, have to create the legal teams, which, by the way, had to just mull over the digital guidelines of various continents, countries, now have to go through the AI guidelines of countries.
So you have to infuse more people into that. Legal teams. So small organizations cannot do that. So the people, process, technology changes that is required to adopt this, big guys can maneuver, shift people, oh, we will take out people from here, put there. The MSMEs don’t have that luxury. So I guess it is creators have to create frameworks so the right technology is created. The users, the big guys, have to quickly tell the methodology. And then the other stakeholders, like the service providers, also quickly have to go from, like I come from this industry where we had custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?
Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since AI, as we have been hearing words like it’s a civilization change, it is similar to electricity and it will change everything. So because of the impact… at society level to each one of us, the governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem. Absolutely.
Very rightly stated about the fact that there is a larger collective responsibility from the bigger players in trying to define the standards. I think it’s very critical. Amol, if I may ask you, like Pratibha said that there is an accountability. MSMEs are growing and there is also policy that supports the growth at this point of time, right? So in their hurry to really scale and innovate, they are often forgetting what guardrails and consequences they have to face when it comes to their AI policies, strategies, implementations. So what’s the role of the ecosystem? The industry, industry bodies and the entire ecosystem at large in helping responsible AI moving in letter and spirit. So
Shantiri, I think the first step towards being responsible towards anything is awareness, right? So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for that. That’s the first thing. Second is comes the action part of it, awareness, action, and then you demonstrate it through your product services or whatever you are trying to create and generate that kind of impact. How does that allocate? I echo the sentiment which Pravita has mentioned here is in terms of big players have to come up with those frameworks. Those frameworks need to get translated and industry partnership is a very key thing here through the industry bodies, right?
Where the learnings have to be disseminated into this. Second is it’s more of a demand and supply kind of a thing. So if the supply is there with the kind of right guardrails and responsible aspects which are there as a part of framework, then naturally the supplier starts aligning to it. For a business like us where we deal from, infrastructure to healthcare. and IT to agriculture, tires. See, it is a very diverse element and there is a different kind of templates which we need to do so. Organizations like us have the responsibilities of creating a framework which will be fair to us as well as to our customers and partners and those learnings can be shared across the value chain where MSMEs may not have access to that kind of an information and that to domain specific.
Mind you, this would change. It’s not like this one guardrail construct will work for everybody, but it would vary from industry to industry, function to function, and that kind of a cascading through the industry bodies like FICCI and others is I think very, very critical. Absolutely. Thank
you for that Amol. Dr. Satya, as we know now that there are enough global regulations or rules or recommendations that have come in. You have the EU AI Act, you have UNESCO’s recommendations, you have the OECD rules, I mean the principles, so on and so forth, and India is also kind of inching towards developing its own strategies, policies, and approaches at this point of time. so the real leadership question at this point of time that remains is how are we looking at marrying global best practices with the diversity the scale, the fire in the belly that India has at this point of time, we are really gearing to go but how are we really looking at it and besides of course we have a lot of domestic regulations industry wise as well, we have regulators we have even the DPDP Act, we have so many things that have come in how are we going to marry all of this and create harmony absolutely, I
think taking Air India, we are an international airline so we operate in many countries, for example we go to North America, US, where the federal aviation regulation is the key regulator, then we go to Europe all places in Europe, then obviously we operate in India where the DGCA is the regulator and they are doing a great job overseeing this industry and likewise in other parts of the world, so by nature we are geared to looking at the regulation in all parts of the world and I think and being in compliance, and our customers are international, and when we operate in this international geographies, we have to comply with the appropriate regulations. And again, you know, aviation is a very safety -critical industry, right?
So what we do has a direct impact on the safety of the customers that we carry. And the notions are well embedded in the industry because of this, because it’s highly regulated. For example, even, you know, many of these planes can practically land themselves, right? I was in a simulator last week for an Airbus A320 and landing that plane in San Francisco Airport. And as we were coming in, the plane was set at seven miles from touchdown, and, you know, my trainer pilot gave me the control. And, you know, so I could run the plane practically on autopilot all the way. At the same time, you know, there is a red button in the joystick, so at any moment I feel that the airplane is not doing the right thing.
the autopilot control is not in the right thing and quickly cancel and take our control, right? So that is – this concept is well embedded in the airline industry. So we know that we need to obey the regulations and do the right thing that is safe for the customer, but we also help the human in the loop take control if at any moment we feel the safety is at risk. So bottom line, you know, we comply with all the regulations, and it doesn’t in any way constrain Indian innovation. For example, again, like I mentioned, we launched the global airline industry’s first -generation virtual agent out of India, and we have not had any challenge with any of the regulations around because we comply with all the regulations and we work with partners who approach it in the same spirit like Adobe.
Absolutely.
So, Vishal, taking a thread from what Dr. Satya said, I’ll close with you here. Is industry -led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in some other related industries where they said, you know, it’s very difficult to answer this. But, yeah, self -regulation may be a way forward given the scale that we are operating on. I’d like to know your thoughts
Yeah, I think definitely the regulations are required, especially because AI can go berserk. And, you know, there are, I think, like I gave you the example on the transactions. You know, today all the UPI transactions can get declined, you know. And that’s where we have a check where we say, you know, this is the only percentage that I can decline, even if I have to let go of the other transactions, right. So those safeguards are very much, very much required. And when this has to be across the ecosystem, I think, you know, the regulations are mandatory. And obviously it has to be consulted and we have to work with everyone. But it’s important. While all of us realize.
it’s a great opportunity the innovation can really scale up I think regulation is one thing that I think we have to really take it as part of the initiatives, embed into our systems and then take it forward otherwise the chances of this having a challenge to us is really high
Fair enough, I think in an economy that is maturing regulatory intervention is an inevitability that we must welcome at some level so great discussion, I think this was fantastic, itching to ask you more but I think we’ll have to call to a close this discussion, thank you so much let’s put our hands together to our esteemed panelists this was really nice may I now invite Ms. Sarika Guliani who is the Senior Director, Head AI Technology in Industry 4 .0, Communications, Mobile Manufacturing and Language Technologies at FICCI Thank you so much Sarika, please
first of all what an insightful session I would say that if I start capturing the thoughts I think 2 minutes and 36 seconds would not justify that one but overall if I talk about it starting with Andy you mentioning about the initiatives what Eduvi was able to take in through and how you are responsibly developing the content Pratibha talking about the art which is really interesting whether you talk about accountability responsibility and transparency and of course Amol mentioning about all the five layers and how the of course the responsible development of AI and the permutation needs to be done Dr. Satya no second thoughts to it and that again goes to the NPCI also the kind of work which the national carrier of India is doing it or NPCI is handling it it has to be a balance of the responsible AI and the efficiency and actually the act which can be taken and the agency itself and the so we left it on the note that what regulation is required while that’s a sentence would require another session on this because they would have a people who will talk about a light touch regulation versus a balanced regulation to something I’ll say that as part of FICCI and as part of the discussions what we were hearing it here we feel it that responsibility is not anymore a compliance check which is supposed to be there it’s a commitment of the technology that we should develop it which has a shared human values that is something what decisions we take now not in terms of the words what we are discussing here is not going to define our future it’s not something which is what we create we define what we choose to create is something what will get defined that is something is very important so the choice comes out from the whole thing you have heard the panelist talking about whether it was from the input side to the output side give a very good example by taking it through the whole process it needs to be taken so we simply feel it whether it’s any layer it has to be developed in a way keeping people in mind and the theme of the summit people planet and progress should be kept in mind while doing any technological innovation which is keeping the principles of responsible AI into the mind.
That is something which we highly feel it and support it and I think with that I like to thank our esteemed panelists. Thank you Andy of course for joining us today. Shantri for moderating it and well capturing in time. And of course our panelists Dr. Satya Ramaswamy, Chief Digital and Technology Officer Air India. Mr. Amol Deshpande, Chief Digital Officer Head of Innovation RPG Group. Mr. Vishal Kanwati, CTO, NPCI Ms. Pratibha Mohapatra, Managing Director Adobe and the of course my lovely audience through which we could sail through this and as part of FICCI we are thankful that we could have this joint session with Adobe, the team of Adobe and Nita and Nanya who had worked with my team and the people at the background who delivered it.
So thank you all for joining us. We don’t end the discussion here. We end the session here. FICCI is committed to get this dialogue further into action with the support of the players. And we look forward to your joining in that time. Thank you. Thank you. Thank you.
And our customers are international, and when we operate in this international geographies, we have to comply with the appropriate regulations. And again, you know, aviation is a very safety -critical…
EventThis comment reframes the entire discussion from theoretical principles to practical implementation. It shifts the focus from whether organizations should adopt responsible AI to how they can demonstr…
EventGlobal AI Governance Initiatives: Directions and TrajectoriesGlobal AI governance initiatives are heading toward multiple parallel frameworks with an emphasis on moving from principles to practice, th…
Event# Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges Charlie Halford: Thank you very much. Hello, everybody. Yes, I’m Charlie, as Miguel has let you…
Event– **Implementation Challenges Across Jurisdictions**: Participants highlighted the tension between rapid technological advancement and regulatory lag, with different regions (China, EU, US) developing…
Event– **Governance and Implementation Challenges**: Common barriers across all initiatives including unclear policy frameworks, capacity building needs, data gaps, enforcement difficulties, and the need f…
EventExamples of sectoral self-regulations are in the case of Mauritius in the perspective of increasing the capacity of existing Ethics Committees in the Health Sector.<a href=”https://dig.watch/event/ind…
Resource“The session was presented by Adobe in association with FICCI and titled “Responsible AI from Principles to Practice in Corporate India.””
The knowledge base explicitly states that the discussion titled “Responsible AI from Principles to Practice in Corporate India” was presented by Adobe in association with FICCI, confirming the partnership and session title [S2].
“Adobe leads the Content Provenance and Authenticity (C2PA) credentials, an open, free, cross‑industry standard that embeds provenance metadata directly into media files.”
C2PA is described as a technical standard that enables creators to attach cryptographically signed provenance metadata to media, and is supported by Adobe among other companies, confirming its open, cross-industry nature [S37] and [S76].
“Amol Deshpande advocated a “bring‑your‑own‑AI” approach for organisational governance.”
The discussion notes that the phrase “bring your own AI” was highlighted and praised during the session, confirming its use by speakers such as Amol Deshpande [S1].
“India’s new IT rules on SGI are being implemented, requiring platforms to label synthetic content and act on it.”
India has introduced rules that obligate social-media platforms to label AI-generated/deep-fake content and remove flagged material within three hours, providing concrete detail on the regulatory environment referenced in the report [S79].
The panel exhibits strong consensus on four core pillars: (1) embedding transparent provenance through open standards; (2) building open, reusable frameworks with industry‑body support; (3) viewing regulation as a necessary, balanced catalyst; and (4) ensuring human‑in‑the‑loop safety guardrails while balancing innovation and user experience.
High consensus across technical, business, and policy perspectives, indicating a unified direction for responsible AI implementation in India’s corporate sector. This alignment suggests that forthcoming initiatives are likely to prioritize open standards, collaborative governance, and proportionate regulation, facilitating scalable and trustworthy AI adoption.
The panelists uniformly agree that responsible AI, transparency, and accountability are essential for India’s digital future. However, they diverge on three main fronts: (1) how prescriptive regulation should be, ranging from mandatory safeguards to balanced, light‑touch frameworks; (2) whether a single open standard can satisfy all sectors or whether industry‑specific, flexible solutions are required; (3) the relative weight of industry bodies versus statutory regulation in driving adoption. These disagreements are moderate rather than polarising, reflecting differing strategic preferences rather than fundamental opposition.
Moderate disagreement – the differing views on regulatory intensity, standardisation strategy, and governance mechanisms could lead to fragmented implementation unless a coordinated consensus is reached. The implications are that policy makers and industry leaders must negotiate a hybrid model that blends baseline regulatory requirements with adaptable standards and strong industry‑body participation to avoid silos and ensure inclusive, trustworthy AI deployment.
The discussion was driven forward by a handful of pivotal remarks that moved the dialogue from abstract principles to concrete, measurable practices. Andy Parsons’ framing of 2026 as the deadline for provable responsible AI and his introduction of the C2PA standard set the agenda, prompting panelists to showcase how their organisations translate those ideas into product‑level safeguards (Prativa’s ART framework, Satya’s airline AI assistant, Vishal’s transparent payment explanations). Amol’s ‘bring your own AI’ and emphasis on people added nuance, steering the conversation toward flexible, human‑centric governance. Each of these insights sparked new sub‑topics—standards, auditability, scalability, and the balance between regulation and innovation—thereby deepening the analysis and shaping a cohesive narrative that blended technical solutions with ethical imperatives.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Related event

