Secure Talk Using AI to Protect Global Communications & Privacy

Session at a glance: Summary, key points, and speakers overview

Summary

This discussion centered on the growing threat of digital fraud and scams, and how AI-driven solutions can protect citizens and secure the digital economy. The event was hosted by Tanla Platforms to showcase their Wisely.ai platform, an agentic AI system designed to identify, prevent, and eliminate spam and scam communications.


The main fireside chat featured Vikram Sinha, CEO of Indosat Ooredoo Hutchison, who shared how his company transformed from viewing fraud as a customer complaint issue to treating it as a board-level strategic priority. He revealed that Indonesians lose $5 billion annually to scams, with 65% of citizens facing spam or scam attempts weekly. After implementing Tanla’s AI solution, Indosat saw significant improvements including 9% ARPU growth compared to the industry’s 3%, and customer churn rates dropping from 3.6% to 1.6%.


A panel discussion with leaders from banking, fintech, and payments sectors explored why fraud remains persistent despite technological advances. Panelists identified key challenges including the fragmented nature of defenses across interconnected systems, sophisticated AI-powered scamming techniques, and the need for better data sharing across institutions. They emphasized that scams involve complex behavioral journeys that begin long before actual transactions occur, making them difficult to detect at payment points alone.


The discussion highlighted that while individual institutions are implementing AI-driven fraud detection, the attack surface is interconnected but defenses remain siloed. Speakers called for coordinated intelligence sharing, national-level initiatives, and stronger law enforcement to combat what has become an industrial-scale, cross-border criminal enterprise that threatens the foundation of the digital economy.


Key points

Major Discussion Points:

AI-Powered Anti-Fraud Platform Launch: The introduction of Wisely.ai, Tanla’s agentic AI platform designed to identify, prevent, and eliminate spam and scam communications at scale, with live deployments at Indosat Indonesia and BSNL India protecting millions of users in real-time.


Scale and Impact of Digital Fraud: Discussion of the massive global economic impact, with over $1 trillion lost annually to scams worldwide, $5 billion lost by Indonesians in 2024 alone, and 65% of Indonesians facing spam/scam attempts weekly, highlighting the urgent need for systematic solutions.


Ecosystem-Wide Collaboration Requirements: Emphasis on how fraud prevention cannot be solved by individual institutions alone, requiring coordinated intelligence sharing between banks, fintech companies, telecom operators, payment platforms, and regulatory bodies to address the interconnected nature of digital fraud.


Business Results and ROI of AI Implementation: Concrete business outcomes shared by Indosat CEO, including 9% ARPU growth (vs 3% industry average), customer churn reduction from 3.6-3.7% to 1.6%, and protection of an estimated $500 million in potential losses within six months of deployment.


Future Vision for Intelligent Networks: BSNL’s roadmap for AI-integrated telecommunications infrastructure, including customer-controlled network resources, federated learning at edge data centers, and building networks where citizens can be “100% safe,” representing the evolution from connectivity providers to protection platforms.


Overall Purpose:

The discussion served as a product launch and industry forum for Tanla Platforms to showcase their AI-driven anti-fraud solution while bringing together telecom operators, financial institutions, and technology leaders to address the growing threat of digital fraud and scams across South and Southeast Asia.


Overall Tone:

The discussion maintained a professional and urgent tone throughout, beginning with celebratory product launch energy but quickly shifting to serious concern about the scale of fraud affecting vulnerable populations. The tone became increasingly collaborative and solution-focused as speakers emphasized the need for industry-wide cooperation, ending on an optimistic note about technology’s potential to create safer digital ecosystems for citizens.


Speakers

Speakers from the provided list:


Wish Gurmukh Dev – Host/MC representing Tanla Platforms and group companies (Karix and Value First)


Sanjay Kapoor – Host for fireside chat, distinguished global telecom leader, former CEO of Bharti Airtel, board member at Tanla Platforms with nearly four decades of telecom experience


Vikram Sinha – President, Director and CEO of Indosat Ooredoo Hutchison, seasoned global telecom leader with experience across Asia and Africa markets


Anshuman Kar – Chief Customer Success Officer of Tanla Platforms, moderator for the panel discussion on AI for Citizen Protection


Ratan Kumar Kesh – Executive Director and Chief Operating Officer of Bandhan Bank, leading technology, operations, customer experience and transformation functions


Bipin Preet Singh – Founder and CEO of MobiKwik, leading fintech entrepreneur in India’s digital payments space


Neha Gutma Mahatme – Director at Amazon Pay India, payments and fintech leader driving customer-centric digital financial experiences


A. Robert J. Ravi – Chairman and Managing Director of BSNL, telecom leader with over three decades of service, gold medalist in electronics and communication engineering


Audience – Attendees asking questions during the panel discussion


Additional speakers:


None identified beyond the provided speaker list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive discussion examined the escalating threat of digital fraud and the emergence of AI-driven solutions to protect citizens and secure the digital economy. Hosted by Tanla Platforms to showcase their Wisely.ai platform to enterprise customers, telco partners, and board members, the event brought together telecommunications operators, financial institutions, and technology leaders to address what has become a crisis-level challenge across South and Southeast Asia.


Event Context and Tanla’s Vision

The event was structured around Tanla Platforms’ demonstration of their Wisely.ai platform, with Wish Gurmukh Dev presenting the company’s three foundational principles that shaped the AI solution: innovation, collaboration, and impact. As a three-decade-old company with group entities including Karix and Value First, Tanla positioned this platform as their response to industrial-scale digital fraud affecting billions of users across their markets.


The Scale and Urgency of Digital Fraud

Sanjay Kapoor, serving as moderator and Tanla board member, opened with sobering global statistics: consumers have lost over $1 trillion to scams, with digital fraud evolving from isolated phishing attempts to AI-powered, cross-border, automated operations at industrial scale. The regional impact proves equally devastating, with specific examples highlighting the crisis magnitude.


Vikram Sinha, CEO of Indosat Ooredoo Hutchison, shared his awakening to the problem during a MasterCard advisory board meeting in London in early 2024, where he learned that Indonesian citizens lost $5 billion in 2024, with 65% of the population facing spam or scam attempts weekly. This revelation transformed his company’s approach from treating fraud as a customer service issue to a board-level strategic priority.


The Indian context, as presented by Anshuman Kar, reveals equally concerning statistics. A Supreme Court judgment identified losses of ₹54,000 to ₹56,000 crore to scams, affecting demographics ranging from senior citizens to IIT professors. Channel analysis shows that 70% of scams in India originate from SMS, with 65 billion SMS messages and 15 billion over-the-top (OTT) messages sent monthly, creating an enormous attack surface.


These figures transformed the discussion from abstract business concerns to urgent social responsibility, particularly as the fraud disproportionately affects vulnerable populations including middle-income and lower-income women, elderly citizens, and rural communities who may lack digital literacy to recognise sophisticated scams.


AI-Driven Solutions and Measurable Impact

The centerpiece discussion featured Vikram Sinha’s fireside chat with Sanjay Kapoor, detailing Indosat’s implementation of Tanla’s Wisely.ai platform. The agentic AI system, designed to identify, prevent, eliminate, and prosecute spam and scam communications at scale, is currently live at Indosat in Indonesia and BSNL in India, protecting millions of users in real-time.


Vikram provided compelling evidence of the platform’s effectiveness through specific metrics: close to 2 billion spam and scam instances protected through threat intelligence, and 2.3 million scammers flagged. More significantly, the business impact proved substantial. Following AI implementation, Indosat achieved 9% ARPU (Average Revenue Per User) growth compared to the industry average of 3%, while customer churn rates dropped dramatically from 3.6-3.7% to 1.6%.


The transformation extended beyond metrics to customer perception. Vikram recounted visiting a village where a customer, when asked what they liked about Indosat, specifically mentioned spam and scam protection rather than traditional service features. This anecdote illustrated how fraud prevention had evolved from a cost center to a core business differentiator providing “peace of mind” alongside connectivity.


The Ecosystem Challenge and Fragmented Defenses

The panel discussion, moderated by Anshuman Kar, revealed the interconnected nature of digital fraud versus the fragmented approach to defense. Bipin Preet Singh from MobiKwik illustrated this challenge, noting that 99% of fraud complaints his company receives involve money stolen from other banks but routed through their platform, highlighting how companies become unwilling participants in fraud chains.


Ratan Kumar Kesh from Bandhan Bank, which processes 60 lakh (6 million) UPI transactions daily, revealed two primary fraud challenges: customers being defrauded and accounts being used as “mules.” The latter proves more troubling, as it involves willing participation from citizens who rent their accounts for easy money. He provided specific examples of fraudsters offering employees double their salary—from ₹70,000 to ₹1,50,000—to share banking data, creating persistent vulnerabilities that technology alone cannot address.


Neha Gutma Mahatme from Amazon Pay India provided crucial insight into why traditional fraud prevention fails: scams involve behavioral journeys that begin long before payment transactions occur. Social engineering, deepfakes, voice cloning, and fake identities create layered deception that makes detection at the transaction point insufficient. Furthermore, she highlighted the asymmetric nature of the AI arms race, where offensive AI operates unconstrained while defensive AI faces privacy regulations, customer experience requirements, and compliance constraints.


Strategic Partnerships and Implementation Philosophy

Vikram Sinha emphasized the importance of strategic partnerships over vendor relationships, noting that solving real problems at scale requires deep collaboration. Indosat’s approach included engineering collaboration with Tanla, utilizing their own GPU clusters and AI factory to train models specifically for their market, including language nuances and local fraud patterns.


This contrasted with approaches favoring company-specific models. Bipin Preet Singh argued that individual fraud models trained on proprietary datasets perform better than industry-level solutions, as they capture unique transaction patterns and customer behaviors. However, this approach faces limitations when fraudsters operate across multiple platforms and institutions, highlighting the ongoing tension between customization and ecosystem-wide protection.


Regulatory Response and National Initiatives

An audience question about government initiatives revealed evolving regulatory responses. The Reserve Bank of India has established a Digital Payments Intelligence Authority to enable national-level data sharing across the payments ecosystem. The RBI’s “mule hunter” initiative represents another coordinated effort to identify and prevent account misuse.


However, speakers noted that regulatory responses often involve adding friction to transactions to prevent easy fraud, potentially impacting the seamless user experience that has driven digital adoption. Law enforcement gaps emerged as a persistent challenge, with speakers noting that despite knowing fraud origins and having technology to track perpetrators, effective coordinated response remains insufficient to create deterrent effects.


BSNL’s Vision for AI-Integrated Infrastructure

A. Robert J. Ravi, Chairman and Managing Director of BSNL, outlined an ambitious vision for AI-integrated telecommunications infrastructure. The company is implementing AI across network operations to enable intelligent customer-network interactions, where users could request specific bandwidth or security levels in real-time. His brief presentation touched on federated learning concepts and edge computing approaches that could provide localized fraud protection while addressing privacy concerns.


The vision extends to rural protection through distributed AI capabilities, representing a significant evolution from traditional connectivity provision to comprehensive citizen protection platforms.


Customer Experience and Protection Balance

A recurring theme throughout the event was balancing security measures with customer experience. Traditional approaches often create friction that impacts legitimate users while failing to stop sophisticated fraudsters. The AI-driven approach demonstrated by Indosat shows that protection can enhance rather than hinder customer experience, as evidenced by improved customer retention and satisfaction metrics.


Customer education emerged as equally important as technological solutions. The sophistication of modern scams, including AI-generated voice cloning and personalized social engineering, means that even technically sophisticated individuals can fall victim, requiring comprehensive awareness programs alongside technological defenses.


Unresolved Challenges and Future Directions

Several critical challenges remain unaddressed. International coordination mechanisms for cross-border fraud prevention are underdeveloped, despite fraudsters operating globally. Data sharing limitations across institutions continue to hamper comprehensive fraud detection, even with national initiatives underway. The economic incentives driving mule account participation require solutions beyond technology, potentially including alternative economic opportunities and stronger legal deterrents.


The discussion also highlighted the need for standardization of fraud prevention protocols across different types of financial institutions, and the development of more effective methods to detect malicious intent rather than just transaction anomalies.


Implications for Digital Economy Development

The conversation revealed that digital trust has evolved from a desirable feature to critical infrastructure for economic development. With India’s digital economy projected to cross $1 trillion by 2030 and Indonesia’s already exceeding $100 billion in gross merchandise value, fraud prevention becomes essential for sustaining growth and inclusion.


The transformation of telecommunications companies from connectivity providers to protection platforms represents a fundamental shift in industry responsibility, making protection a core business differentiator rather than a cost center.


Conclusion and Path Forward

The event demonstrated that while individual institutions are implementing sophisticated AI-driven fraud detection, the interconnected nature of digital fraud requires coordinated ecosystem response. The success of platforms like Wisely.ai shows that real-time, AI-driven protection can deliver both citizen safety and business value when implemented through strategic partnerships and comprehensive approaches.


The path forward requires continued innovation in AI capabilities, enhanced data sharing mechanisms, stronger law enforcement coordination, and recognition that digital trust is fundamental infrastructure for economic development. As A. Robert J. Ravi emphasized, the ultimate goal is building systems where citizens can be “100% confident in network safety,” transforming digital protection from reactive fraud detection to proactive citizen security.


This transformation represents not just technological advancement but a fundamental reimagining of corporate responsibility in the digital age, where protecting citizens becomes as important as serving them, and where AI serves as the enabling technology for creating trustworthy digital ecosystems at national scale.


Session transcript: Complete transcript of the session
Wish Gurmukh Dev

Thank you everyone. Thank you very much. Thank you. Once again, ladies and gentlemen, a very good evening and welcome to what promises to be a truly memorable evening. On behalf of Tanla Platforms and our group companies, Karix and Value First, I extend a warm and a hearty welcome to our enterprise and telco customers, our global strategic partners, board members, and our incredible team. At the core of Tanla’s DNA are three enduring principles: innovation, collaboration, and impact. For three decades, this DNA has driven us to build innovation at scale, touching billions of users, and what excites us the most is the greenfield landscape that we have always explored. Along with it, it has helped us work in close partnership with our esteemed customers, our regulatory ecosystem, our telco partners, and the broader ecosystem to ensure that every step has been collaborative and always ahead of the curve.

And lastly, it has helped us ensure that every innovation we pioneer creates a tangible and a measurable impact in the world. And it’s these principles that shaped Wisely.ai, our agentic AI platform built to identify, prevent, eliminate, and bring to the books the growing menace of spam and scam, not just in India, but world over. Today, Wisely.ai is live and delivering real impact at Indosat in Indonesia, at BSNL in India, and with our leading banks in India, safeguarding millions of users in real time every single day. Tonight, we don’t just want to talk about it, we want to bring it to life. So without further ado, let’s bring the story to life. Please welcome our guest for the fireside chat.

A fireside chat on the theme Vision to Impact: driving customer engagement with AI-driven trust. It gives me immense pleasure to invite our host for the fireside chat, who has spent nearly four decades as a distinguished global telecom leader, leading one of India’s most iconic companies as CEO of Bharti Airtel, shaping global mobile policy as a key voice on the board and executive committee of GSMA, and building a legacy that stretches from telecom to digital services and beyond. We are honored to have him as our board member at Tanla Platforms, where his global perspective and vision continue to shape our journey towards becoming a world-class AI-driven communications enterprise. Ladies and gentlemen, please put your hands together for Mr.

Sanjay Kapoor. Our guest on the fireside chat is a seasoned global telecom leader who has not only defined the arc of the industry, but also built it. With a career spanning some of the most dynamic markets across Asia and Africa, he has held senior leadership roles from being CEO of Bharti Airtel Africa and Managing Director of Bharti Airtel Seychelles to serving as CEO of the Ooredoo Group in Maldives and Director-CEO of Indosat Ooredoo before taking on his current role. Today, he leads one of Indonesia’s most transformative telcos, Indosat Ooredoo Hutchison, driving its evolution from a telco into an AI tech co, anchored by a bold vision of AI for all and a deep commitment to digital inclusion and security for every Indonesian.

Please join me in welcoming the President, Director and CEO of Indosat Ooredoo Hutchison, Mr. Vikram Sinha. I hand over the baton to our esteemed host, Sanjay, to take it forward, and we all look forward to it. Thank you.

Sanjay Kapoor

Thank you. Thank you for your kind words and welcome, Vikram. Before we really get down to asking a few questions of the person who is going to be on the firing range for today’s chat, let me set up a prelude for what we are going to be discussing. We all know that the global economy is rapidly digitizing, but trust has become its most crucial foundation. Digital payments are expected to surpass $14 trillion annually by 2027, with more than 5 billion people online. In South and Southeast Asia, nearly 2 billion people are coming online at a record speed, driven by affordable smartphones, low-cost data, and national digital infrastructure initiatives. India’s digital economy is projected to cross $1 trillion by 2030, while Indonesia’s has already exceeded $100 billion in GMV.

Yet, this scale brings vulnerabilities. Both markets are facing rising cybercrime, digital fraud, and organized scam operations, causing billions of dollars worth of losses each year. Globally, consumers have lost over a trillion US dollars to scams. Today’s fraud is no longer isolated phishing. It is AI-powered, it is cross-border, it is automated and industrial in scale. This is not just a consumer experience issue anymore. It is an economic issue, it is a systemic risk issue, a trust issue, and it demands great leadership to combat it. It’s my privilege to welcome Vikram, whom I have known for years and years. We worked together at Airtel too. He is the President, Director, CEO of Indosat Ooredoo Hutchison and is serving over 100 million customers in that country. Under his leadership, Indosat has accelerated its transformation into a digital-first AI tech co, expanding access across both urban and rural communities.

Indosat has evolved as an AI tech company and partnered with Tanla, guided by a powerful vision, AI for all. And that’s a very powerful statement that they make. So Vikram, welcome here. And we’ll get down to some sharing of insights and questions to you. We’ve all known about digital fraud becoming more intense. We all know it’s eroding trust. And you as a CEO and with your lens, when did you really move it from being a customer complaint issue to a board level issue? Because there must

Vikram Sinha

First of all, again, it’s an absolute honor and privilege, especially having it with Sanjay. You know, I have a long learning history. So thank you. Thank you, Sanjay. And it’s an absolute honor. I think coming back to your question, let me share with all of you a true story. I’m also on the advisory board of MasterCard. I still remember early 2024, one of the board meetings in London, an advisory board meeting, where the Asia Scam Report from GASA, the Global Anti-Scam Alliance, was presented, and I was blown away. That report shows that in 2024 itself, Indonesians have lost 5 billion US dollars. What touched me is these are all middle income, lower income women, elderly women. This was eye-opening data for me, number one.

Number two, the next key highlight, Sanjay: 65% of Indonesians are facing spam or scam on a weekly basis. So that itself was a trigger for me, Indosat being such an iconic brand. Let me tell you, Indosat is like the BSNL of Indonesia. A 58-year-old company, the first company to connect Indonesia to the world. It became Indosat Ooredoo Hutchison. But people have a lot of expectations. So as a CEO, that was the trigger, Sanjay, that our role is not only to connect. Our role is to also protect my 100 million customers. And that is where I got very serious that we need to solve this problem for our 100 million customers.

Sanjay Kapoor

Yeah, I mean, I think every board worth its salt today gets intimidated by this problem that is hitting them. And I’m so glad to hear from you that your board is fully aligned with you on this cause and you’ve been able to convince them to say, I really want to go ahead making some serious investments and changes because of this. And my leading question from here is that, as I just said, scammers are using AI, voice cloning, automated phishing campaigns, synthetic identities. How did you think of AI in the middle of all this? You know, because it is a new technology. People are still surfacing where they’re headed. But you’ve picked it up as the foundational infrastructure for protecting, you know, this at a national scale.

So tell us about that.

Vikram Sinha

So let me put it this way. I’m a very strong believer of fake it till you make it. So I started talking about AI two years back and I had very little understanding of what AI will do. I’m telling you a true story. I was in… many people still struggle with that today. But let me tell you, fast forward. I was invited by, I think, Sundar and Google Circle. There were 15 CEOs. This was around a year back. I was on a breakfast table. The joke started by saying that AI is everywhere other than the P&L. This is how the breakfast started. But within an hour, I understood that companies, countries who have been all in and ahead of the curve and who are solving real problems, they have started seeing value.

So for me, I have to solve a real problem at scale. And that is where we said that if these scammers are using AI, we have heard many stories and the way they clone their voice, you know, you will be so scared what all is happening. Then we were very clear: we want a partner, we don’t need a vendor. We want a partner who can work with us and use AI to solve this real problem. And I have to say, Sanjay, I think you are on their board, Uday is here. We work with 96 vendors, we categorize among them 20 strategic partners. But there are 4 or 5 where I invest time, which become very strategic for us. Because I was trying to solve a real problem, I met Uday and then our commitment was aligned. And that is how we wanted to make sure that not only we solve it, we do it in a way which should become a global case study.

Sanjay Kapoor

And with an AI-led model that you have put in place, what are the benefits that are accruing to you at a customer level, to begin with?

Vikram Sinha

Yes, because as I said, you know, there’s a lot of AI as a toy. We are very clear what we don’t want to do. So now, this is my first showcase, I put it on my quarterly result. And then, you have been a CEO, you have reported quarter after quarter; until unless you have substance, you don’t put any example on your investor deck. So if you look at my last quarterly investor deck, we have put it there that with the Tanla platform, three things I’ll highlight. You know, quarter four, ARPU grew for the industry 3%, we grew 9%. Number one. Number two, our churn. Our churn, because markets are mature. You know, you don’t have to be over-obsessed with gross adds.

And you know, you have to deliver experience. Our churn for the serious base, greater than 90 days, from a level of 3.6, 3.7, has come down to 1.6. And this is just a beginning, Sanjay, you know, because the model is getting trained. And I’m very confident that we will see much more value going forward.

Sanjay Kapoor

And, you know, from here, being an ex-CEO and being a board member for many years now, this question of ROI always haunts every board: you’re making an investment in this, it seems to be doing good for your customers. What about the ROI on what you’ve done?

Vikram Sinha

You know, this is, again, and it’s a fair question, you know, investment in AI is not small. So until and unless you see the impact of AI on your P&L, it will not be scalable. So very clearly, within six to eight months, we have seen it, whether it is ARPU, whether it is churn. And the most important thing is, Sanjay, where we lost, if I go back to my last two decades of experience: we as telcos, we were very inward looking. The biggest thing which we missed was focusing on customer love. I think this is very fundamental. This problem which I am solving is so, so fundamental that the role of a telco is not only to connect, it is also to give peace of mind.

Protection is a big statement. And the channel which is getting used is voice, WhatsApp. So you need to solve it for your customer. Otherwise, you have no business.

Sanjay Kapoor

I mean, I hear you, and you are a passionate CEO who believes in keeping his ear to the ground. And I see you…

Vikram Sinha

I had no idea about Tanla or anything. In fact, the first time when Uday came to meet me, I thought it was a startup. I’m again being very honest. But then somebody told me they are solving for banks in India. I think we have to understand, if somebody is solving this problem for banks. Because, you know, spam is one thing. But the bigger issue is scam. And these scams which happen, these are small tickets. These are like 50 rupees, 100 rupees, 500 rupees, and it goes up to as high as $10,000. You know, I’m just giving you an example. But then I realized that they have done some good work in India. But I have to say, Sanjay, you know, where it moved from vendor to strategic partnership, we were also very keen that my team want to do a bit of engineering with them.

So we have a full stack AI factory. We have our own GPU cluster. I think there are a few things where we have done it before India. Our cluster of GB200, H100 was live. So I told Uday, let’s train the data, because see the power of compute and GPU. You know, we all talk about TikTok. TikTok was all designed on GPU. They don’t even use CPU. So you have to be ahead of the curve. Today on that platform, let me give you two data points: close to 2 billion spam instances, scam, clearly threat intelligence protected; 2.3 million scammers flagged, and customers are getting it in real time. As you know, we have grown up on the Airtel values. I spend 5 days every month in the market. I was in a village, I was going to the new capital of Indonesia, which is far flung. On my way I stopped my car, I saw an outlet. My language still is not good; in their language I asked him, what do you like about Indosat IM3? He said, the spam and scam protection. It’s solving a real problem. So again, we just launched it on the WhatsApp channel also. I think WhatsApp is one of the channels which is getting misused the most, so we have to continuously evolve, and this is where Tanla has committed that we will make sure we do it together and we do it properly.

Sanjay Kapoor

Excellent. You know, these fireside chats have a time limitation, so we have to keep it up, and my stopwatch is telling me we’ve exceeded time already. So let me wind it off. Vikram, first and foremost, thank you for these insights. What stands out from our conversation today is how digital trust moves from concept to reality, which is what you’ve just described. When over 100 million subscribers are protected by AI, when billions of communications are analyzed in real time, and when millions of malicious actors are stopped within the ecosystem, trust is no longer a promise. It becomes an infrastructure. And what Indosat has shown through AI for All is that inclusion and protection are not trade-offs.

They must advance together. So thank you for your insights, Vikram, and it is a pleasure having you today.

Vikram Sinha

Thank you, Sanjay. Thank you.

Wish Gurmukh Dev

Can I request both of you to just pose for a picture, please? Thank you very much, Sanjay. Thank you very much, Vikram. Wow. Two global leaders: one who defined the yesteryears of telcos across the world, and the other who’s redefining and bending the arc to set the future of telecom leveraging AI. Thank you so much, Vikram and Sanjay, for this scintillating talk. Thank you very much. Our next session, ladies and gentlemen, is going to be a panel discussion wherein we have Anshuman Kar, Chief Growth, my apologies, Chief Customer Success Officer of Tanla Platforms, who is going to moderate the panel AI for Citizen Protection and Securing the Digital Economy. May I request Anshuman to kindly come on to the podium, please. First of all, panelists: Mr.

Ratan Kumar Kesh, Executive Director and Chief Operating Officer, Bandhan Bank. Bandhan Bank is one of the largest and fastest-growing banks in India, with over 32 million customers served across 35 states. He is leading multiple functions including technology, operations, customer experience and transformation. Second of our panelists: Mr. Bipin Preet Singh, founder and CEO of MobiKwik, a leading fintech entrepreneur at the forefront of India's digital payments evolution. Bipin Preet Singh has built MobiKwik into India's largest digital wallet with over 180 million users. Please welcome Mr. Bipin Preet Singh. Okay, we'll go ahead with the third panelist while we wait for Bipin. Our third panelist: Ms. Neha Mahatme, Director, Amazon Pay India, a payments and fintech leader driving customer-centric digital financial experiences at scale, shaping how millions transact seamlessly and securely through the Amazon Pay India app. Please welcome Ms.

Neha Mahatme. Kavya, can I request you to just check with Bipin, please? Maybe Anshuman. Yeah. Oh, Bipin is on his way. Here comes Bipin, ladies and gentlemen, founder and CEO of MobiKwik, a leading fintech entrepreneur at the forefront of India's digital payments evolution. Thank you.

Anshuman Kar

Good evening, everyone. As we just heard in this fireside chat, the problem is big. We just heard numbers of over $1 trillion being lost in the global economy because of scams and frauds. If you think about India in particular, almost 70% of scams originate from SMS as a channel. And messaging itself has expanded into other OTT channels as well; I'll share some numbers: 65 billion SMSes are sent monthly in India, and another 15 billion messages are sent monthly over OTT channels. So when you look at these numbers, it is clear that while the channel is important and critical, it is only proliferating even further. In that context: I joined Tanla relatively recently, compared to its three decades of history, as chief customer officer, and it has been a privilege to see the build and the deployment of the Wisely.ai platform.

And it is an honor to have Vikram here as a CEO; there is nothing better than hearing that validation directly from the customer. And you heard the impact that it's having on the end users in terms of protecting them from scams and spam. In fact, the estimate is that within six months of launch, we have protected almost $500 million in estimated losses. And as you think about where this takes us in the future, scams and scamsters are continuing to evolve. They're not sitting idle. So we have to stay a few steps ahead on the innovation curve; that becomes critical. They're becoming more sophisticated, more personalized, and probably also more successful at times.

So tonight, before I get into solutions, I want to focus on the problem. Is the problem really getting better? Or is it getting worse? And why? We have a distinguished panel here, and they all provide very different vantage points on the industry. We have banking, which sees the transaction risk and, frankly, carries a lot of regulatory accountability as well. You have fintech, which sees a lot of velocity and scale, and is also obsessed with customer experience. And then you have platforms like Amazon Pay that have the commerce side and the payment side, so they see a lot of behavior signals across multiple parts of the platform. But from a citizen's perspective, for an average user, it's one seamless journey.

They don't experience this in silos. So that's something we will delve into. And as part of that, we would love to deep dive into how the key stakeholders in this ecosystem need to work together to thwart this menace in front of us. So with that, I want to welcome our distinguished panelists. Thank you for joining us for this discussion. So let me start off with you, Ratan. A recent Supreme Court judgment, a couple of weeks back, talked about, I think, 54,000 or 56,000 crores being lost to scams. In fact, they called it dacoity. I don't think I've heard that term lately; I think it was around Chambal and all. I used to watch movies when I had heard that term.

But it is of that magnitude and scale. So the question, Ratan, is: why is this still a problem? And what is really not working?

Ratan Kumar Kesh

Mostly senior citizens, and at times even IIT Bombay professors; that's the spectrum you can look at. They are being defrauded. The second part is that a lot of people are now able to open accounts in most of the banking companies, and those accounts are being utilized to siphon off funds stolen from somewhere and route them onward. So there are two parts of the problem, in different senses. The bigger trouble is the second one, because it is being done willingly. In a country with a 1.5 billion population, there are a lot of people who are willing to open accounts across multiple banks; the India Stack makes it pretty simple to have an account onboarded very quickly, in just a few minutes. Then they go and rent that account out for a fee per month, and the lure of making easy money is so high. That's why it's so difficult, and to me that's what is not working. So at one end, we celebrate the India AI Summit, with all the global leaders, heads of state and the big AI celebrities coming over here.

And at the other end, we are talking about our countrymen who are actually defrauding poor senior citizens of the hard-earned money of their entire lives, which is being siphoned off. That is very, very sad to see. So I think what is not working is the mindset; it's not going to stop so soon. That's a bit of a philosophical response, but I'll come to the more technical response a little later. The second part is our customers getting defrauded through multiple means. That part, I think, has improved significantly, because most banks now have very sophisticated rule engines. Banks like ours process millions of transactions, and we are just an 11-year-old bank.

My daily UPI volume is something like 60 lakh transactions per day. So the volume and velocity of transactions are very, very high. But the good part is that, depending on the customer's profile and routine transaction pattern, we can identify an out-of-routine transaction. If a customer withdraws 10,000 rupees from a particular ATM when he or she generally withdraws from somewhere else, we can say it's a non-routine transaction. Someone who usually withdraws 10,000 suddenly withdraws 25,000: we know it's non-routine. Somebody who never makes a rent payment suddenly starts making rent payments through one of the payment channels: we say it's out-of-routine. So once you have out-of-routine transactions, we are able to identify those.
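The out-of-routine screening described here can be sketched as a simple per-customer baseline check. This is purely illustrative: the thresholds, features and class names below are assumptions, not Bandhan Bank's actual rule engine.

```python
from collections import defaultdict

class RoutineProfile:
    """Tracks a customer's usual ATMs and withdrawal sizes (illustrative)."""
    def __init__(self):
        self.known_atms = set()
        self.amounts = []

    def update(self, atm_id, amount):
        self.known_atms.add(atm_id)
        self.amounts.append(amount)

    def is_out_of_routine(self, atm_id, amount):
        # A location the customer has never withdrawn from before
        if self.known_atms and atm_id not in self.known_atms:
            return True
        # An amount far above the customer's historical maximum
        if self.amounts and amount > 2 * max(self.amounts):
            return True
        return False

profiles = defaultdict(RoutineProfile)

def screen(customer, atm_id, amount):
    """Return 'review' for out-of-routine withdrawals, else 'allow'."""
    p = profiles[customer]
    verdict = "review" if p.is_out_of_routine(atm_id, amount) else "allow"
    if verdict == "allow":
        p.update(atm_id, amount)  # only learn from transactions we allow
    return verdict

# Routine history: 10,000-rupee withdrawals from the same ATM
assert screen("c1", "ATM_A", 10_000) == "allow"
assert screen("c1", "ATM_A", 10_000) == "allow"
# Sudden 25,000 withdrawal -> flagged for enhanced due diligence
assert screen("c1", "ATM_A", 25_000) == "review"
# Withdrawal from an unfamiliar ATM -> also flagged
assert screen("c1", "ATM_B", 10_000) == "review"
```

In a real deployment the "review" outcome would trigger the enhanced due diligence the speaker describes, rather than an outright block.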

Sometimes we prevent those transactions; sometimes we go back, do enhanced due diligence, and then allow them. So that part is working fine, and the tools are getting more and more mature. AI is helping us build the algorithms a lot better, so that part is clearly working. If you look at the numbers, even though the volume and velocity have gone up, the percentage being defrauded is coming down. But the mule part of it is very, very scary: customer accounts being rented out. You know, the other day, one of my employees from the fraud prevention team called up one of the fraudsters and said: this is a senior citizen, you called so many times, you tried to defraud them, why are you doing this?

His response was: can you tell me how much money you make a month? It must be 50,000, 70,000? I'll give you double; you start giving me data. The fraudster is telling my employee: can you share more data with me? Don't worry, I'll not tell anybody; just give me data, I'll give you 70,000, I can even pay you 1,50,000 per account. Now, there again, we are using a whole bunch of technology, including transaction monitoring algorithms, to really prevent that. And Karix has developed an anti-phishing tool which we are implementing now, which has some very interesting capabilities. If some of you are interested, you could talk to the Karix team.

I think that is quite interesting. They sit on the DLT platform, scan each SMS, look at where it originated from, and check whether the links it provides for collecting data are genuine or fake. They look at the keywords using AI algorithms and then use these techniques to prevent the SMS from being sent to the potential customer who could otherwise get defrauded. So I think these are some of the things that are working. Largely, that is the spectrum as I see it, Anshuman. It is a long answer, but that is what it is.
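The keyword-and-link screening described here can be sketched as a tiny scoring filter. The keyword list, allowlisted domain and score threshold below are all hypothetical; this is not the actual Karix anti-phishing logic.

```python
import re

# Illustrative assumptions, not a real blocklist/allowlist
SUSPICIOUS_KEYWORDS = {"kyc", "blocked", "verify", "lottery", "urgent", "prize"}
TRUSTED_DOMAINS = {"bank.example.in"}  # hypothetical registered-sender domain

URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def classify_sms(text: str) -> str:
    """Return 'block' for likely phishing, else 'deliver'."""
    score = 0
    words = set(re.findall(r"[a-z]+", text.lower()))
    score += len(words & SUSPICIOUS_KEYWORDS)      # scammy vocabulary
    for domain in URL_RE.findall(text):
        if domain.lower() not in TRUSTED_DOMAINS:  # link to unknown domain
            score += 2
    return "block" if score >= 2 else "deliver"

assert classify_sms("Your KYC is blocked, verify at http://scam.example now") == "block"
assert classify_sms("Your statement is ready at https://bank.example.in/stmt") == "deliver"
```

Production systems would combine such signals with sender-header (DLT) registration checks and learned models rather than a fixed keyword list.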

Anshuman Kar

No, this is fantastic insight, and it is obviously great. In fact, my parents are so scared they do not even use ATM cards because of the risk of being defrauded. They wait for me to come and do the recharge on the phone; otherwise they go to the stores to do it. This is becoming really scary, and vulnerable populations, like senior citizens, are particularly exposed. Let me turn it over to you, Neha. You sit at the intersection; I'm sure Amazon uses AI all over. You analyze behavior patterns across commerce and across payments. So why are we not able to stop scams across the whole journey?

Neha Gutma Mahatme

So I want to talk about three or four aspects of what we've learned about why it is difficult. First of all, a scam does not happen at the level of a single payment transaction; it is a behavioral journey that starts much before the payment really happens, and that's where the fundamental issue is. That's what we as an industry need to solve for. If I belabor that point: the social engineering happens much earlier; it is not happening when the transaction or the payment occurs. The deepfakes, the voiceovers, the fake identities, the layering of transactions: all of it makes it very difficult to stop scams at the point of the transaction.

The second point is that data silos limit visibility. While at Amazon we have really good data internally on the platform, we miss the data on how these social engineering patterns get created outside of Amazon, and that really constrains us. The third is that human psychology evolves faster than models. You can build models and refine algorithms, but you can't beat offensive AI, because AI is being used on both sides. It's not just that preventive AI is working; there is also offensive AI. And the offensive AI works unconstrained, while the defensive AI has constraints of privacy, constraints of regulations, constraints of customer experience, on top of the benefits that you need to provide.

So I think that's the third part, and therefore the last part, which is really the crux of the point: AI is helping detect anomalies. It is not helping detect the mal-intent or the behavior, and unless and until we solve for the mal-intent or the behavior, the scams will continue.

Anshuman Kar

That's a fantastic response. Basically, what you're saying is that no one institution sees the whole journey; each sees pieces of it, but many of the parts are interconnected. Let me bring Bipin into the conversation. Bipin, you see a lot of transaction velocity and scale, but you're also focused on customer experience, because protection can become a friction point if you go overboard at times. How does AI come in? How do you calibrate your models and AI to balance that potential friction, which can impact your growth and legitimate customers, against protecting against fraud?

Bipin Preet Singh

Thank you. First of all, thank you to Anshuman and Tanla Platforms for inviting me here; it's a privilege. We are also customers of Tanla, so happy customers. When it comes to AI and the usage of AI against fraud, I want to give some perspective, first with respect to the kind of fraud. We operate in payments and financial systems, and what's happened in the last 10 years is that the financialization and digitalization of finance has happened at an exponential scale. So many different entities have become interconnected, just as you were saying, that a loophole in just one place is sufficient to create fraud and scams throughout the ecosystem. So one thing we have to be very clear about: one entity cannot control scams.

It's very, very difficult. In our experience at MobiKwik, 99% of the scams our customers complain of do not involve money stolen out of MobiKwik; it is actually stolen out of some other bank and comes into MobiKwik. So we are the recipients, and we get the complaints that the money has come here and we need to take action, whether it is coming through UPI, through credit card fraud, and so on. Therefore, the standards of education and the standards of 2FA, second-factor authentication, need to be there, but perhaps they are not equally enforced. The awareness and the education are not equally enforced.

That brings me to the second point, which is that with AI, the scams have also become very, very sophisticated. In our company, and we are a fintech company, we employ so many smart people; there are people who have fallen prey to scams where they got a WhatsApp message with my photo, supposedly from me, asking them to buy gift cards, and they bought them without trying to verify or checking what the number was. With AI, it's becoming harder and more sophisticated. The modus operandi of the scamsters is becoming extremely smart, and they are getting very, very good at understanding the profile of the customer.

They don't target everyone; they have a very clear idea of who is likely to fall for a scam. So there is a need, and I think there is an effort going on at the RBI's and the government's end, to create an intelligence body which will share data across the entire payments ecosystem. I think it's called the Digital Payments Intelligence Platform or something like that. That is extremely important, because until data sharing starts to happen at an India-level scale, you cannot identify the full picture. I can identify a pattern which works for me, but the scamsters will get smarter, because they will keep changing their MO outside of MobiKwik.

So it's very, very difficult to keep adapting to that; a national-level initiative is required. The third thing I want to say is on the LEA (law enforcement agency) front. Almost the entire country, all the police, everyone knows where the scams come from, and I am not able to understand why no action gets taken. It is the same places, the same origins, but somehow no action gets taken. Until there is fear of the law, until enforcement happens, payments fraud will keep coming. At our end, what we are doing is creating our own in-house models.

As far as technology goes, in terms of machine learning and now AI, we have created our own models, trained on our own data sets, because they work best for our kind of transaction pattern. They may not work best for the kind of solutions other companies need. In fact, we have explored fraud and machine learning solutions from other companies, but they have very poor performance, because they are trained on some industry-level data which does not match the patterns we get. At our end, we are a tech company, so we can adapt. But I cannot say the same for all the entities, at least in the financial domain.

And that's a big problem, because until there is a national... And I think the regulator, especially RBI, is very, very concerned about this. We have heard now, at a recent conclave I went to, that they are saying: enough of making transactions easy. Now we need to go in the reverse direction and make transactions a little difficult, so that there is some friction, because otherwise people are losing money.

Anshuman Kar

Great points. As you said, there are silos of data, and you talked about training the models; in fact, when we went to Indosat as well, we had to train them, including for language nuances. These things become critical for adaptation. You mentioned the RBI initiative, and I'll direct this to you, Ratan: who ultimately owns the responsibility of protecting citizens at national scale? Can banks do it alone? Can RBI do it alone? Or are you dependent on upstream and downstream signals, upstream signals like those you get from telcos, because a lot of these things originate from those channels?

How should responsibility be structured to protect people?

Ratan Kumar Kesh

I think Bipin spoke about that point: for a fraud to happen, somebody would have gone to an e-commerce platform to make a payment, that payment comes through a payment channel, the account is held in Bandhan Bank, and the payment is made, say for a product or through some platform, to an Axis Bank credit card. And the fraudster is actually sitting somewhere out there, pretty much somebody amongst us. Now I will give you a very funny story. I lived in Mumbai for 20-plus years; the local trains used to be full of pickpockets in those days. I had been living outside India, and when I came back I was going to meet a friend of mine. It was my first trip on the local train, and my purse got stolen. He said, don't worry, how much money was in it? I said, that's okay, but I had credit cards and all of that. Somehow I managed to block those cards, and he said, let's go to the police station. I said, but I had my identity cards in there. He said, don't worry, the pickpockets in Mumbai are fairly ethical; you go and tell the police which local train, from where to where, and where exactly it must have happened. So I said, from Borivali to Andheri; it must have happened in between, on the 9:50 local. After two days the police called me and handed over my identity cards. There was nothing else there, but I got back the identity cards. So the police have the ability to really find out who the people are, and I find it quite baffling that this fraudster is out there somewhere, the telephone numbers, multiple of them, were issued by a telecom operator, PAN cards were issued and used for this sort of account opening and video KYC, and yet we are not able to find out who these people are. Technology has to protect all of that, which, as all of us are saying, we are trying our best to do.

And the sad part is that whenever a fraud happens and a customer goes to the regulator, like the ombudsman, and says a 20 lakh fraud happened, it goes: okay, which bank did it go from? It went from HDFC Bank. Where did it land first? It landed in Bandhan Bank. Okay, the two of you together, pay 10 lakhs each and be done with it. That's easy, isn't it? Of course, we are regulated entities; the regulator has no choice but to do whatever best they can, which we comply with, and that's absolutely fine; we must have had some lacunae in our process. But the whole chain has to work together, including the citizens. Instead of being gullible, they need awareness about banking products; the banks, the payment companies and the country have to create more awareness; the cyber police and the local police have to work together. The Ministry of Home Affairs is working very hard to really make it happen. So it's an ecosystem problem, and if all of us come together, create more awareness and make enforcement really ruthless, that's probably the only way. Otherwise it's not easy.

Anshuman Kar

Thank you, thanks a lot. I am told time is up. I wanted to solicit two questions from the audience as well, but in the interest of time I will just summarize this session. We have all talked about going end to end: not just identification using AI models, but prevention, elimination, and ultimately holding the scamsters accountable through law enforcement. And that is a big part: the law and the enforcement of the law can sometimes mean two different things. Just one more point: in the name of hyper-personalization, think of the amount of data that gets collected. We have the ability to go back to Neha and say, you know what, you've been searching for a home;

I can tell you which is the right home for you. That's great; Neha feels delighted because she can actually choose the right house. But the same data is getting misused to do other things. As much as I can collect data as a bank or a real estate company, a fraudster can also collect the same data. So, as you said, AI is on both sides, and the offense is often ahead; it's a question of who stays a few steps ahead of the other, right? What is striking from this discussion, and I'm summarizing, is that, as you can hear, everyone is doing something. Oh, please, go ahead. We can take one or two questions quickly.

Sure, please. Can you please help the gentleman with the mic? This is not the best way.

Audience

So there should be some integrated approach. I think the Government of India is already working on that, and they have a digital payment intelligence platform. So is it not for that purpose, or are you referring to some other issue?

Anshuman Kar

Sorry, is that a question? I think the question is there are some government initiatives.

Audience

The initiative is already there. Is it not enough to have an integrated model for fraud protection and related issues? Because, as you said, financial institutions each have their own trained models and are protecting their customers individually. But the Government of India has initiated an integrated model in collaboration with the industry; RBI is working on MuleHunter through RBI's innovation hub. Today all the banks are doing this individually, with their own three-month or five-month data, not under a complete umbrella. Everything is coming into this digital payment intelligence platform. So with this initiative, will the issue you referred to not be addressed?

Bipin Preet Singh

Yes, yes, absolutely. I feel there is strong potential, because for the first time, data across the financial ecosystem will come together in one place, and that is a big deal. Once that data comes together, hopefully the best people will work on it and understand patterns at a national scale, because the problem in digital is that everything is connected, so you have to study it in an integrated manner. And I am very...

Anshuman Kar

Thank you for that question. We are obviously hoping that all of these show results, but at the same time, while we are talking nationally, scammers are not limited to national geographies; they are international as well. So the scope, breadth and surface area of the threats are only expanding, and we have to really look beyond. And if I may add from my personal experience: in the world of AI, data is actually the differentiator, not so much the models, because the models are all public, and the willingness to share data is itself a potential barrier, especially for real-time data. And this is where it is not just about banks and financial institutions; it's also telecom, because telcos see a lot of the initial signals in terms of messages being sent and communications, as Vikram just talked about as well.

So let me summarize again this session.

Audience

Can I ask one question?

Anshuman Kar

May I request you to take it offline, please, because of the time constraint, if you don't mind. Thank you. Thank you for cooperating, sir. It's great to hear; I'm sure there's a lot of interest in this topic, and it shows the resonance of what we are discussing. So, again, to summarize: as you said, the attack surface is all interconnected, but our defenses right now are fragmented. And therein lies the opportunity. The next frontier cannot be just smarter individual AI models; it has to be coordinated intelligence. And that obviously has to happen in real time, across the ecosystem, and within the guidelines of a national-level trust architecture.

So with that, I want to say thank you to all the panelists, and to all of you for participating in this discussion and contributing to shaping what the future looks like, because this is not just about trust; it is the foundation of the digital economy and the growth that underpins it. So thank you so much.

Wish Gurmukh Dev

Thank you very much to all the panelists, and Anshuman, may I request you all to stay on the stage for a quick photograph. Wow. From using technology for transaction monitoring and layering it with AI, to solving for behavioral intent and offensive AI, along with addressing customer friction and customer experience by combining technology and captive models with regulatory tenets.

It was a very insightful and very meaningful panel. Thank you very much, each one of you. Our last session for the evening is one of the most interesting ones. It's what we do on the third element, or third pillar, of our DNA: impact. It's the impact spotlight, Wisely.ai, our client's perspective. Very few leaders in India's telecom landscape carry the depth of experience and the institutional weight that our next speaker brings to the stage. As Chairman and Managing Director of BSNL, he has orchestrated one of the sector's most compelling turnarounds, driving the rollout of India's first indigenous 4G network and restoring the organization to a clear path of growth, profitability and purpose.

With over three decades of service spanning TRAI, the Government of Tamil Nadu and an advisory role to the Government of Uganda, recognized with the Vishisht Sanchar award for distinguished service, and a gold medalist in electronics and communication engineering, he remains one of the most consequential voices in India's telecom and digital governance story. Ladies and gentlemen, please put your hands together for A. Robert J. Ravi, Chairman and Managing Director, BSNL.

A. Robert J. Ravi

...an important step we are thinking about; that's what I was talking about. On the network side, can I bring in AI? By bringing AI into the network side, I can even get patterns: customer patterns, calling patterns, network patterns. We were able to see exactly how and where most of the complaints were happening in the network, and this also helped me tweak my entire setup. Today I am very sure that, at the end of the study and research we are now doing on AI, as a user, if you are a BSNL customer, you will be able to intelligently speak to your RAN. When I say "intelligently speaking", it could be various things. So you can have...

You can request specific dedicated data or specific dedicated voice traffic. That means: today I am in this place, I need to video stream, I need, if not 1G, at least 10 Mbps of throughput available all the time; that will be made possible. That is the type of user-enabled platform we are building, which will provide control not only from the customer angle but also from the network angle. If this becomes successful, in future, when this becomes reality, no customer could be so easily phished or scammed. That's the way we are trying to go. Of course, looking at what has already happened, on the user-impact side we were able to authoritatively say how many connections were protected: close to 280 million spam instances have been handled to date.

This is on one side. Now we are integrating this particular aspect into a customer experience platform as well. How do we benefit from it? In my customer experience today, we have brought in something called the AI Vani system: a voice comes on and says, whatever you want, you can speak to the agent. And then we brought in something called a BSNL recharge expert system; it's completely AI-driven. Now, once we have stopped spam on the SMS side, the next thing we have to concentrate on is the data side, which is again what we are discussing with you all: how do we do this for the data side?

Data is not only on WhatsApp or social media; how can we expand the horizon of this particular area? That's where we thought: can I build intelligence into the system itself? So when you want to do, say, a recharge, it works; and it even works like a crawler in the network to identify the sites which need to be blocked, which should not be available to my customers. That sort of independent, intelligent network needs to be built, and that is what we are targeting. That's the second pillar. The last pillar, before I wind up, is the rural side. With BharatNet coming in at a very big scale, with the rollout of the BharatNet network, as we go close to the customer at the edge, we are seeing a lot of fraud coming in. Can I put in edge data centers? Using these edge data centers, can I really run what we call SLMs?

Today we talk about different LLM models. These LLM models require a lot of information; data is the key engine for them. And we all hesitate: why should I share my data? So this is where the next concept comes in, what we call federated learning: your data resides with you; I just learn from your data and federate over it. All this is possible when I go to the rural edges. There I will be able to protect the customer at the next level. I am very sure we could keep talking on this very interesting topic, but since time is short, I thank the organizers for giving me the opportunity.
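The federated learning idea described here, each edge site trains on its own data and only model updates (never raw data) are shared and averaged, can be sketched in a few lines. This is a toy FedAvg-style illustration on a one-parameter model, not BSNL's actual system.

```python
def local_step(w, data, lr=0.05):
    """One local gradient step fitting y = w*x to the site's private (x, y) pairs."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(site_weights):
    """Server aggregates by simple weight averaging (FedAvg-style)."""
    return sum(site_weights) / len(site_weights)

# Two edge sites, each holding private data drawn from the rule y = 2x
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0), (4.0, 8.0)]

w = 0.0
for _ in range(50):
    # Raw data never leaves a site; only updated weights travel to the server
    w = federated_average([local_step(w, site_a), local_step(w, site_b)])

assert abs(w - 2.0) < 1e-3  # the global model converges toward the true slope
```

The same pattern scales up to the SLMs the speaker mentions: edge data centers hold the data, and only gradients or weights are aggregated centrally.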

But my request to you all: there is still a lot of work to be done. Unless we have built a system where we can confidently tell our citizens that they are 100% safe on our network, our job is not done. And this is possible only when we bring in technology and deploy it across a platform that really intelligently builds this network. Thank you.

Wish Gurmukh Dev

Thank you. It has been a wonderful evening, absolutely thrilling to have two CEOs exchange and share with the audience real-life problems and how they converted them into an opportunity that is going to shape the future of telecom in one part of the world, followed by a panel. Thank you, Anshuman, and thank you once again to all the panelists who made the effort to come in and share their own perspectives on what could be changed structurally, from a regulatory perspective, from an ecosystem collaboration perspective, through to customer experience without friction. And lastly, thank you, dear CMD Mr. Robert Ravi, for sharing the deep collaboration that BSNL and Tanla Platforms have entered into, trying to set a lighthouse for what customer experience driving safe and secure customer transactions could really be. Thank you very much; it has been a true honor and a privilege to host everyone here. I am very thankful on behalf of Tanla Platforms and our group companies Karix and ValueFirst for hosting you all here. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Wish Gurmukh Dev
1 argument · 82 words per minute · 1069 words · 773 seconds
Argument 1
Tanla’s Wisely.ai platform is built on three principles: innovation, collaboration, and impact to combat spam and scam at scale
EXPLANATION
Wish Gurmukh Dev explains that Tanla’s DNA consists of three enduring principles that have driven the company for three decades. These principles shaped the development of Wisely.ai, their agentic AI platform specifically designed to identify, prevent, eliminate, and prosecute spam and scam operations globally.
EVIDENCE
Wisely.ai is live and delivering real impact at Indosat in Indonesia, at BSNL in India, and with leading banks in India, safeguarding millions of users in real time every single day
MAJOR DISCUSSION POINT
AI-driven platform development for fraud prevention
Sanjay Kapoor
2 arguments · 137 words per minute · 757 words · 329 seconds
Argument 1
Digital fraud has evolved from isolated phishing to AI-powered, cross-border, automated operations at industrial scale
EXPLANATION
Sanjay Kapoor describes how modern fraud has transformed from simple phishing attempts into sophisticated, AI-enabled operations that operate across borders with industrial-level automation. This represents a fundamental shift in the nature and scale of digital threats that requires equally sophisticated responses.
EVIDENCE
Globally consumers have lost over a trillion US dollars in scams. Today’s fraud is no longer isolated phishing – it is AI powered, it is cross border, it is automated and industrial in scale
MAJOR DISCUSSION POINT
Evolution and scale of digital fraud threats
AGREED WITH
Neha Gutma Mahatme, A. Robert J. Ravi
Argument 2
Global consumers have lost over $1 trillion to scams, with 65% of Indonesians facing spam or scam weekly
EXPLANATION
Sanjay Kapoor presents staggering statistics about the global impact of scams and fraud. He highlights that this is not just a consumer experience issue but has become an economic and systemic risk that demands leadership to combat effectively.
EVIDENCE
Digital payments are expected to surpass $14 trillion annually by 2027, with more than 5 billion people online. In South and Southeast Asia, nearly 2 billion people are coming online at a record speed. Both markets are facing rising cybercrime, digital fraud, and organized scam operations, causing billions of dollars worth of losses each year
MAJOR DISCUSSION POINT
Global scale and economic impact of digital fraud
AGREED WITH
Vikram Sinha, Ratan Kumar Kesh, Anshuman Kar
Vikram Sinha
5 arguments · 145 words per minute · 1305 words · 537 seconds
Argument 1
Indosat transformed from viewing fraud as customer complaints to a board-level strategic issue after learning Indonesians lost $5 billion to scams in 2024
EXPLANATION
Vikram Sinha describes a pivotal moment when he learned about the massive scale of fraud affecting Indonesians during a MasterCard advisory board meeting. This data showed that $5 billion was lost by Indonesians in 2024, with 65% facing spam or scam weekly, which transformed his perspective on the company’s responsibility to protect customers.
EVIDENCE
In early 2024, at an advisory board meeting in London, the Asia Scam Report and GASA, the Global Anti-Scam Alliance, presented the data and I was blown away. That report shows that in 2024 itself, Indonesians have lost 5 billion US dollars. These are all middle-income, lower-income women, elderly women
MAJOR DISCUSSION POINT
CEO leadership transformation on fraud prevention
AGREED WITH
Sanjay Kapoor, Ratan Kumar Kesh, Anshuman Kar
Argument 2
AI implementation for fraud protection delivered measurable business results: 9% ARPU growth vs 3% industry average and churn reduction from 3.6% to 1.6%
EXPLANATION
Vikram Sinha provides concrete business metrics demonstrating the ROI of AI-driven fraud protection. The implementation resulted in significantly higher revenue per user growth compared to industry averages and substantial reduction in customer churn, proving that fraud protection directly impacts business performance.
EVIDENCE
Quarter four, ARPU grew for the industry 3%, we grew 9%. Our churn for serious base greater than 90 days from a level of 3.6, 3.7 have come down to 1.6. We have put it on my quarterly result and investor deck
MAJOR DISCUSSION POINT
Business ROI of AI-driven fraud protection
AGREED WITH
Anshuman Kar
DISAGREED WITH
Bipin Preet Singh
Argument 3
Partnership approach over vendor relationships is crucial for solving real problems at scale using AI
EXPLANATION
Vikram Sinha emphasizes the importance of strategic partnerships rather than traditional vendor relationships when implementing AI solutions for fraud prevention. He explains that solving real problems at scale requires deep collaboration and alignment of commitment between organizations.
EVIDENCE
We work with 96 vendors, we categorize among them 20 strategic partners. But there are 4 or 5 where I invest time, which becomes very strategic for us. We want a partner, we don’t need a vendor. We want a partner who can work with us and use AI to solve this real problem
MAJOR DISCUSSION POINT
Strategic partnership approach for AI implementation
AGREED WITH
Bipin Preet Singh
Argument 4
Real-time protection of 2 billion spam instances and flagging of 2.3 million scammers demonstrates AI’s operational impact
EXPLANATION
Vikram Sinha provides specific operational metrics showing the scale at which AI-driven fraud protection operates. The system processes billions of communications in real-time and identifies millions of potential scammers, demonstrating the industrial scale of both the threat and the response.
EVIDENCE
Close to 2 billion spam instances. Scam. Clearly threat intelligence protected. 2.3 million scammers flagged and customers are getting real time protection. We just launched it on Whatsapp channel also
MAJOR DISCUSSION POINT
Operational scale and impact of AI fraud protection
Argument 5
Telcos’ role has evolved from just connecting customers to providing protection and peace of mind
EXPLANATION
Vikram Sinha argues that telecommunications companies must expand their mission beyond traditional connectivity services. He believes that providing protection from fraud and scams is now a fundamental responsibility of telcos, as they control the channels through which many scams operate.
EVIDENCE
The role of Telco is not only to connect, it is also to give peace of mind. Protection is a big statement. And the channel which is getting used is voice, WhatsApp. So you need to solve it for your customer. Otherwise, you have no business
MAJOR DISCUSSION POINT
Evolution of telco responsibilities in digital age
Anshuman Kar
2 arguments · 152 words per minute · 1866 words · 733 seconds
Argument 1
In India, 70% of scams originate from SMS channels, with 65 billion SMS and 15 billion OTT messages sent monthly
EXPLANATION
Anshuman Kar provides specific statistics about the scale of messaging in India and identifies SMS as the primary channel for scam origination. The massive volume of messages across both traditional SMS and OTT channels creates a significant attack surface that requires sophisticated protection mechanisms.
EVIDENCE
SMS as a channel, almost 70% of the scams originate from that channel. 65 billion SMSes are sent monthly in India. Another 15 billion are sent monthly over OTT channels
MAJOR DISCUSSION POINT
Scale and channels of digital fraud in India
AGREED WITH
Sanjay Kapoor, Vikram Sinha, Ratan Kumar Kesh
Argument 2
Wisely.ai has protected an estimated $500 million in losses within six months of launch
EXPLANATION
Anshuman Kar presents the financial impact of the AI-driven fraud protection platform, demonstrating significant value creation through loss prevention. This metric shows the tangible economic benefit of implementing advanced AI systems for fraud detection and prevention.
EVIDENCE
The estimations are within six months of launch, we have protected almost $500 million in estimated losses
MAJOR DISCUSSION POINT
Financial impact and ROI of AI fraud protection
AGREED WITH
Vikram Sinha
Ratan Kumar Kesh
4 arguments · 181 words per minute · 1453 words · 480 seconds
Argument 1
Two main fraud problems: customers being defrauded and accounts being used as mules, with the latter being more troubling due to willing participation for easy money
EXPLANATION
Ratan Kumar Kesh identifies two distinct fraud challenges facing banks. While customer defrauding is serious, he considers the mule account problem more troubling because it involves willing participation by account holders who rent their accounts for monthly fees, making it harder to combat through traditional means.
EVIDENCE
There are a lot of people who would be willing to open accounts across multiple banks and then go and rent that account out for a fee per month. The India Stack makes it pretty simple to have an account onboarded very quickly, in just a few minutes
MAJOR DISCUSSION POINT
Types and complexity of banking fraud challenges
DISAGREED WITH
Neha Gutma Mahatme
Argument 2
Supreme Court identified 54-56,000 crores lost to scams in India, affecting everyone from senior citizens to IIT professors
EXPLANATION
Ratan Kumar Kesh references a recent Supreme Court judgment that quantified the massive scale of fraud losses in India. He emphasizes that fraud affects people across all education and social levels, from vulnerable senior citizens to highly educated professionals, showing that sophistication of scams transcends traditional risk categories.
EVIDENCE
Recent Supreme Court judgment, a couple of weeks back, talked about 54 or 56,000 crores being lost to scams. They called it dacoity. The spectrum ranges from mostly senior citizens and at times even IIT Bombay professors
MAJOR DISCUSSION POINT
Scale and demographic impact of fraud in India
AGREED WITH
Sanjay Kapoor, Vikram Sinha, Anshuman Kar
Argument 3
Banks can identify out-of-routine transactions using sophisticated rule engines and AI algorithms, but the mule account problem persists
EXPLANATION
Ratan Kumar Kesh explains that banks have developed effective systems to detect unusual transaction patterns using AI and rule engines. However, the persistent challenge is accounts being willingly rented out as mules, which is harder to detect since the account holders are complicit in the fraud.
EVIDENCE
We process millions of transactions. My daily UPI volume is something like 60 lakhs per day. Depending on the customer’s profile, the customer routine transaction pattern, we can identify an out-of-routine transaction. AI is helping us to build the algo a lot better
MAJOR DISCUSSION POINT
Banking fraud detection capabilities and limitations
DISAGREED WITH
Bipin Preet Singh, Neha Gutma Mahatme
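The out-of-routine detection described in this argument can be sketched as a minimal per-customer anomaly check. The z-score rule, threshold, and transaction history below are illustrative assumptions; production systems layer many such rules with learned models:

```python
from statistics import mean, stdev

def is_out_of_routine(history, amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    customer's own routine, measured as a z-score against history."""
    if len(history) < 5:  # too little history to build a profile
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

# A customer who routinely transfers small amounts (illustrative data).
routine = [500, 650, 480, 700, 550, 620, 510]

print(is_out_of_routine(routine, 600))     # False: within routine
print(is_out_of_routine(routine, 50_000))  # True: out of routine
```

This is exactly the kind of check that catches a defrauded customer's unusual transfer but, as the argument notes, misses a mule account, because a rented account's "routine" is the fraudulent flow itself.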
Argument 4
Customer education and awareness need improvement across the ecosystem to prevent gullible behavior
EXPLANATION
Ratan Kumar Kesh argues that improving customer awareness and education is crucial for fraud prevention. He believes that citizens need better understanding of banking products and fraud tactics, and that this requires coordinated effort across the entire ecosystem including banks, payment companies, and government agencies.
EVIDENCE
The citizens instead of becoming gullible in terms of having the awareness about banking product, the banks and the payment companies and the country has to create more awareness. It’s an ecosystem problem and if all of us come together, all of us create more awareness
MAJOR DISCUSSION POINT
Need for comprehensive customer education on fraud prevention
AGREED WITH
Bipin Preet Singh, Neha Gutma Mahatme
Neha Gutma Mahatme
4 arguments · 253 words per minute · 502 words · 118 seconds
Argument 1
Scams involve behavioral journeys that start before payment transactions, with social engineering happening upstream
EXPLANATION
Neha Gutma Mahatme explains that fraud is not just a transaction-level problem but involves complex behavioral manipulation that begins well before any payment occurs. The social engineering, deepfakes, voice cloning, and identity layering happen upstream, making it difficult to stop scams at the point of transaction.
EVIDENCE
Scam is not at a transaction or a payment transaction level, it is a behavioral journey. The social engineering happens much ahead, it is not really when the transaction is happening. The deepfakes, the voiceovers, the fake identities, the layering of transactions
MAJOR DISCUSSION POINT
Complexity of fraud as behavioral manipulation process
AGREED WITH
Sanjay Kapoor, A. Robert J. Ravi
DISAGREED WITH
Ratan Kumar Kesh
Argument 2
Data silos limit visibility as no single institution sees the complete fraud journey across platforms
EXPLANATION
Neha Gutma Mahatme identifies data fragmentation as a key challenge in fraud prevention. While individual companies like Amazon have good internal data, they lack visibility into how social engineering patterns develop outside their platforms, creating blind spots in fraud detection and prevention.
EVIDENCE
The data silos limit visibility. While as Amazon we have really good data internally on the platform, we miss the data on how these social engineering patterns are getting created outside of Amazon
MAJOR DISCUSSION POINT
Data fragmentation challenges in fraud prevention
AGREED WITH
Bipin Preet Singh, Ratan Kumar Kesh
DISAGREED WITH
Bipin Preet Singh, Ratan Kumar Kesh
Argument 3
Offensive AI operates unconstrained while defensive AI faces privacy, regulatory, and customer experience constraints
EXPLANATION
Neha Gutma Mahatme highlights the asymmetric nature of AI usage in fraud. Criminals using offensive AI operate without constraints, while legitimate organizations implementing defensive AI must comply with privacy regulations, customer experience requirements, and other legal limitations, creating an inherent disadvantage.
EVIDENCE
The offensive AI works unconstrained while the defensive AI has constraints of privacy, constraints of regulations, constraints of customer experience. Human psychology evolves faster than models
MAJOR DISCUSSION POINT
Asymmetric constraints between offensive and defensive AI
Argument 4
AI helps detect anomalies but struggles with detecting malintent and behavioral patterns
EXPLANATION
Neha Gutma Mahatme points out a fundamental limitation of current AI systems in fraud detection. While AI excels at identifying statistical anomalies and unusual patterns, it struggles to understand and detect malicious intent and complex behavioral manipulation, which are key components of sophisticated fraud schemes.
EVIDENCE
AI is helping detect anomalies. It is not helping detect the malintent or the behavior and unless and until we solve the malintent or the behavior, the scams will continue
MAJOR DISCUSSION POINT
Limitations of AI in detecting malicious intent
Bipin Preet Singh
6 arguments · 158 words per minute · 949 words · 358 seconds
Argument 1
99% of fraud complaints at MobiKwik involve money stolen from other banks but routed through their platform, highlighting interconnected fraud ecosystem
EXPLANATION
Bipin Preet Singh illustrates how fraud operates across interconnected financial systems, where money stolen from one institution flows through multiple platforms. This demonstrates that fraud prevention cannot be solved by individual entities alone, as the ecosystem’s interconnected nature creates vulnerabilities that fraudsters exploit.
EVIDENCE
99% of whatever scams that our customers complain of are not money stolen out of MobiKwik, but actually stolen out of some other bank and brought into MobiKwik. We are the recipients and we get the complaints
MAJOR DISCUSSION POINT
Interconnected nature of financial fraud ecosystem
Argument 2
Individual fraud models trained on company-specific data perform better than industry-level solutions
EXPLANATION
Bipin Preet Singh argues that fraud detection models work best when trained on specific company data rather than generic industry datasets. This is because each platform has unique transaction patterns and user behaviors that generic models cannot effectively capture, leading to poor performance when applied across different contexts.
EVIDENCE
We have created our own models trained on our own data sets because they work best for our kind of transaction pattern. We have explored solutions from other companies, but they have a very poor performance because they are trained on some industry-level data
MAJOR DISCUSSION POINT
Effectiveness of customized vs. generic fraud detection models
AGREED WITH
Vikram Sinha
DISAGREED WITH
Ratan Kumar Kesh, Neha Gutma Mahatme
Argument 3
Fraud prevention requires ecosystem-wide collaboration as no single entity can control scams alone
EXPLANATION
Bipin Preet Singh emphasizes that the interconnected nature of digital financial services means that fraud prevention requires coordinated effort across the entire ecosystem. He argues that vulnerabilities in any single part of the system can be exploited to create fraud throughout the network, making individual efforts insufficient.
EVIDENCE
One entity cannot control scams. It’s very, very difficult. The financialization and digitalization has happened at an exponential scale. So many different entities have gotten interconnected, a loophole in just one place is sufficient to create fraud and scam throughout the ecosystem
MAJOR DISCUSSION POINT
Need for ecosystem-wide fraud prevention collaboration
AGREED WITH
Neha Gutma Mahatme, Ratan Kumar Kesh
Argument 4
Despite knowing fraud origins and having technology to track perpetrators, law enforcement action remains insufficient
EXPLANATION
Bipin Preet Singh expresses frustration that while the technology exists to identify fraud sources and the locations are well-known to authorities, adequate law enforcement action is not being taken. He argues that until there is fear of legal consequences, payment fraud will continue to proliferate.
EVIDENCE
Almost the entire country, all the police, everyone knows where the scams are. It is the same places. It is the same origins. But somehow no action gets taken. Until there is fear of law, until enforcement has happened, payment fraud will continue
MAJOR DISCUSSION POINT
Gap between fraud detection capabilities and law enforcement action
Argument 5
RBI is developing a Digital Payments Intelligence Authority for national-level data sharing across the payments ecosystem
EXPLANATION
Bipin Preet Singh discusses a promising regulatory initiative where RBI is creating a centralized intelligence body to enable data sharing across the entire payments ecosystem. This represents a shift toward national-level coordination in fraud prevention, which he believes is essential for effective fraud detection at scale.
EVIDENCE
There is an effort going on at the RBI’s end to create an intelligence body which will share data across the entire payments ecosystem. It’s called Digital Payments Intelligence Authority. For the first time, data across the financial economy is being used in one place
MAJOR DISCUSSION POINT
National-level regulatory initiatives for fraud prevention
Argument 6
The regulatory response includes making transactions more difficult to add friction and prevent easy fraud
EXPLANATION
Bipin Preet Singh reports that regulators, particularly RBI, are considering reversing the trend of making transactions easier by introducing friction to prevent fraud. This represents a significant policy shift from prioritizing convenience to prioritizing security in digital payments.
EVIDENCE
The regulator, especially RBI, is very concerned about this. They are saying that enough of making transactions easy. Now we need to go in the reverse direction. Now we need to make transactions a little difficult so that there is some friction
MAJOR DISCUSSION POINT
Regulatory shift from convenience to security in digital payments
DISAGREED WITH
Vikram Sinha
A. Robert J. Ravi
3 arguments · 129 words per minute · 757 words · 349 seconds
Argument 1
BSNL is implementing AI across network operations to enable intelligent customer-network interactions and proactive spam/scam prevention
EXPLANATION
A. Robert J. Ravi describes BSNL’s comprehensive AI implementation strategy that goes beyond traditional fraud detection to create intelligent network infrastructure. The vision includes enabling customers to intelligently communicate with the network for specific service requirements while building proactive protection against fraud at the network level.
EVIDENCE
We are thinking about bringing AI in the network side. As a user, if you are a BSNL customer, you can intelligently speak to your RAN. You can request specific dedicated data, specific dedicated voice traffic. We have brought in AI Vani system and BSNL recharge expert system
MAJOR DISCUSSION POINT
Comprehensive AI integration in telecom network operations
Argument 2
BSNL is developing federated learning systems where customer data remains local while enabling AI learning across edge data centers
EXPLANATION
A. Robert J. Ravi outlines BSNL’s approach to privacy-preserving AI through federated learning, particularly for rural deployments. This system allows AI models to learn from customer data without requiring data to leave the customer’s location, addressing privacy concerns while enabling sophisticated AI capabilities at the network edge.
EVIDENCE
With BharatNet coming in, we are trying to see: can I put in edge data centers, and using these edge data centers, can I really run SLMs? This is the next concept of federated learning. Your data resides with you. I just learn from your data and I federate over it
MAJOR DISCUSSION POINT
Privacy-preserving AI through federated learning in rural networks
Argument 3
The goal is building systems where citizens can be 100% confident in network safety
EXPLANATION
A. Robert J. Ravi sets an ambitious target for network security, stating that the ultimate objective is to create systems that provide complete confidence to citizens in network safety. He emphasizes that this requires intelligent network infrastructure that can proactively protect users rather than just reactively respond to threats.
EVIDENCE
Unless we have built a system where we can confidently say to our citizens that you are 100% safe in my network, our job is not done. This is possible only when we bring in technology and deploy it across a platform which really intelligently builds this network
MAJOR DISCUSSION POINT
Vision for complete citizen confidence in network security
AGREED WITH
Sanjay Kapoor, Neha Gutma Mahatme
Audience
1 argument · 159 words per minute · 188 words · 70 seconds
Argument 1
Government initiatives like RBI’s mule hunter and digital payment intelligence platform show promise for integrated fraud prevention
EXPLANATION
An audience member points out existing government initiatives, particularly RBI’s mule hunter program and the digital payment intelligence platform, as examples of integrated approaches to fraud prevention. The question suggests that these initiatives may address some of the coordination challenges discussed by the panelists.
EVIDENCE
Government of India is already working on the digital payment intelligence platform. RBI is working on mule hunter; all the banks are contributing their own three-month, five-month data individually, and everything is coming into the digital payment intelligence platform
MAJOR DISCUSSION POINT
Existing government initiatives for coordinated fraud prevention
Agreements
Agreement Points
AI-driven fraud protection delivers measurable business and operational results
Speakers: Vikram Sinha, Anshuman Kar
AI implementation for fraud protection delivered measurable business results: 9% ARPU growth vs 3% industry average and churn reduction from 3.6% to 1.6%
Wisely.ai has protected an estimated $500 million in losses within six months of launch
Both speakers provide concrete metrics demonstrating the tangible business value and financial impact of AI-driven fraud protection systems, showing that these investments deliver measurable ROI
Fraud prevention requires ecosystem-wide collaboration rather than individual solutions
Speakers: Bipin Preet Singh, Neha Gutma Mahatme, Ratan Kumar Kesh
Fraud prevention requires ecosystem-wide collaboration as no single entity can control scams alone
Data silos limit visibility as no single institution sees the complete fraud journey across platforms
Customer education and awareness need improvement across the ecosystem to prevent gullible behavior
All three speakers agree that fraud is an interconnected problem that cannot be solved by individual institutions alone, requiring coordinated effort across banks, fintechs, platforms, and regulatory bodies
The scale and sophistication of digital fraud has reached crisis levels requiring urgent action
Speakers: Sanjay Kapoor, Vikram Sinha, Ratan Kumar Kesh, Anshuman Kar
Global consumers have lost over $1 trillion to scams, with 65% of Indonesians facing spam or scam weekly
Indosat transformed from viewing fraud as customer complaints to a board-level strategic issue after learning Indonesians lost $5 billion to scams in 2024
Supreme Court identified 54-56,000 crores lost to scams in India, affecting everyone from senior citizens to IIT professors
In India, 70% of scams originate from SMS channels, with 65 billion SMS and 15 billion OTT messages sent monthly
All speakers acknowledge that digital fraud has reached unprecedented scale with massive financial losses globally, affecting all demographics and requiring board-level attention and strategic response
Traditional reactive approaches to fraud detection are insufficient for modern threats
Speakers: Sanjay Kapoor, Neha Gutma Mahatme, A. Robert J. Ravi
Digital fraud has evolved from isolated phishing to AI-powered, cross-border, automated operations at industrial scale
Scams involve behavioral journeys that start before payment transactions, with social engineering happening upstream
The goal is building systems where citizens can be 100% confident in network safety
Speakers agree that modern fraud requires proactive, intelligent systems rather than reactive detection, as threats have evolved to sophisticated, AI-powered operations that begin well before transactions occur
Strategic partnerships are essential for effective AI implementation in fraud prevention
Speakers: Vikram Sinha, Bipin Preet Singh
Partnership approach over vendor relationships is crucial for solving real problems at scale using AI
Individual fraud models trained on company-specific data perform better than industry-level solutions
Both speakers emphasize that successful AI implementation requires deep strategic partnerships and customized solutions rather than generic vendor relationships or one-size-fits-all approaches
Similar Viewpoints
Both telecom leaders agree that the fundamental mission of telecommunications companies has expanded beyond connectivity to include comprehensive customer protection and security as core responsibilities
Speakers: Vikram Sinha, A. Robert J. Ravi
Telcos’ role has evolved from just connecting customers to providing protection and peace of mind
The goal is building systems where citizens can be 100% confident in network safety
Both speakers highlight the asymmetric challenges faced by legitimate organizations in fighting fraud, where criminals operate without constraints while defenders must comply with regulations and maintain customer experience
Speakers: Neha Gutma Mahatme, Bipin Preet Singh
Offensive AI operates unconstrained while defensive AI faces privacy, regulatory, and customer experience constraints
Despite knowing fraud origins and having technology to track perpetrators, law enforcement action remains insufficient
Both financial services leaders identify the interconnected nature of fraud where money flows across multiple institutions, making individual prevention efforts insufficient and highlighting the mule account problem
Speakers: Ratan Kumar Kesh, Bipin Preet Singh
Two main fraud problems: customers being defrauded and accounts being used as mules, with the latter being more troubling due to willing participation for easy money
99% of fraud complaints at MobiKwik involve money stolen from other banks but routed through their platform, highlighting interconnected fraud ecosystem
Unexpected Consensus
Need for increased transaction friction to combat fraud
Speakers: Bipin Preet Singh
The regulatory response includes making transactions more difficult to add friction and prevent easy fraud
It’s unexpected that a fintech CEO would support making transactions more difficult, as this goes against the industry’s traditional focus on seamless user experience. This represents a significant shift in thinking where security is prioritized over convenience
Privacy-preserving AI through federated learning
Speakers: A. Robert J. Ravi
BSNL is developing federated learning systems where customer data remains local while enabling AI learning across edge data centers
It’s unexpected to see a traditional government telecom operator leading in advanced AI privacy techniques like federated learning, typically associated with cutting-edge tech companies. This shows progressive thinking in balancing AI capabilities with privacy protection
AI being used by both fraudsters and defenders
Speakers: Neha Gutma Mahatme, Vikram Sinha
Offensive AI operates unconstrained while defensive AI faces privacy, regulatory, and customer experience constraints
Partnership approach over vendor relationships is crucial for solving real problems at scale using AI
There’s unexpected consensus that AI is fundamentally changing the fraud landscape by being weaponized by criminals, requiring defenders to also adopt AI but within ethical and regulatory constraints. This represents a new paradigm in cybersecurity
Overall Assessment

Strong consensus exists on the crisis-level scale of digital fraud, the need for AI-driven solutions, ecosystem-wide collaboration, and the evolution of organizational responsibilities beyond traditional boundaries

High level of consensus with significant implications for industry transformation. All speakers agree that traditional approaches are insufficient and that AI-powered, collaborative solutions are essential. The consensus suggests a fundamental shift in how organizations view their responsibilities in the digital ecosystem, moving from siloed approaches to integrated, proactive protection strategies. This alignment across different sectors (telecom, banking, fintech, platforms) indicates readiness for coordinated action and regulatory support for comprehensive fraud prevention initiatives.

Differences
Different Viewpoints
Individual vs. ecosystem-wide approach to fraud prevention
Speakers: Bipin Preet Singh, Ratan Kumar Kesh, Neha Gutma Mahatme
Individual fraud models trained on company-specific data perform better than industry-level solutions.
Banks can identify out-of-routine transactions using sophisticated rule engines and AI algorithms, but the mule account problem persists.
Data silos limit visibility, as no single institution sees the complete fraud journey across platforms.
Bipin argues for company-specific models while others emphasize the need for ecosystem-wide collaboration and data sharing to address fraud comprehensively
Regulatory approach to transaction friction
Speakers: Bipin Preet Singh, Vikram Sinha
The regulatory response includes adding friction to transactions to prevent easy fraud.
AI implementation for fraud protection delivered measurable business results: 9% ARPU growth vs. the 3% industry average, and churn reduction from 3.6% to 1.6%.
Bipin reports regulators want to add friction to prevent fraud, while Vikram demonstrates that AI can maintain customer experience while providing protection
Primary source of fraud problem
Speakers: Ratan Kumar Kesh, Neha Gutma Mahatme
There are two main fraud problems: customers being defrauded and accounts being used as mules, with the latter more troubling due to willing participation for easy money.
Scams involve behavioral journeys that start before payment transactions, with social engineering happening upstream.
Ratan focuses on mule accounts as the primary concern while Neha emphasizes upstream social engineering as the root cause
Unexpected Differences
Law enforcement effectiveness
Speakers: Bipin Preet Singh, Ratan Kumar Kesh
Despite knowing fraud origins and having technology to track perpetrators, law enforcement action remains insufficient.
Customer education and awareness need improvement across the ecosystem to prevent gullible behavior.
While both acknowledge systemic issues, Bipin is more critical of law enforcement inaction despite known fraud locations, while Ratan focuses more on customer behavior and education as solutions
Data privacy vs. fraud prevention trade-offs
Speakers: Neha Gutma Mahatme, A. Robert J. Ravi
Offensive AI operates unconstrained, while defensive AI faces privacy, regulatory, and customer experience constraints.
BSNL is developing federated learning systems in which customer data remains local while enabling AI learning across edge data centers.
Neha sees privacy constraints as limiting defensive AI effectiveness, while Robert proposes federated learning as a solution that preserves privacy while enabling AI capabilities
Overall Assessment

The main disagreements center around implementation approaches rather than fundamental goals. All speakers agree fraud is a serious ecosystem-wide problem requiring AI and collaboration, but differ on whether solutions should be centralized vs. distributed, company-specific vs. industry-wide, and technology-focused vs. education-focused.

Moderate disagreement with high consensus on core issues. The disagreements are constructive and focus on tactical approaches rather than strategic objectives, suggesting potential for convergence through hybrid approaches that combine different perspectives.

Partial Agreements
All speakers agree that fraud is an ecosystem-wide problem requiring collaboration, but disagree on implementation – Bipin emphasizes national-level data sharing initiatives, Neha focuses on breaking down data silos, and Ratan emphasizes customer education
Speakers: Bipin Preet Singh, Neha Gutma Mahatme, Ratan Kumar Kesh
Fraud prevention requires ecosystem-wide collaboration, as no single entity can control scams alone.
Data silos limit visibility, as no single institution sees the complete fraud journey across platforms.
Customer education and awareness need improvement across the ecosystem to prevent gullible behavior.
Both agree on comprehensive AI implementation for fraud prevention, but Vikram emphasizes strategic partnerships while Robert focuses on in-house AI development and federated learning
Speakers: Vikram Sinha, A. Robert J. Ravi
A partnership approach, rather than a vendor relationship, is crucial for solving real problems at scale using AI.
BSNL is implementing AI across network operations to enable intelligent customer-network interactions and proactive spam/scam prevention.
Takeaways
Key takeaways
Digital fraud has evolved into an AI-powered, industrial-scale threat requiring a coordinated ecosystem response rather than individual institutional solutions.
Telcos must transform from connectivity providers to protection platforms, with AI-driven fraud prevention becoming a core business differentiator.
Real-time AI implementation delivers measurable business results: Indosat achieved 9% ARPU growth vs. the 3% industry average and reduced churn from 3.6% to 1.6%.
The attack surface is interconnected across the digital ecosystem, but current defense mechanisms remain fragmented across institutions.
Partnership-based approaches with specialized AI platforms like Wisely.ai prove more effective than vendor relationships for solving fraud at scale.
Customer education and behavioral change are as critical as technological solutions in combating sophisticated social engineering attacks.
National-level data sharing and intelligence platforms are essential for effective fraud prevention, given the cross-border, multi-platform nature of modern scams.
Resolutions and action items
BSNL committed to expanding AI implementation across network operations to enable intelligent customer-network interactions.
Indosat demonstrated successful deployment of the Wisely.ai platform, protecting 100 million subscribers with measurable business impact.
Industry consensus on the need for ecosystem-wide collaboration rather than siloed approaches to fraud prevention.
Recognition that RBI’s Digital Payments Intelligence Platform and mule hunter initiatives require industry support and participation.
Agreement on implementing federated learning systems in which customer data remains local while enabling AI learning across platforms.
Unresolved issues
Law enforcement gaps persist despite known fraud origins and available tracking technology.
Data sharing limitations across institutions continue to hamper comprehensive fraud detection.
The balance between customer experience friction and security measures remains challenging to optimize.
Offensive AI capabilities continue to outpace defensive AI due to fewer constraints on malicious actors.
International coordination mechanisms for cross-border fraud prevention are underdeveloped.
Standardization of fraud prevention protocols across different types of financial institutions.
Effective methods to prevent willing participation in mule account schemes for easy money.
Suggested compromises
RBI’s approach of adding transaction friction to balance ease of use with security requirements.
Federated learning models that enable AI training while keeping customer data locally stored.
Ecosystem-wide responsibility sharing rather than placing the full burden on individual institutions.
Graduated response systems that apply different security levels based on transaction patterns and risk profiles.
Public-private partnership models for national fraud prevention infrastructure development.
Thought Provoking Comments
That report shows that in 2024 itself, 5 billion US dollar Indonesians have lost. What touched me is these are all middle income, lower income women, elderly women… every Indonesian, 65% of the Indonesians are facing spam or scam on a weekly basis. So that itself was a trigger for me that Indosat being such an iconic brand… Our role is not only to connect. Our role is to also protect my 100 million customer.
This comment reframes the fundamental purpose of telecom companies from mere connectivity providers to guardians of digital safety. The specific statistics ($5 billion losses, 65% weekly exposure) and focus on vulnerable populations (women, elderly) transforms the discussion from abstract business concerns to urgent social responsibility.
This shifted the entire conversation from viewing fraud as a customer service issue to recognizing it as a core business imperative. It established the moral and business case for massive AI investments and set the tone for discussing telecom companies as protectors rather than just service providers.
Speaker: Vikram Sinha
I’m a very strong believer of fake it till you make it. So I started talking about AI two years back and I had very little understanding… But within an hour, I understood companies, countries who have been all in and ahead of the curve and who are solving real problem, they have started seeing value.
This brutally honest admission from a CEO about initially not understanding AI while publicly championing it reveals the reality of technology adoption in large organizations. It challenges the typical narrative of confident, all-knowing leadership and shows authentic learning in action.
This vulnerability opened up more honest discussion about AI implementation challenges. It gave permission for other panelists to discuss their own learning curves and uncertainties, making the conversation more authentic and practical rather than purely promotional.
Speaker: Vikram Sinha
SCAM is not at a transaction or a payment transaction level, it is a behavioral journey it starts much before the payment really happens… The offensive AI works unconstrained while the defensive AI has constraints of privacy, constraints of regulations, constraints of customer experience.
This insight fundamentally reframes the fraud problem from a point-in-time transaction issue to a complex behavioral journey. The observation about asymmetric AI capabilities (offensive vs defensive) introduces a sophisticated understanding of the technological arms race.
This comment elevated the technical sophistication of the discussion and helped explain why traditional fraud prevention fails. It led to deeper exploration of ecosystem-wide solutions rather than individual company approaches, and highlighted the inherent disadvantages faced by legitimate businesses.
Speaker: Neha Gutma Mahatme
99% of whatever scams that our customers complain of are not the money stolen out of MobiKwik, but it is actually stolen out of some other bank and come into MobiKwik. So we are the recipients… one entity cannot control scams. It’s very, very difficult.
This reveals the interconnected nature of the fraud ecosystem where companies become unwilling participants in fraud chains. It challenges the assumption that individual companies can solve fraud independently and highlights the blame-shifting that occurs in the ecosystem.
This observation redirected the conversation toward systemic solutions and shared responsibility. It helped explain why individual AI models and company-specific solutions have limited effectiveness, leading to discussion of national-level coordination and data sharing initiatives.
Speaker: Bipin Preet Singh
The fraudster is telling my employees, saying that, can you share with me more data? Don’t worry, I’ll not tell anybody, just give me data, I’ll give you 70,000, I can even pay you 1,50,000 per account.
This anecdote reveals the sophisticated recruitment tactics of fraudsters and the economic incentives driving the fraud ecosystem. It shows how fraudsters are actively trying to corrupt the very systems designed to protect against them, highlighting the human element in cybersecurity.
This story brought a visceral reality to the discussion, moving beyond abstract statistics to show the personal and institutional vulnerabilities. It emphasized that technology solutions alone are insufficient without addressing human factors and economic incentives.
Speaker: Ratan Kumar Kesh
Unless we have built a system where we confidently say our citizens that you are 100% safe in my network, our job is not done. And this is possible only when we bring in technology and play it across in a platform which really intelligently builds this network.
This sets an absolute standard for success (100% safety) rather than incremental improvements, and positions network-level AI integration as a national infrastructure imperative. It elevates the discussion from business optimization to citizen protection as a fundamental right.
This comment provided a visionary endpoint for the discussion, establishing citizen safety as the ultimate measure of success. It helped frame all the previous technical and business discussions within a larger national security and social responsibility context.
Speaker: A. Robert J. Ravi
Overall Assessment

These key comments fundamentally transformed what could have been a typical technology product discussion into a profound examination of corporate responsibility, systemic vulnerabilities, and societal protection. Vikram’s honest admission about learning AI while implementing it set a tone of authentic dialogue, while his statistics about fraud impact established the moral imperative. Neha’s insight about behavioral journeys and asymmetric AI capabilities elevated the technical sophistication, while Bipin’s revelation about cross-institutional fraud flows highlighted systemic interconnectedness. The banking perspective on insider threats and the BSNL vision of 100% citizen safety provided bookends of current reality and future aspiration. Together, these comments shaped a discussion that moved from individual company solutions to ecosystem-wide collaboration, from reactive fraud detection to proactive citizen protection, and from technology implementation to social responsibility. The conversation evolved into a call for coordinated national-level action rather than fragmented individual efforts.

Follow-up Questions
How can AI-driven fraud prevention be expanded to protect against data-side scams beyond SMS, particularly on WhatsApp and social media platforms?
BSNL’s CMD identified that while SMS spam protection is being addressed, the next frontier is protecting users from scams on data channels like WhatsApp and social media, which requires expanding the AI protection horizon.
Speaker: A. Robert J. Ravi
How can federated learning be implemented at rural edge data centers to protect customer data while still enabling AI-driven fraud protection?
This addresses the challenge of protecting customer privacy while still leveraging their data for fraud prevention, particularly important for rural customers who may be more vulnerable to scams.
Speaker: A. Robert J. Ravi
What specific engineering and technical collaboration opportunities exist between telcos and AI platforms for training fraud detection models?
Vikram mentioned that Indosat wanted to do engineering work with Tanla using their GPU clusters, suggesting there are untapped technical collaboration opportunities that could enhance fraud detection capabilities.
Speaker: Vikram Sinha
How can the Digital Payments Intelligence Platform initiative be optimized to ensure real-time data sharing across the entire financial ecosystem?
While the government initiative exists, questions remain about its effectiveness in enabling real-time, comprehensive data sharing needed to combat sophisticated, cross-platform fraud schemes.
Speaker: Bipin Preet Singh and Audience Member
What mechanisms can be developed to better coordinate law enforcement action against known fraud hotspots and repeat offenders?
Both speakers noted that fraud origins are often known but enforcement action is lacking, suggesting need for research into more effective law enforcement coordination mechanisms.
Speaker: Bipin Preet Singh and Ratan Kumar Kesh
How can AI models be trained to detect malicious intent and behavioral patterns rather than just transaction anomalies?
Current AI focuses on detecting anomalies but struggles with identifying malicious intent, which is crucial since social engineering happens before the actual transaction.
Speaker: Neha Gutma Mahatme
What standards and protocols are needed for cross-border fraud prevention given that scammers operate internationally?
As fraud becomes increasingly international while current solutions are largely national, research is needed into international cooperation frameworks and standards.
Speaker: Anshuman Kar
How can the balance between defensive AI constraints (privacy, regulations, customer experience) and offensive AI capabilities be optimized?
The asymmetry between constrained defensive AI and unconstrained offensive AI used by fraudsters needs to be addressed through research into more effective defensive strategies.
Speaker: Neha Gutma Mahatme
What customer education and awareness strategies are most effective in preventing citizens from becoming fraud victims or unwitting accomplices?
The ‘mule account’ problem where citizens rent their accounts to fraudsters suggests need for research into more effective education and awareness programs.
Speaker: Ratan Kumar Kesh

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Responsible AI in India Leadership Ethics & Global Impact part1_2

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on translating responsible AI principles into practical enterprise implementation across various industries in India. The session, presented by Adobe in association with FICCI, examined how organizations can move beyond theoretical commitments to demonstrable responsible AI practices as regulatory frameworks like the EU AI Act and India’s new IT rules take effect in 2026.


Andy Parsons from Adobe introduced the Content Authenticity Initiative, built around the open C2PA standard, which provides transparency for AI-generated content through content credentials that act like “nutrition labels” for digital media. He emphasized that responsible AI must be built into systems from the ground up rather than added as an afterthought, highlighting the shift from asking whether to be responsible with AI to proving that organizations have been responsible.


The panel discussion revealed diverse approaches across industries. Air India’s Dr. Satya Ramaswamy described their global-first generative AI virtual assistant that handles 40,000 daily customer queries while maintaining strict safety protocols and human oversight capabilities. NPCI’s Vishal Kanvaty explained their fraud detection systems that prioritize minimizing false positives while maintaining transparency, allowing customers to understand why transactions are declined through AI-powered explanations.


Adobe’s Prativa Mohapatra outlined their “ART” framework – Accountability, Responsibility, and Transparency – embedded in products like Firefly and Acrobat Assistant, ensuring enterprise-grade AI tools provide traceable, licensed content. RPG Group’s Amol Deshpande emphasized the need for scalable governance frameworks that can accommodate diverse business units while providing appropriate guardrails for different AI applications.


The discussion highlighted challenges including uneven adoption, limited consumer awareness, and the risk of responsible AI becoming a luxury for large enterprises while smaller organizations struggle with implementation costs. Panelists agreed that industry leaders must create accessible frameworks and standards to democratize responsible AI practices. The conversation concluded with consensus that while industry self-regulation is valuable, regulatory intervention is inevitable and necessary to ensure AI systems serve human values and societal benefit at scale.


Keypoints

Major Discussion Points:

Transition from AI principles to practical implementation: The discussion emphasized moving beyond theoretical responsible AI commitments to demonstrable, measurable practices that can prove compliance and accountability, especially with upcoming regulations like the EU AI Act taking effect in 2026.


Industry-specific approaches to responsible AI governance: Panelists shared how different sectors (aviation, payments, conglomerates, creative tools) implement responsible AI differently based on their unique risk profiles, regulatory requirements, and operational contexts, highlighting that “one size doesn’t fit all.”


Content authenticity and transparency standards: Adobe’s Content Authenticity Initiative and C2PA standards were presented as a concrete example of responsible AI implementation, focusing on content provenance, transparency, and “nutrition labels” for digital content to combat misinformation and synthetic content risks.


Balancing innovation with safety and compliance: Multiple speakers addressed the challenge of maintaining rapid AI adoption and innovation while ensuring proper guardrails, risk management, and regulatory compliance, particularly in high-stakes industries like aviation and financial services.


Democratizing responsible AI across enterprise sizes: The panel discussed the risk of responsible AI becoming a “luxury” for large enterprises only, emphasizing the need for industry leaders to create frameworks and standards that smaller organizations and MSMEs can adopt and implement effectively.


Overall Purpose:

The discussion aimed to provide practical guidance for translating responsible AI principles into actionable enterprise strategies, moving beyond theoretical commitments to concrete implementation practices that ensure accountability, transparency, and compliance across different industries and organization sizes.


Overall Tone:

The tone was professional, collaborative, and pragmatically optimistic throughout. Speakers maintained a solution-oriented approach, sharing real-world examples and acknowledging challenges while emphasizing collective responsibility. The discussion remained constructive and forward-looking, with panelists building on each other’s insights rather than debating opposing viewpoints. The moderator kept the pace brisk but allowed for substantive exchanges, and the closing remarks reinforced the collaborative spirit with commitments to continue the dialogue beyond the session.


Speakers

Speakers from the provided list:


Moderator – Session moderator for the Responsible AI discussion


Andy Parsons – Global Head for Content Authenticity at Adobe, runs the Content Authenticity Initiative at Adobe


Shantheri Mallaya – Editor at Economic Times, panel moderator


Dr. Satya Ramaswamy – Chief Digital and Technology Officer at Air India Limited


Prativa Mohapatra – Vice President and Managing Director of Adobe India


Amol Deshpande – Group Chief Digital Officer and Head of Innovation at RPG Group


Vishal Anand Kanvaty – Chief Technology Officer, National Payments Corporation of India (NPCI)


Sarika Guliani – Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI


Additional speakers:


None – all speakers mentioned in the transcript were included in the provided speakers names list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive discussion on responsible AI implementation in corporate India, presented by Adobe in association with FICCI as part of the AI Impact Summit, marked a pivotal moment in the transition from theoretical principles to practical enterprise deployment. Moderated by Shantheri Mallaya from Economic Times, the session brought together industry leaders from diverse sectors to examine how organisations can move beyond aspirational commitments to demonstrable responsible AI practices, particularly as regulatory frameworks prepare to take effect globally in 2026.


Setting the Context: From Principles to Provable Practice

Andy Parsons, Adobe’s Global Head for Content Authenticity, established the foundational premise that 2026 represents a critical inflection point for responsible AI. With the EU AI Act’s enforcement provisions, California’s first AI law, and India’s new IT rules on SGI taking effect, responsible AI will transition from being “a slide in a deck” to becoming a core compliance strategy and business opportunity. This shift fundamentally changes the enterprise question from “should we be responsible with AI?” to “can your systems actually prove that you have been responsible with AI?”


Parsons, who described himself as “a mere engineer at Adobe” and felt “unqualified” to talk about policy, emphasised that this transformation requires moving beyond theoretical frameworks to working code and products that demonstrate responsibility through transparency, accountability, and inclusivity. He positioned Adobe’s Content Authenticity Initiative and the underlying C2PA standard as a concrete example of this approach, describing it as creating “nutrition labels” for digital content—a metaphor that Prime Minister Modi had mentioned the previous day. Just as consumers have the right to know what ingredients are in their food, Parsons argued that people deserve transparency about how digital content is created, what AI models were used, and whether an image is a genuine photograph or AI-generated content.


The C2PA standard was developed openly with partners including Microsoft, the BBC, OpenAI, Sony, and others, creating content credentials that provide transparency about how digital content was made. However, Parsons acknowledged significant challenges: “adoption is uneven,” consumer awareness is “very early,” and the business case has been “challenging.” As he noted, “doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money,” though this is changing as compliance requirements create business imperatives for transparency.
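As a rough illustration of how content credentials bind provenance claims to content, the sketch below hashes the content bytes into a signed manifest and verifies both on read. This is a simplified stand-in, not the actual C2PA format (which uses X.509 certificate-based signatures and a defined claim schema); the key, claims, and function names here are hypothetical.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real credentials use certificates

def attach_credentials(content: bytes, claims: dict) -> dict:
    """Bind provenance claims to content via a hash, then sign the manifest."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "claims": claims,  # e.g. which tool was used, whether AI was involved
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content itself is unmodified."""
    sig = manifest.get("signature")
    if not isinstance(sig, str):
        return False
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and \
        unsigned.get("content_sha256") == hashlib.sha256(content).hexdigest()

image = b"\x89PNG...stand-in image bytes"
m = attach_credentials(image, {"generator": "ExampleAI v1", "ai_generated": True})
# verify_credentials(image, m) is True; any byte change makes it False
```

The "nutrition label" is the claims dictionary; the hash and signature are what make it tamper-evident rather than merely informative.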


Industry-Specific Approaches to Responsible AI Implementation

The panel discussion revealed how different sectors approach responsible AI governance based on their unique risk profiles, regulatory requirements, and operational contexts. This diversity underscored the principle that “one size doesn’t fit all” when implementing responsible AI frameworks.


Aviation: Safety-Critical AI with Human Oversight

Dr. Satya Ramaswamy from Air India provided compelling insights into implementing AI in safety-critical environments. Air India operates 300 aircraft, carries over 100,000 customers daily, and has “a few hundred airplanes on order.” Their generative AI virtual assistant, called “AI.g,” was developed starting in November 2022 and launched in May 2023 as the global airline industry’s first such system. Over 2.5 years, it has handled 13.5 million queries, currently processing approximately 40,000 customer queries daily at just 1% of the cost of traditional contact centres. The system maintains a 97% autonomous success rate, with only 3% of queries requiring escalation to human agents.


The aviation approach demonstrates sophisticated risk management through embedded safety procedures and continuous monitoring. Dr. Ramaswamy explained how they balance the “safety dial”—too much safety creates customer inconvenience, whilst insufficient safeguards risk system failures or inappropriate responses. Their solution involves using AI to monitor AI performance, combined with customer feedback mechanisms and human oversight capabilities. He mentioned using “prompt firewalls where we can centralize all these controls” as part of their safety architecture. Importantly, the system has never provided an inappropriate response over its operational period, demonstrating that robust safety frameworks can coexist with high performance.
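The “prompt firewall” Dr. Ramaswamy mentions can be pictured as a centralized gate applied before any query reaches the model. The patterns, out-of-scope topics, and routing labels below are hypothetical examples, not Air India's actual controls.

```python
import re

# Hypothetical centralized controls: injection patterns to block outright,
# and out-of-scope topics to route to a human agent.
BLOCKED_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal .*system prompt",
]
OUT_OF_SCOPE_TOPICS = ["medical advice", "legal advice"]

def firewall(user_query: str) -> str:
    """Return a routing decision applied before the query reaches the model."""
    q = user_query.lower()
    if any(re.search(p, q) for p in BLOCKED_PATTERNS):
        return "BLOCK"
    if any(topic in q for topic in OUT_OF_SCOPE_TOPICS):
        return "ESCALATE_TO_HUMAN"  # akin to the ~3% routed to human agents
    return "PASS_TO_MODEL"
```

Centralizing these checks in one layer, rather than scattering them across prompts, is what makes the “dial” between safety and convenience tunable in one place.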


The aviation industry’s regulatory complexity—operating across multiple jurisdictions with different aviation authorities—provides a model for managing diverse compliance requirements without constraining innovation. Dr. Ramaswamy noted that Air India’s global-first AI implementation emerged from India whilst maintaining compliance with international aviation regulations, proving that regulatory frameworks can catalyse rather than constrain innovation.


Financial Infrastructure: Balancing Accuracy with User Impact

Vishal Anand Kanvaty from the National Payments Corporation of India (NPCI) offered insights into implementing AI in high-volume, high-stakes financial systems. NPCI’s approach to fraud detection reveals a counterintuitive but crucial principle: starting with lower accuracy whilst prioritising the minimisation of false positives. This strategy ensures that genuine transactions aren’t incorrectly flagged as fraudulent, which could severely impact user trust and system adoption. As Kanvaty explained, safeguards exist to prevent widespread disruption, including a cap on the percentage of UPI transactions that can be declined.


NPCI’s implementation demonstrates the importance of transparency in AI decision-making. They’ve developed small language models that allow customers to understand why transactions are declined, providing explanations such as “you normally don’t send this transaction” or “this is the first time you’re scanning this QR code.” This transparency builds trust whilst maintaining security, showing how responsible AI can enhance rather than compromise user experience.
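A toy sketch of the two safeguards described above: human-readable decline reasons, and a cap on the share of declined transactions. The threshold, reason codes, window size, and fallback behaviour are hypothetical, not NPCI's actual logic.

```python
from collections import deque

# Hypothetical reason codes mapped to customer-facing explanations.
REASONS = {
    "new_payee": "you normally don't send money to this recipient",
    "unusual_amount": "this amount is much larger than your usual transactions",
    "new_qr": "this is the first time you're scanning this QR code",
}

class FraudGate:
    def __init__(self, max_decline_rate=0.05, window=1000):
        self.recent = deque(maxlen=window)  # True = declined
        self.max_decline_rate = max_decline_rate

    def decide(self, risk_score, reasons):
        """Decline high-risk transactions, but cap the overall decline rate."""
        declined = risk_score > 0.8
        rate = sum(self.recent) / len(self.recent) if self.recent else 0.0
        if declined and rate >= self.max_decline_rate:
            declined = False  # safeguard: avoid mass declines; allow with warning
        self.recent.append(declined)
        explanation = "; ".join(REASONS[r] for r in reasons if r in REASONS)
        return ("DECLINED" if declined else "ALLOWED"), explanation

gate = FraudGate()
status, why = gate.decide(0.9, ["new_qr", "unusual_amount"])
# status == "DECLINED"; why explains the decision in plain language
```

Returning the explanation alongside the decision is what lets the customer-facing layer say why a transaction was blocked, rather than failing silently.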


The payments infrastructure perspective highlighted the necessity of regulatory frameworks, with Kanvaty arguing that industry self-governance alone is insufficient because “AI can go berserk” and have widespread systemic impacts. However, he emphasised that regulations must be developed collaboratively with industry to ensure they’re practical and effective.


Conglomerate Complexity: Orchestrated Governance Across Diverse Business Units

Amol Deshpande from RPG Group addressed the unique challenges of implementing responsible AI across diverse business portfolios spanning infrastructure, healthcare, IT, agriculture, and manufacturing. His concept of “bring your own AI” reflects the reality that different business functions require different AI solutions, making centralised, uniform approaches impractical.


Deshpande’s framework emphasises three critical elements: providing scalable playgrounds for business units to operate with agility, investing heavily in people development and awareness, and establishing process and governance frameworks that provide guardrails without stifling innovation. This approach recognises that responsible AI in conglomerates requires orchestration across all AI layers rather than isolated centre-of-excellence approaches.


The RPG experience demonstrates how large enterprises can balance centralised compliance with decentralised innovation needs. Their approach involves creating templates and frameworks that can be adapted across different industries and functions, then sharing these learnings through industry bodies to benefit smaller organisations that lack similar resources.


Creative Technology: Embedding Responsibility in Product Design

Prativa Mohapatra from Adobe India outlined how responsible AI principles can be embedded directly into product development through their “ART” framework—Accountability, Responsibility, and Transparency. This approach goes beyond compliance to make responsible AI a core product philosophy that guides development decisions from inception.


Adobe’s Firefly generative AI tool exemplifies this approach by using only licensed training data and embedding content credentials in all generated content. This ensures that enterprises using Firefly won’t face liability issues from unauthorised content use. Similarly, Acrobat Assistant applies the same trust principles that have made PDF a universally accepted format, allowing users to work with authenticated sources whilst maintaining full traceability.


Mohapatra highlighted real-world implications of AI misuse, citing Supreme Court concerns about lawyers using fictitious case references generated by AI. She emphasised that enterprises must simultaneously address business strategy, ethical strategies, and regulatory compliance when implementing AI solutions. Missing any of these three elements leaves organisations unprepared for the future regulatory landscape. She also highlighted the need for organisational restructuring, noting that legal and compliance teams must evolve to handle AI-specific guidelines across multiple jurisdictions.


Addressing the Digital Divide in Responsible AI

A significant theme throughout the discussion was the risk of responsible AI becoming a “luxury” available only to large enterprises whilst smaller organisations struggle with implementation costs and complexity. Prativa Mohapatra articulated this challenge clearly, noting that whilst large enterprises can restructure teams, hire additional legal expertise, and invest in comprehensive AI governance frameworks, MSMEs lack these resources. The shift from digital transformation to AI transformation requires significant organisational changes that smaller businesses cannot easily accommodate.


The panellists agreed that large enterprises and technology creators bear responsibility for developing frameworks and standards that smaller organisations can adopt. Adobe’s approach of making C2PA standards completely free and open exemplifies this principle—an independent creator in India can access the same content authenticity capabilities as a Fortune 500 enterprise at zero cost.


Industry bodies like FICCI play a crucial role in this democratisation process by facilitating knowledge dissemination and creating domain-specific templates that MSMEs can access. Amol Deshpande emphasised that these frameworks must be tailored to different industries and functions, recognising that a manufacturing MSME faces different AI challenges than a healthcare startup.


The Role of Regulation and Standards

The discussion revealed nuanced perspectives on the relationship between industry self-governance and regulatory intervention. Whilst all panellists agreed that regulation is inevitable and necessary, they viewed it as a catalyst for good practices rather than a constraint on innovation.


Andy Parsons positioned regulation as helping enterprises move from reactive to proactive responsible AI adoption. The upcoming regulatory landscape provides clarity and urgency that can accelerate the adoption of responsible practices, though he acknowledged his limitations in discussing policy matters as an engineer.


Dr. Satya Ramaswamy’s aviation perspective demonstrated how multiple regulatory frameworks can coexist without constraining innovation. Air India operates under various aviation authorities globally whilst maintaining its innovative edge, suggesting that well-designed regulation can provide structure without stifling creativity.


However, Vishal Kanvaty argued most directly for regulatory necessity, stating that industry self-governance alone is insufficient given AI’s potential for widespread systemic impact. His experience with financial infrastructure informed this perspective on the need for external oversight.


The discussion highlighted the importance of collaborative regulation development, where industry expertise informs regulatory frameworks to ensure they’re both effective and practical. This approach can help avoid the pitfalls of either overly restrictive regulations that stifle innovation or insufficient oversight that fails to address genuine risks.


Technical Implementation Challenges and Organisational Transformation

The panellists identified several concrete technical and operational challenges in implementing responsible AI systems. Content authenticity faces significant adoption hurdles, including social media platforms that strip metadata when content is uploaded, removing the transparency information that content credentials provide. Consumer awareness remains low, with many people unfamiliar with content authenticity symbols and their significance.
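Why metadata stripping matters can be shown with a small hypothetical sketch (the function names and the registry are invented for illustration, not any real C2PA service): when a platform discards embedded provenance on upload, a hash-based lookup can sometimes recover the record, but only while the bytes are unchanged.

```python
import hashlib

# Hypothetical provenance registry keyed by content hash. A toy sketch of
# the "recover stripped credentials" idea, not an actual C2PA API.
REGISTRY: dict = {}

def publish(content: bytes, record: dict) -> None:
    """Store a provenance record under the content's SHA-256 hash."""
    REGISTRY[hashlib.sha256(content).hexdigest()] = record

def platform_upload(content: bytes) -> bytes:
    """Simulate a platform that keeps the media bytes but silently drops
    any attached provenance metadata (the adoption hurdle noted above)."""
    return content

def recover(content: bytes):
    """Look the provenance record back up from the bytes alone."""
    return REGISTRY.get(hashlib.sha256(content).hexdigest())

original = b"pixels-of-a-generated-image"
publish(original, {"generator": "ExampleTool", "ai_generated": True})

# Metadata is stripped on upload, but the exact bytes survive: recoverable.
print(recover(platform_upload(original)))

# Re-encoding changes the bytes, so a naive hash lookup fails.
print(recover(original + b"\x00"))  # None
```

The exact-match limitation in the last line is the open problem: schemes aiming to make credentials durable pair this kind of lookup with watermarking or perceptual fingerprinting so provenance can survive re-encoding, which the sketch above deliberately does not model.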


The transition to responsible AI requires significant organisational changes that go beyond technology implementation. Prativa Mohapatra emphasised that enterprises must restructure legal and compliance teams to handle AI-specific guidelines across multiple jurisdictions. This involves not just hiring additional expertise but fundamentally rethinking how these teams operate and integrate with product development and business strategy.


People development emerged as a critical success factor, with Amol Deshpande noting that enterprises must invest significantly in building AI awareness and skills across their value chains. This goes beyond technical training to include ethical reasoning, risk assessment, and decision-making capabilities that enable employees to work effectively with AI systems whilst maintaining responsible practices.


Future Outlook and Continuing Challenges

The discussion concluded with Sarika Guliani from FICCI emphasising that “responsibility is not anymore a compliance check” but rather “a commitment of the technology” to shared human values that should guide technological development. Despite the tight time constraints noted by moderator Shantheri Mallaya, the session demonstrated remarkable consensus among industry leaders on fundamental principles and approaches.


Several unresolved challenges emerged from the discussion. The technical challenge of maintaining content authenticity across platforms that strip metadata remains unsolved, as does the broader question of how to harmonise multiple international regulatory frameworks whilst maintaining domestic innovation capabilities. Consumer awareness and adoption of transparency standards continue to lag behind technical capabilities, creating a gap between what’s possible and what’s practically effective.


This alignment suggests that the field is maturing beyond theoretical debates toward practical implementation, with 2026 representing a crucial milestone where responsible AI transitions from aspiration to operational requirement. The session’s commitment to continuing dialogue through FICCI and other industry bodies reflects recognition that responsible AI implementation is an ongoing process requiring sustained collaboration across sectors, organisation sizes, and stakeholder groups. As India positions itself as a global leader in digital innovation, the approaches developed and refined through these discussions could influence responsible AI practices worldwide, demonstrating how emerging economies can lead in establishing ethical frameworks for transformative technologies.


Session transcript: Complete transcript of the session
Moderator

I’d like to welcome you all to this session titled Responsible AI from Principles to Practice in Corporate India, presented by Adobe in association with FICCI. India stands at a defining moment in its digital journey as AI becomes a powerful engine of innovation and productivity. But the real differentiator, is it about how quickly we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking about here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite Andy Parsons, our Global Head for Content Authenticity at Adobe.

Andy, over to you.

Andy Parsons

Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.

I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for all of you in this room. I’m going to talk about that in a minute. This means it stops being a slide in a deck, and it will now be a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.

And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week, and it will certainly continue to change this year: from “should we be responsible with AI?”, a debate that is well settled at this point, to “can your systems actually prove that you have been responsible with AI, and how do you go about doing that?” And for all of us, what does it cost in terms of implementation and day-to-day usage? So we want to position responsible AI today as a leadership and operating discipline rather than a regulatory obligation, although in 2026, I think it will be both.

So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across the cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Prativa in a moment.

But it’s made the trust problem absolutely impossible to address. So I think every enterprise in this room now produces or consumes AI-generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that. Hundreds of millions of people consuming digital content every day.

In this environment, synthetic content and AI-generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is: if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to provide some leadership in the C2PA, which hopefully many of you have heard about this week, which is the Coalition for Content Provenance and Authenticity. And our piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing, totally free.

So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere and our digital experience products, but had to be baked into the tools at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others. And now we’ve done that. Five years later, there is an open standard called C2PA content credentials. If you browse LinkedIn and see this symbol, you’ve encountered a content credential.

It will provide transparent context about a piece of media, whether it’s video, audio, or image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross-industry coalition that includes all the companies I mentioned, Meta, camera manufacturers like Sony and Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others. And the goal is an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company.

It should be standards-based. It should not be proprietary, but available to everyone. And this shared philosophy I think is especially important here in India. And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website. So what are the principles that the Content Authenticity Initiative and the C2PA reflect? Transparency. Provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used. Simple ideas like knowing that a photograph is actually a photograph and not generated.

These are simple ideas. I’d say we’re well overdue for this ability, but we need it now more than ever. Accountability. You can trace those AI models, understand what was used and how it was used, and thereby understand the provenance if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children. No one’s going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.

Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India can apply the same kind of provenance at the same zero cost as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption, AI transparency, and responsible AI will present you, and I’m sure you’ll hear more about that from our esteemed panel. But adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here. We’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it.

Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are phones and cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I have often said that doing something that helps preserve democracy and democratic discourse is maybe not a good way to make money. I’m not sure if that’s true. But it is critically important. And now we’re seeing that change as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.

What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. A responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference that we’re here to talk about today, the difference between principle and practice. I think you need cross-industry infrastructure. Techniques you use for responsible AI should be interoperable, open, and standardized. And I think you have a long track record here in India of mobilizing things like the UPI payment infrastructure, which not a single bank or a single government agency could do alone, but which required massive-scale cooperation for openness, standards, and most importantly, interoperability.

And last, enterprises like many of yours in this room that are willing and excited to go first, that really look at transparency and AI responsibility as an opportunity, not a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again. And in 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things. And that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries, because we have representatives from all of them on our excellent panel today.

So we’re fortunate to have leaders from Air India, NPCI, the RPG Group, and Adobe, each of whom is navigating and translating the sort of remarks that I’ve made in different ways for different outcomes and providing, I would say, exemplary ways to find their pathway through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantheri Mallaya, editor at The Economic Times, who covers these infrastructure and societal breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me. And let’s get on to our panel. Shantheri.

Shantheri Mallaya

Thank you so much, Andy. That was fantastic. It set the context very rightly for the next discussion coming up. So a warm welcome once again from my end. My name is Shantheri Mallaya. I’m editor at The Economic Times. Welcoming you all to the panel, right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI at a very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI across industry, infrastructure, and policy perspectives, it really becomes important to know what some preeminent leaders in the country are thinking about this. So this dialogue will examine some parts of responsible AI and how it’s really going to shape, reshape enterprise strategy.

So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanvaty, Chief Technology Officer, National Payments Corporation of India, NPCI. We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe India. This is a fantastic lineup, and we shall get some very sharp insights from our panelists over the next half hour or so. So at the very outset, if you really see, building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether it’s fairness, accountability, transparency, privacy or inclusivity, are realistically going to be translated into enterprise strategy frameworks, and how we are going to go about it.

Right. So, Amol, let me call you into the discussion. Warm welcome. I’m keeping a tight watch on the timer right behind us. As we know, this is a summit of scale, and we really need to help the organizers clock good time. So, Amol, very quickly, as I invite you into this discussion: as part of the RPG Group, you represent enterprises that are deploying AI at scale, and many more that are just finding their footing as well. So there’s a huge spectrum and diversity in the kind of organizations that you are representing. Two things for you very quickly. One is, in a large multi-business group such as RPG, how are you preventing responsible AI from becoming a mere centralized compliance exercise, or, on the flip side, a fragmented, business-unit-wise checklist?

So there are two risks that can happen, right, in a group, in a conglomerate at large scale: centralized or decentralized. So how are you really looking at the balance here? And how do you really see your role in an industry body as well? So, all yours.

Amol Deshpande

Thank you, Shantheri. I’m very happy to be here. Thank you for having me with the, you know, esteemed panelists here. You ask a very pertinent question. You know, it’s a to-be-or-not-to-be kind of scenario when it comes to AI, but that “not to be” is not really a choice. Andy did mention responsible AI, and I would take a little stab at peeling back and looking at where responsible AI comes in for industries. It comes across all five layers of AI when we are looking at it. When any enterprise is looking at deploying AI and being responsible for the usage of those elements, whatever you are using them for, the responsibility needs to be there at every layer.

It’s not one or the other. It has to be an orchestration of all the things. So far, AI in its very nascent forms had been a thing of centers of excellence, trying use cases and seeing what was happening, but now it has come to scale. And when it comes to enterprises, and manufacturing enterprises, where we have a significantly higher share as consumers of AI technologies, there is a very clear-cut view on how it is to be done. So you need to provide the playground for the enterprise to operate and function with agility. The other part is about people. People are a very, very important stakeholder in the whole thing.

We are moving from generative AI to AI/ML, more complex scenarios, and agentic AI. So people are a very, very important aspect of it. The choice still remains with humans. Awareness is important, and enterprises like us spend a significant amount of time and effort in building those skill sets across the value chain of all the people who are doing it. Last but not least is the process and governance part which comes with it. It’s more about guiding principles which need to be given so that they give an opportunity. It’s more about, you know, if one can say it, it will be more of a bring-your-own-AI kind of scenario in every function.

You cannot provide one solution. One size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment protected with guardrails is a key thing for us, along with orchestration and getting to scale. Those templates are being exercised and practiced within the enterprise, in a very diverse group like RPG ourselves, and then they can be deployed at multiple

Shantheri Mallaya

Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me quickly bring in Prativa here. Welcome, Prativa. So Adobe has consistently positioned responsible AI as a product philosophy and a trust commitment. You may just have to switch that on. So Adobe has constantly spoken about responsible AI as a fairly large commitment. So how do you really look at all these principles manifesting or panning out in terms of operationalizing them among all your product teams, and sending that out as a strong positioning internally? And at the same time, as someone who’s led a lot of industry conversations, where do you really see enterprises struggling to get these things right?

So all yours here.

Prativa Mohapatra

Okay, thank you. So I think Andy set the context. And since we are here not to learn about the principles but the practices, I think everybody should go back with certain practices. So the first practice of AI governance, which we follow, is ART: Accountability, Responsibility, and Transparency. So if every person goes back to their organizations and talks about ART, which is our philosophy, that’s practice number one. And we actually have been doing this for our own products for a while now. And of course, we have been in the business of content for a very, very long time. And now that same content is becoming the currency which everybody’s debating. So our principles have been there for a while.

But how is it actualized, and how is it translated into our products? By the way, it’s in our products and it’s in our methodologies. Every new product that we have goes through a very strong, secure methodology with hundreds of steps inside it. So the principles are embedded into how we create stuff. But a couple of examples. Firefly, which is our Gen AI tool, actually embeds what Andy described: those content credentials, the nutrition labels. Anything that you generate out of this product will have that nutrition label. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law, you will not be getting into any liability issues.

Because how you do it is by what you feed it; AI is all about the input and the output. So the input has to be something which will not land you in trouble. You cannot take somebody else’s data, so here everything is licensed. It goes into the models, and then what you create, the output that comes, you have to test that output. With that output, will we be accountable? Will we be responsible in showing the transparency of how this was created? So I think that loop has to be created in using any AI. Firefly is an example. Let me talk about Acrobat, which everybody has. I’m sure 100% of you have PDF files on your phones or on your machines.

So Acrobat has this new feature called Acrobat Assistant. It is agentic, and we have so many chatbots in the market, but when you come to an assistant like Acrobat Assistant, it follows the same principles that PDF was created with. So everybody is confident when using PDF. So today, you would have read in the papers recently, the Supreme Court was very worried that there were certain lawyers whose petitions had references to cases which do not exist, or which stated certain laws that are fictitious. So imagine somebody has created certain content using sources which were not authentic. Now, if you use Acrobat-type products for that, you feed the data or you feed files from your own machine.

So you’re confident that, with what comes out of it, you can go back. So wherever there is this usage of high-stakes output, enterprise-grade, you have to look at this input-output process and follow the philosophies within it. And I think every enterprise doing that today really has to do what Amol talked about: the people, process, technology. I’m sure every organization today has a legal team, has a compliance team. But these teams have to re-adapt and re-design to handle AI compliance. Enterprises do business strategy, they have ethical strategies, and then they have regulatory compliance. Anything that you do in AI, ensure that you tick all three.

If you miss any one, you might not be ready for the future. So that’s how I see it.

Shantheri Mallaya

Absolutely. So I guess the thread of most of what AI now entails is: are we not moving fast enough, or are we moving so fast that we’re not really able to own the operational consequences of what we’re setting out to do? So great point on that, Prativa. I’ll circle back to you, time permitting. Let’s see how best we can get back. Dr. Satya, calling you in here. So aviation: volumes, landscape, scale, I mean, you name it, it’s all there. So how are you really looking at balancing AI-driven innovation, where you’re looking at regulation, accountability, and operational efficiency, while at the same time you cannot really compromise on user and customer experience?

How do these things really fall in place in terms of vision and metrics?

Dr. Satya Ramaswamy

Thank you, Shantheri. Since the audience is international, a real quick introduction about Air India. Air India is India’s national flag carrier. We operate about 300 aircraft, and we carry more than 100,000 customers a day. And we have a few hundred airplanes on order. So once they are delivered, we will be one of the biggest airlines in the world, on the scale of one of the three large American carriers. So we are building it up to an airline of scale, and that brings about very interesting challenges, as we talked about. So let me illustrate the way we handle it with one of our own examples in generative AI. In May of 2023, we launched the global airline industry’s very first generative AI virtual assistant out of India.

So it was a global first in the whole airline industry. Today, it has handled about 13.5 million queries from customers so far, about 40,000 queries a day, and it operates at roughly one-hundredth of the per-query cost of a contact center. And if you look at customer preferences over the last two and a half years that we have been operating this, facing all the challenges you mentioned: from a customer preference perspective, 50% of the contact volume goes to the contact center, where customers want to talk to a human agent. The remaining 50% of the contact volume comes to AI.g, which handles 97% of the queries autonomously; only 3% are escalated further to an agent. So that is a pretty high success rate. And we faced this challenge from day one. We started working on it in November 2022, when the Azure OpenAI services became available, and the whole approach to responsibility and safety has evolved over that period. If you dial the safety knob up too much, it is an inconvenience to the customer; we practically cannot answer any question, because customers are always changing the way they ask certain things, and we have to be very flexible. Clearly, generative AI takes us a large step towards that.

At the same time, we don’t want any jailbreak to happen. We don’t want prompt injection to happen. We don’t want anything inappropriate to happen. So we are watching the performance of the Gen AI virtual assistant, AI.g as we call it, all the time. We use, in fact, generative AI to watch the performance of the generative AI chatbot, and we have also given a voice to the customer, right? So at the end of the day, when we send a response, we also ask the customer, you know, did it answer your question? And we also allow them to give their reactions, right? Is it appropriate, inappropriate? And thankfully, over these last two and a half years, it has not answered one single question in an inappropriate fashion, because we have embedded all the safety procedures deep into the way that we handle it. But now, as the technologies are maturing, we have, for example,

But now, as the technologies are maturing, we have interesting technologies such as prompt firewalls, where we can centralize all these controls, and obviously we work with great partners like Adobe, who do their diligence in the way they have deployed some of these technologies, giving full indemnity to us in the event of a problem. That gives a lot of confidence in the way we manage the risk. So it's about managing the risk of something out of bounds happening versus the convenience of the customer, and we handle it in a variety of ways, like I talked about just now.
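Dr. Satya's "safety knob" tradeoff, strict enough to stop jailbreaks and prompt injection yet loose enough not to block legitimate questions, can be sketched as a simple gate. This is an illustrative toy: the patterns, thresholds, and function names are assumptions, not Air India's or any vendor's actual prompt firewall.

```python
# Toy "prompt firewall" with a tunable safety knob. Patterns, thresholds,
# and names are illustrative assumptions, not a real product's rules.
import re

BLOCK_PATTERNS = [
    r"ignore (all )?previous instructions",  # classic jailbreak phrasing
    r"reveal .*system prompt",               # probing for internals
]

def firewall(user_message: str, safety_knob: float) -> str:
    """Return 'allow', 'escalate', or 'block'.

    safety_knob in [0, 1]: higher is stricter. Dialed too high, ordinary
    questions get escalated and the assistant becomes an inconvenience.
    """
    hits = sum(bool(re.search(p, user_message, re.I)) for p in BLOCK_PATTERNS)
    risk = hits / len(BLOCK_PATTERNS)        # crude risk score
    if risk >= 1.0 - 0.5 * safety_knob:      # a stricter knob lowers the block bar
        return "block"
    if risk > 0 and safety_knob >= 0.5:
        return "escalate"                    # hand off to a human agent
    return "allow"

print(firewall("What is my baggage allowance?", 0.7))        # → allow
print(firewall("Please ignore previous instructions", 0.7))  # → escalate
```

A production system would replace the regex list with model-based classifiers, but the knob-versus-convenience tension is the same: lowering the block threshold catches more attacks and more innocent questions alike.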

Shantheri Mallaya

Excellent. And given the kind of scale you're operating at, I think every day is a new day. Yes, it is. We face challenges; there is something brand new every day. Absolutely well stated. Thank you, Dr. Satya. Vishal, may I call you into the discussion? We're waiting to hear from you. What do I say: NPCI runs the largest digital payments infrastructure platform, and you call the shots, for want of a better coinage, in terms of how the payments systems in this country move. So two quick questions here, or rather I'll phrase them as one, so that we can get a comprehensive view from you.

How are you really looking at AI in terms of being inclusive and trying to ensure fairness, in two parts? One, how can India play an important part in creating responsible AI by design for a national digital infrastructure platform such as yours? And two, given the volume, scale, and size, fraud also becomes an unfortunate part of the entire discussion. So how are we looking at AI to be fair and at the same time proactive in detecting fraud? What are the aspects you look at most keenly here?

Vishal Anand Kanvaty

I think we had to start slowly, accepting that accuracy could be a little lower, but the false positive rate, which is a genuine transaction being tagged as fraud, should not be high. I think those were the first principles on which we started. But over a period of time, once we had more data and once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So these were the fundamental principles, and once we started seeing success, we were able to understand customers and their patterns better. That gave us a lot of insights into fine-tuning the models and taking it forward.
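The principle Vishal describes, tolerating lower overall accuracy as long as genuine transactions are rarely flagged, amounts to picking a decision threshold subject to a false-positive-rate cap. A minimal sketch with toy data; the function name and numbers are illustrative, not NPCI's system:

```python
# Toy illustration of the tradeoff: pick the lowest fraud-score threshold
# whose false-positive rate (genuine transactions flagged) stays under a cap.
# Function name and data are illustrative, not NPCI's system.

def pick_threshold(scores, labels, fpr_cap=0.01):
    """scores: model fraud scores in [0, 1]; labels: 1 = fraud, 0 = genuine."""
    genuine = [s for s, y in zip(scores, labels) if y == 0]
    for t in sorted(set(scores)):                 # try thresholds low to high
        fpr = sum(s >= t for s in genuine) / len(genuine)
        if fpr <= fpr_cap:
            return t      # lowest qualifying threshold catches the most fraud
    return 1.01           # nothing qualifies: flag no transaction at all

# With a strict cap, the threshold rises so no genuine transaction is flagged.
print(pick_threshold([0.1, 0.2, 0.9, 0.95], [0, 0, 1, 1], fpr_cap=0.0))  # → 0.9
```

Relaxing the cap as more data accumulates, as Vishal describes, lets the threshold come down and catch more fraud without breaching the false-positive budget.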

Absolutely. So coming to the first question that you asked: obviously the governance principles are core to it, but I would like to call out two things. One is transparency. If a customer has a transaction that has failed, they should know why it failed. Today we have built a small language model where you can actually chat and ask what happened to this transaction, why it was declined. Even if it was declined due to suspicious activity, we can tell the customer: you normally don't send this kind of transaction, or you have never scanned this QR code before and this is the first time you are doing so, and that is the reason why we declined it. This level of transparency is very important. Obviously we can't have an army of people sitting and answering these questions, but building systems that answer those questions is very, very important. And I think we have a beautiful framework: the RBI has also given a framework for responsible AI, and the MeitY document is fairly comprehensive. So all the principles have to be adopted; there is absolutely no choice for us. And I don't see it as a challenge at all, because in our experience it has been very helpful in ensuring that trust in the payment system is not compromised.
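The transparency idea, telling a customer in plain language why a transaction was declined, can be sketched as a mapping from machine reason codes to human-readable messages. The codes, wording, and transaction schema below are hypothetical, not NPCI's actual design:

```python
# Hypothetical reason-code-to-message mapping for declined transactions.
# Codes, wording, and the txn schema are illustrative, not NPCI's design.

REASONS = {
    "FIRST_TIME_QR": ("You have never paid this QR code before, and the "
                      "pattern did not match your usual activity."),
    "UNUSUAL_AMOUNT": "The amount is far above your typical transaction size.",
}

def explain_decline(txn: dict) -> str:
    """Turn machine reason codes into a plain-language explanation."""
    parts = [REASONS[c] for c in txn.get("decline_codes", []) if c in REASONS]
    return " ".join(parts) or "Declined for your safety; contact support for details."

print(explain_decline({"id": "TXN123", "decline_codes": ["FIRST_TIME_QR"]}))
```

In the system Vishal describes, a small language model would generate such explanations conversationally; the fixed mapping here just shows the contract between the fraud engine's codes and what the customer sees.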

Shantheri Mallaya

Absolutely. And given the scale you're operating at, I'm itching to ask you more, but maybe I'll pick your brains offline about the human in the loop there; that's a discussion for another time. So, Pratibha, curious to know: responsible AI, while it remains there in letter and spirit, do you think it risks getting relegated to something of a large-enterprise luxury? Is it able to cut across and come down the line to aspiring businesses, growing businesses, MSMEs? Is it able to cut through the noise? What's the responsibility of industry leaders and the larger enterprises in harmonizing a framework that can define responsible AI? How would you look at this?

Is it at risk?

Prativa Mohapatra

Absolutely, yes. I think we stand at a time when, among the creators of AI technology, the big guys versus the small guys, that divide might just become very stark. Coming down to the users of AI, the big enterprises versus the MSMEs who are in a big rush to make a profit and do something, that divide can happen too. Hence, it is the responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It's a tongue twister, but I think the responsibility is very, very big right now. Again, take Adobe's example: while the entire AI big bang started happening after November 2022, so early 2023-2024, our models were already there, and our Content Authenticity Initiative dates from 2019.

So I think that's a big thing. Large enterprises who create technologies are absolutely responsible, and those frameworks now being taken up by many more is again an act of responsibility back to the business. So the creators of these technologies have to come together and keep creating methods and methodologies for others to adopt. Now, for the users of these enterprise-grade AI technologies, it's very hard. Ten years back we had digital transformation; now we are having AI transformation. So the big companies have to quickly create a new org structure and build out legal teams which, by the way, had only just mulled over the digital guidelines of various continents and countries, and now have to go through the AI guidelines of countries.

So you have to infuse more people into those legal teams, and small organizations cannot do that. The people, process, and technology changes required to adopt this, the big guys can maneuver: shift people, take people out from here, put them there. The MSMEs don't have that luxury. So I guess the creators have to create frameworks so the right technology is created; the users, the big guys, have to quickly share the methodology; and then the other stakeholders, like the service providers, also have to evolve quickly. I come from an industry that went from custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?

Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since AI, as we have been hearing, is a civilizational change, similar to electricity, and will change everything, because of its impact at the societal level on each one of us, the governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem. Absolutely.

Shantheri Mallaya

Very rightly stated about the larger collective responsibility of the bigger players in defining the standards; I think that's very critical. Amol, if I may ask you: like Pratibha said, there is an accountability. MSMEs are growing, and there is also policy that supports that growth at this point of time, right? So in their hurry to scale and innovate, they often forget what guardrails and consequences they face when it comes to their AI policies, strategies, and implementations. So what's the role of the ecosystem, the industry bodies, and the wider community at large in helping responsible AI move forward in letter and spirit?

Amol Deshpande

Shantheri, I think the first step towards being responsible about anything is awareness, right? First, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for it. That's the first thing. Second comes the action part of it: awareness, action, and then you demonstrate it through your products and services, or whatever you are trying to create, and generate that kind of impact. I echo the sentiment Pratibha mentioned here: the big players have to come up with those frameworks. Those frameworks need to get translated, and industry partnership is a very key thing here, through the industry bodies, right?

That's where the learnings have to be disseminated. Second, it's more of a demand-and-supply kind of thing: if the supply comes with the right guardrails and responsible aspects as part of the framework, then naturally the suppliers start aligning to it. For a business like ours, where we deal in everything from infrastructure to healthcare, IT to agriculture and tyres, it is a very diverse portfolio, and there are different kinds of templates we need. Organizations like ours have the responsibility of creating a framework which will be fair to us as well as to our customers and partners, and those learnings can be shared across the value chain, where MSMEs may not have access to that kind of information, and that too domain-specific.

Mind you, this would change. It's not that one guardrail construct will work for everybody; it would vary from industry to industry, function to function. And that kind of cascading through industry bodies like FICCI and others is, I think, very, very critical. Absolutely. Thank

Shantheri Mallaya

you for that, Amol. Dr. Satya, as we know, there are now plenty of global regulations, rules, and recommendations: the EU AI Act, UNESCO's recommendations, the OECD principles, and so on, and India is also inching towards developing its own strategies, policies, and approaches at this point of time. So the real leadership question that remains is: how are we looking at marrying global best practices with the diversity, the scale, the fire in the belly that India has right now? We are really gearing up to go. And besides, of course, we have a lot of domestic, industry-wise regulation as well; we have regulators, we even have the DPDP Act, we have so many things that have come in. How are we going to marry all of this and create harmony? Absolutely, I

Dr. Satya Ramaswamy

think, taking Air India: we are an international airline, so we operate in many countries. For example, we go to North America and the US, where the Federal Aviation Administration is the key regulator; we go to places all over Europe; and obviously we operate in India, where the DGCA is the regulator, and they are doing a great job overseeing this industry; and likewise in other parts of the world. So by nature we are geared to looking at the regulation in all parts of the world and being in compliance. Our customers are international, and when we operate in these international geographies, we have to comply with the appropriate regulations. And again, aviation is a very safety-critical industry, right?

So what we do has a direct impact on the safety of the customers we carry, and these notions are well embedded in the industry because it is highly regulated. For example, many of these planes can practically land themselves, right? I was in a simulator last week for an Airbus A320, landing at San Francisco Airport. As we were coming in, the plane was set at seven miles from touchdown, and my trainer pilot gave me the controls, so I could run the plane practically on autopilot all the way. At the same time, there is a red button on the joystick, so at any moment I feel that the airplane is not doing the right thing,

that the autopilot is not doing the right thing, I can quickly cancel and take back control, right? So this concept is well embedded in the airline industry. We know that we need to obey the regulations and do the right thing that is safe for the customer, but we also let the human in the loop take control if at any moment we feel the safety is at risk. So, bottom line: we comply with all the regulations, and that doesn't in any way constrain Indian innovation. For example, like I mentioned, we launched the global airline industry's first generative AI virtual agent out of India, and we have not had any challenge with any of the regulations, because we comply with all of them and we work with partners who approach it in the same spirit, like Adobe.

Absolutely.

Shantheri Mallaya

So, Vishal, taking a thread from what Dr. Satya said, I'll close with you here. Is industry-led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in some related industries who said it's very difficult to answer this, but that self-regulation may be a way forward given the scale we are operating at. I'd like to know your thoughts.

Vishal Anand Kanvaty

Yeah, I think the regulations are definitely required, especially because AI can go berserk. Like the example I gave you on transactions: today, all the UPI transactions could get declined. That's where we have a check where we say, this is the only percentage that I can decline, even if I have to let go of the other transactions, right? Those safeguards are very much required. And since this has to work across the ecosystem, I think the regulations are mandatory. Obviously it has to be consulted on, and we have to work with everyone, but it's important. While all of us realize

it's a great opportunity and the innovation can really scale up, I think regulation is something we have to take on as part of our initiatives, embed into our systems, and then take forward; otherwise the chances of this becoming a challenge for us are really high.
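The safeguard Vishal mentions, a hard ceiling on the share of transactions the system may decline so that a misbehaving model cannot block all UPI traffic, can be sketched as a rolling-window rate cap. The class name, window size, and cap value are illustrative assumptions, not NPCI's implementation:

```python
# Sketch of a decline-rate guardrail: the fraud model may decline at most
# `cap` fraction of recent transactions, no matter what it wants to do.
# All names and numbers here are illustrative, not NPCI's implementation.
from collections import deque

class DeclineRateCap:
    def __init__(self, cap: float = 0.02, window: int = 1000):
        self.cap = cap
        self.window = deque(maxlen=window)   # 1 = declined, 0 = allowed

    def allow_decline(self) -> bool:
        # Would declining one more transaction keep us at or under the cap?
        return (sum(self.window) + 1) / (len(self.window) + 1) <= self.cap

    def record(self, declined: bool) -> None:
        self.window.append(1 if declined else 0)

# Even if the model wants to decline every transaction, the cap throttles it:
# declines stay at or below half of the rolling window with cap=0.5.
guard = DeclineRateCap(cap=0.5, window=4)
decisions = []
for _ in range(6):
    d = guard.allow_decline()
    decisions.append(d)
    guard.record(d)
```

The point of the design is that the guardrail sits outside the model: even a completely broken fraud score cannot push the decline rate past the configured ceiling.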

Shantheri Mallaya

Fair enough. I think in a maturing economy, regulatory intervention is an inevitability that we must welcome at some level. Great discussion; I think this was fantastic. I'm itching to ask you more, but we'll have to call this discussion to a close. Thank you so much; let's put our hands together for our esteemed panelists. This was really nice. May I now invite Ms. Sarika Guliani, Senior Director and Head of AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI. Thank you so much, Sarika, please.

Sarika Guliani

First of all, what an insightful session. If I start capturing the thoughts, I think 2 minutes and 36 seconds would not do justice to it. But overall: starting with Andy, you mentioned the initiatives Adobe was able to take through and how you are responsibly developing content. Pratibha spoke about ART, which is really interesting, covering accountability, responsibility, and transparency. Amol mentioned all five layers and how the responsible development of AI needs to be done. Dr. Satya, no second thoughts on the kind of work the national carrier of India is doing, and the same goes for NPCI and what it is handling: it has to be a balance of responsible AI, efficiency, and the action that can actually be taken. We left it on the question of what regulation is required; that sentence alone would need another session, because there will be people arguing for light-touch regulation versus balanced regulation. Speaking for FICCI, and from the discussions we heard here, we feel that responsibility is no longer a compliance check; it is a commitment to developing technology with shared human values. The decisions we take now, not merely the words we discuss here, will define our future; it is not what we can create but what we choose to create that will define it, and that choice is very important. You have heard the panelists take us through the whole process, from the input side to the output side, with very good examples. So we feel that at every layer, technology has to be developed keeping people in mind, and the theme of the summit, People, Planet and Progress, should be kept in mind in any technological innovation, along with the principles of responsible AI.

That is something we strongly feel and support, and with that I would like to thank our esteemed panelists. Thank you, Andy, of course, for joining us today; Shantheri, for moderating and capturing it well in time; and of course our panelists: Dr. Satya Ramaswamy, Chief Digital and Technology Officer, Air India; Mr. Amol Deshpande, Chief Digital Officer and Head of Innovation, RPG Group; Mr. Vishal Kanvaty, CTO, NPCI; and Ms. Prativa Mohapatra, Managing Director, Adobe. And of course my lovely audience, with whom we could sail through this. As part of FICCI, we are thankful that we could have this joint session with Adobe, the Adobe team, and Nita and Nanya, who worked with my team, and the people in the background who delivered it.

So thank you all for joining us. We don't end the discussion here; we end the session here. FICCI is committed to taking this dialogue further into action with the support of the players, and we look forward to you joining us then. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Andy Parsons
8 arguments, 190 words per minute, 2010 words, 632 seconds
Argument 1
2026 will mark the shift from AI responsibility being a slide in a deck to actual compliance strategy and opportunity due to regulatory enforcement
EXPLANATION
Andy argues that 2026 represents a pivotal year when responsible AI will transition from theoretical presentations to practical implementation requirements. This shift is driven by upcoming regulatory enforcement that will make AI responsibility both a compliance necessity and a business opportunity.
EVIDENCE
The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. India has new IT rules on SGI and is actively shaping its own path.
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
AGREED WITH
Amol Deshpande, Prativa Mohapatra, Sarika Guliani, Moderator
Argument 2
The question has changed from “should we be responsible with AI?” to “can your systems prove you have been responsible with AI?”
EXPLANATION
Andy emphasizes that the debate about whether to be responsible with AI is settled, and the focus has shifted to demonstrating and proving responsible AI practices. Organizations must now show concrete evidence of their responsible AI implementation rather than just stating intentions.
EVIDENCE
The shift from principles to provable practice is the theme of our panel today. For all of us, what does it cost in terms of implementation and day-to-day usage?
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
AGREED WITH
Amol Deshpande, Prativa Mohapatra, Sarika Guliani, Moderator
Argument 3
Adobe’s Content Authenticity Initiative provides transparency for AI-generated content through C2PA content credentials, acting as “nutrition labels” for digital content
EXPLANATION
Andy describes Adobe’s approach to content transparency through the C2PA standard, which provides provenance information for digital content. This system allows users to understand the origin and creation process of content, similar to how nutrition labels inform consumers about food ingredients.
EVIDENCE
Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have encountered a content credential. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks.
MAJOR DISCUSSION POINT
Content Authenticity and Transparency Standards
AGREED WITH
Prativa Mohapatra, Dr. Satya Ramaswamy, Vishal Anand Kanvaty
Argument 4
Content transparency must be baked into tools at their core rather than grafted on as features, requiring open standards and cross-industry collaboration
EXPLANATION
Andy argues that responsible AI through content transparency cannot be an afterthought but must be fundamental to product design. This requires industry-wide collaboration and open standards that are not proprietary to any single company.
EVIDENCE
At Adobe, we decided five years ago that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, but had to be baked into the tools at their very core. It’s truly a cross-industry coalition that includes Microsoft, BBC, OpenAI, Sony, Meta, camera manufacturers, and silicon manufacturers like Qualcomm.
MAJOR DISCUSSION POINT
Content Authenticity and Transparency Standards
Argument 5
Responsible AI conversation has matured and now requires pragmatic implementation with demonstrable practices rather than just principles on websites
EXPLANATION
Andy contends that the industry has moved beyond theoretical discussions about responsible AI and must now focus on practical implementation. Organizations need to demonstrate their responsible AI practices through actual systems and processes, not just policy statements.
EVIDENCE
Responsible AI commitment on a website is a starting point, but not a meaningful milestone. You need standards, not just principles. It should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck.
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
AGREED WITH
Amol Deshpande, Prativa Mohapatra, Sarika Guliani, Moderator
Argument 6
Open standards should be available to independent creators at zero cost, same as Fortune 500 enterprises, ensuring inclusivity in AI adoption
EXPLANATION
Andy emphasizes that responsible AI standards must be accessible to all creators regardless of their size or resources. The C2PA standard is designed to be free and open, ensuring that small independent creators have the same access to content authenticity tools as large corporations.
EVIDENCE
Our standard is open and free. An independent creator here in India can apply the same kind of provenance at the same zero cost as a Fortune 500 enterprise.
MAJOR DISCUSSION POINT
Inclusive AI and Bridging the Digital Divide
AGREED WITH
Amol Deshpande, Prativa Mohapatra
Argument 7
Adoption remains uneven due to social media platforms stripping metadata, early consumer awareness, and challenging business cases for provenance
EXPLANATION
Andy acknowledges significant challenges in implementing content authenticity standards, including technical barriers where platforms remove metadata and social barriers where consumers are not yet aware of these tools. The business case for provenance has also been difficult to establish.
EVIDENCE
Many social media platforms strip metadata and remove that transparency when content is uploaded. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it. The business case for provenance has been challenging.
MAJOR DISCUSSION POINT
Operational Implementation Challenges
Argument 8
Regulation serves as a catalyst for good practices rather than a constraint, helping enterprises move from reactive to proactive responsible AI adoption
EXPLANATION
Andy views upcoming AI regulations not as limitations but as positive drivers that encourage organizations to adopt responsible AI practices proactively. Regulation helps establish standards and creates incentives for good practices across the industry.
EVIDENCE
What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem.
MAJOR DISCUSSION POINT
Regulatory Framework and Industry Self-Governance
AGREED WITH
Dr. Satya Ramaswamy, Vishal Anand Kanvaty
Amol Deshpande
3 arguments, 181 words per minute, 759 words, 251 seconds
Argument 1
Moving from generative AI to more complex scenarios and agentic AI requires orchestration across all AI layers, not just center of excellence approaches
EXPLANATION
Amol argues that as AI technology evolves beyond simple generative applications to more complex agentic systems, organizations need comprehensive orchestration across all layers of AI implementation. The traditional approach of having isolated centers of excellence is no longer sufficient for enterprise-scale AI deployment.
EVIDENCE
We are moving from generative AI to AI ML, more complex scenarios and agentic AI. So far, AI in its very nascent forms had been a thing of center of excellences, trying use cases and seeing what is happening, but now it has come to a scale.
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
AGREED WITH
Andy Parsons, Prativa Mohapatra, Sarika Guliani, Moderator
Argument 2
Large conglomerates need to balance centralized compliance with decentralized business unit needs by providing scalable, safe environments with guardrails
EXPLANATION
Amol explains that large diversified organizations like RPG Group face the challenge of maintaining consistent AI governance while allowing individual business units the flexibility to implement AI solutions appropriate to their specific needs. This requires creating standardized frameworks with built-in safety measures.
EVIDENCE
You need to provide the playground for the enterprise, to operate function with agility. It’s more of a bring your own AI kind of a scenario in every function. You cannot provide one solution. One size doesn’t fit all. A scalable, safe environment with protected with guardrails is a key thing for us.
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
Argument 3
Industry bodies and partnerships are crucial for disseminating responsible AI learnings and creating domain-specific templates that MSMEs can access
EXPLANATION
Amol emphasizes that smaller organizations lack the resources to develop comprehensive AI governance frameworks independently. Industry associations and partnerships play a vital role in sharing knowledge and creating adaptable templates that can be customized for different sectors and company sizes.
EVIDENCE
Organizations like us have the responsibilities of creating a framework which will be fair to us as well as to our customers and partners and those learnings can be shared across the value chain where MSMEs may not have access to that kind of information. This would change from industry to industry, function to function, and that kind of cascading through industry bodies like FICCI is very critical.
MAJOR DISCUSSION POINT
Inclusive AI and Bridging the Digital Divide
AGREED WITH
Andy Parsons, Prativa Mohapatra
Prativa Mohapatra
5 arguments, 156 words per minute, 1126 words, 432 seconds
Argument 1
Firefly embeds content credentials and uses only licensed input data to ensure enterprises avoid liability issues when generating content
EXPLANATION
Prativa explains how Adobe’s Firefly tool implements responsible AI by using only licensed training data and embedding content credentials in generated outputs. This approach ensures that enterprises using the tool won’t face legal issues related to copyright infringement or content provenance.
EVIDENCE
Firefly, which is our Gen AI tool, actually embeds content credentials. Anything that you have being generated out of this product will have that nutrition level. The input has to be something which will not land you in trouble. You cannot take somebody else’s data. So here it is everything licensed.
MAJOR DISCUSSION POINT
Content Authenticity and Transparency Standards
AGREED WITH
Andy Parsons, Dr. Satya Ramaswamy, Vishal Anand Kanvaty
Argument 2
Acrobat Assistant follows the same trust principles as PDF creation, allowing users to work with authenticated sources and maintain accountability
EXPLANATION
Prativa describes how Adobe’s AI-powered Acrobat Assistant maintains the same trust standards that users expect from PDF documents. By working with user-provided files and maintaining transparency about sources, it helps prevent issues like fictitious legal references that have appeared in some AI-generated legal documents.
EVIDENCE
You feed the data or you feed files from your own machine. So you’re confident that what comes out of it, you can go back. The Supreme Court was very worried that there were certain lawyers who had petitions with reference to cases which do not exist or certain laws stated which are fictitious.
MAJOR DISCUSSION POINT
Content Authenticity and Transparency Standards
AGREED WITH
Andy Parsons, Dr. Satya Ramaswamy, Vishal Anand Kanvaty
Argument 3
Enterprises must address business strategy, ethical strategies, and regulatory compliance simultaneously when implementing AI solutions
EXPLANATION
Prativa argues that successful AI implementation requires organizations to consider three critical dimensions concurrently: business objectives, ethical considerations, and regulatory requirements. Missing any one of these elements can leave organizations unprepared for future challenges.
EVIDENCE
Enterprises they do business strategy, they have ethical strategies and then they have regulatory compliance. All the three anything that you do in AI, ensure that you tick all the three. If you miss any one, you might not be ready for the future.
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
AGREED WITH
Andy Parsons, Amol Deshpande, Sarika Guliani, Moderator
Argument 4
Large enterprises creating AI technologies have responsibility to develop frameworks that smaller organizations can adopt, preventing a stark divide between big and small players
EXPLANATION
Prativa warns about the risk of creating a significant gap between large enterprises that can afford comprehensive AI governance and smaller organizations that cannot. She argues that technology creators have a responsibility to develop accessible frameworks that can be adopted across different organization sizes.
EVIDENCE
We stand in that time when on the creators of the AI technology, the big guys versus the small guys, that divide might just become very stark. Hence, it is responsibility of all enterprises to be responsible for creating those responsible AI frameworks. The creators of these technologies have to come together and keep on creating this method and methodology for others to adopt.
MAJOR DISCUSSION POINT
Inclusive AI and Bridging the Digital Divide
AGREED WITH
Andy Parsons, Amol Deshpande
Argument 5
Organizations need to restructure legal and compliance teams to handle AI-specific guidelines across multiple jurisdictions and create new organizational frameworks
EXPLANATION
Prativa explains that AI implementation requires significant organizational changes, including expanding legal and compliance teams to handle AI-specific regulations across different countries and creating new organizational structures. Smaller organizations may lack the resources for such comprehensive restructuring.
EVIDENCE
The big companies have to quickly create a new org structure, have to create the legal teams, which had to just mull over the digital guidelines of various continents, countries, now have to go through the AI guidelines of countries. So small organizations cannot do that.
MAJOR DISCUSSION POINT
Operational Implementation Challenges
Dr. Satya Ramaswamy
4 arguments, 183 words per minute, 1035 words, 338 seconds
Argument 1
Air India’s generative AI virtual assistant handles 40,000 queries daily at 1% the cost of contact centers while maintaining 97% autonomous success rate through embedded safety procedures
EXPLANATION
Dr. Satya presents Air India’s AI implementation as a successful example of balancing efficiency with responsibility. The system demonstrates significant cost savings and high performance while maintaining safety through built-in procedures and continuous monitoring.
EVIDENCE
In May of 2023, we launched the global airline industry’s very first generative AI virtual assistant. Today, it has handled about 13.5 million queries, about 40,000 queries a day and it operates at a cost which is 1% of a contact center. It handles 97% of the queries autonomously, only 3% are escalated further to the agent.
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
Argument 2
Aviation industry’s safety-critical nature requires human-in-the-loop controls, allowing pilots to override automated systems when safety is at risk
EXPLANATION
Dr. Satya explains how the aviation industry’s safety-first culture provides a model for responsible AI implementation. The industry has long-established practices of maintaining human oversight and control over automated systems, which can be applied to AI governance.
EVIDENCE
There is a red button in the joystick, so at any moment I feel that the airplane is not doing the right thing, the autopilot control is not in the right thing and quickly cancel and take our control. This concept is well embedded in the airline industry.
MAJOR DISCUSSION POINT
Operational Implementation Challenges
Argument 3
Balancing AI safety controls with customer convenience requires continuous monitoring and allowing customer feedback on AI system performance
EXPLANATION
Dr. Satya describes the challenge of setting appropriate safety levels for AI systems – too much safety creates customer inconvenience, while too little risks inappropriate responses. Air India addresses this through continuous monitoring and customer feedback mechanisms.
EVIDENCE
If you dial the safety knob too much then it is an inconvenience to the customer. At the same time, we don’t want any jailbreak to happen. We use Generative AI to watch the performance of the Generative AI chatbot and we ask the customer, did it answer your question? And allow them to give their reactions.
MAJOR DISCUSSION POINT
Operational Implementation Challenges
AGREED WITH
Andy Parsons, Prativa Mohapatra, Vishal Anand Kanvaty
DISAGREED WITH
Vishal Anand Kanvaty
Argument 4
International airlines must comply with multiple regulatory frameworks across different countries while maintaining innovation capabilities
EXPLANATION
Dr. Satya explains that as an international airline, Air India must navigate and comply with various regulatory frameworks across different jurisdictions. This experience demonstrates that regulatory compliance doesn’t constrain innovation but rather provides a structured approach to responsible development.
EVIDENCE
We operate in many countries, US where federal aviation regulation is the key regulator, Europe, India where DGCA is the regulator. We comply with all the regulations, and it doesn’t in any way constrain Indian innovation. We launched the global airline industry’s first-generation virtual agent out of India.
MAJOR DISCUSSION POINT
Regulatory Framework and Industry Self-Governance
AGREED WITH
Andy Parsons, Vishal Anand Kanvaty
DISAGREED WITH
Vishal Anand Kanvaty
Vishal Anand Kanvaty
3 arguments, 184 words per minute, 582 words, 189 seconds
Argument 1
NPCI prioritizes low false positives over high accuracy initially, ensuring genuine transactions aren’t incorrectly flagged as fraudulent
EXPLANATION
Vishal explains NPCI’s approach to fraud detection AI, where they prioritize minimizing false positives (legitimate transactions being blocked) over achieving maximum accuracy. This approach protects genuine users while gradually improving the system’s performance as more data becomes available.
EVIDENCE
We have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very high. Over a period of time, once we had more data, once we collaborated with the industry and ecosystem, we were able to achieve higher accuracy.
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
DISAGREED WITH
Dr. Satya Ramaswamy
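The trade-off Vishal describes, capping false positives even at the cost of overall accuracy, amounts to a threshold-selection rule on fraud scores. The sketch below is an illustrative toy, not NPCI’s actual system; the scoring scale and the `max_fpr` cap are assumptions:

```python
def pick_threshold(scores_genuine, scores_fraud, max_fpr=0.01):
    """Return the lowest fraud-score threshold whose false-positive rate
    on genuine transactions stays within max_fpr.

    A lower threshold flags more fraud but also more genuine payments;
    scanning candidates in ascending order finds the most aggressive
    threshold that still respects the false-positive cap.
    """
    candidates = sorted(set(scores_genuine) | set(scores_fraud))
    for t in candidates:
        flagged = sum(1 for s in scores_genuine if s >= t)
        if flagged / len(scores_genuine) <= max_fpr:
            return t
    # No candidate threshold qualifies: flag nothing rather than
    # block genuine transactions.
    return float("inf")
```

As more labeled data accumulates, the same cap can be held while the model’s scores separate the two classes better, which is how accuracy improves over time without ever loosening the false-positive guarantee.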
Argument 2
NPCI provides transparency by allowing customers to understand why transactions fail through small language models that explain decision-making processes
EXPLANATION
Vishal describes how NPCI implements transparency in their AI systems by providing customers with explanations for transaction failures. They use small language models to create an interactive system where customers can query and understand the reasons behind AI decisions.
EVIDENCE
We build a small language model where you can go and actually chat and say what happened to this transaction why is it declined. Even if it is declined due to fraudulent transactions, we can actually tell him that you normally don’t send this transaction or you don’t scan a QR ever, this is the first time you’re doing so.
MAJOR DISCUSSION POINT
Inclusive AI and Bridging the Digital Divide
AGREED WITH
Andy Parsons, Prativa Mohapatra, Dr. Satya Ramaswamy
Argument 3
Industry-led governance alone is insufficient; regulatory intervention is necessary because AI can have widespread systemic impacts
EXPLANATION
Vishal argues that while industry self-regulation is important, formal regulatory frameworks are essential for AI governance due to the potential for widespread systemic impacts. He emphasizes that regulations must be developed collaboratively with industry input but are ultimately necessary for managing AI risks.
EVIDENCE
AI can go berserk. Today all the UPI transactions can get declined. Those safeguards are very much required. And when this has to be across the ecosystem, the regulations are mandatory. While all of us realize it’s a great opportunity, regulation is one thing that we have to really take it as part of the initiatives.
MAJOR DISCUSSION POINT
Regulatory Framework and Industry Self-Governance
AGREED WITH
Andy Parsons, Dr. Satya Ramaswamy
DISAGREED WITH
Dr. Satya Ramaswamy
Sarika Guliani
1 argument, 142 words per minute, 590 words, 249 seconds
Argument 1
Responsible AI development should be viewed as a commitment to shared human values rather than just a compliance checkbox
EXPLANATION
Sarika emphasizes that responsible AI should be approached as a fundamental commitment to human values and societal benefit, not merely as a regulatory requirement to be checked off. She argues that the choices made in AI development will define the future, making it crucial to prioritize people-centered approaches.
EVIDENCE
Responsibility is not anymore a compliance check which is supposed to be there, it’s a commitment of the technology that we should develop which has shared human values. What we choose to create is something what will get defined. The theme of the summit people planet and progress should be kept in mind while doing any technological innovation.
MAJOR DISCUSSION POINT
Regulatory Framework and Industry Self-Governance
AGREED WITH
Andy Parsons, Amol Deshpande, Prativa Mohapatra, Moderator
Moderator
1 argument, 132 words per minute, 132 words, 59 seconds
Argument 1
Trust, transparency and accountability are foundational requirements for responsible AI deployment, not optional features
EXPLANATION
The moderator establishes that as India advances in its digital journey with AI as a powerful engine for innovation and productivity, the key differentiator is not speed of adoption but responsible deployment. These three principles must be embedded as foundational elements rather than treated as optional add-ons.
EVIDENCE
India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and productivity. But the real differentiator, is it about how quickly do we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational.
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
AGREED WITH
Andy Parsons, Amol Deshpande, Prativa Mohapatra, Sarika Guliani
Shantheri Mallaya
4 arguments, 159 words per minute, 1631 words, 611 seconds
Argument 1
India is charting the course for the world in building trustworthy and inclusive AI at a momentous time
EXPLANATION
Shantheri positions India as a global leader in responsible AI development, emphasizing that the country is setting standards and approaches that will influence worldwide AI governance. She highlights the importance of understanding how premium leaders in India are thinking about this challenge.
EVIDENCE
Very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI, industry, infrastructure, policy perspectives, it really becomes important to know what some premium leaders in the country are thinking about this.
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
Argument 2
Large conglomerates face the dual risk of either over-centralizing AI governance as mere compliance or fragmenting it into disconnected business unit checklists
EXPLANATION
Shantheri identifies a critical challenge for large organizations in implementing responsible AI – finding the right balance between centralized oversight and decentralized implementation. She highlights that organizations must avoid both extremes of rigid centralization and fragmented approaches.
EVIDENCE
In large multiple businesses such as the RPG group, how are you really preventing responsible AI from really becoming something like a just mere centralized compliance exercise or something that’s on the other flip side becoming a fragmented business unit wise checklist. So there are two risks that can happen, right? In a group, in a conglomerate, large scale or decentralized.
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
Argument 3
There is a risk that responsible AI could become a luxury for large enterprises while smaller businesses struggle to implement proper frameworks
EXPLANATION
Shantheri raises concern about the potential for responsible AI to become accessible only to large organizations with sufficient resources, while MSMEs and growing businesses may not be able to implement proper governance frameworks. This could create an unfair divide in AI adoption and safety.
EVIDENCE
Responsible AI while you know in letter and spirit remains there do you think it kind of remains or rather getting relegated to the risk of becoming something of an enterprise large enterprise luxury is it able to cut across and come down the line to you know aspiring businesses growing businesses MSMEs is it able to cut through the noise
MAJOR DISCUSSION POINT
Inclusive AI and Bridging the Digital Divide
Argument 4
Balancing AI-driven innovation with regulation, accountability, and customer experience requires careful consideration of vision and metrics
EXPLANATION
Shantheri emphasizes the complex challenge facing organizations, particularly in critical sectors like aviation, where they must simultaneously drive innovation, comply with regulations, maintain accountability, and deliver excellent customer experiences. This requires sophisticated approaches to measuring and managing these competing priorities.
EVIDENCE
Aviation volumes, landscape, scale, I mean you name it, it’s all there. So how are we really looking at balancing AI driven innovation really? Where you’re looking at regulation, you’re looking at accountability, you’re looking at operational efficiency. At the same time, you cannot really compromise on user and customer experience. How do these things really fall in place in terms of vision and metrics?
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
Agreements
Agreement Points
Transition from AI principles to practical implementation
Speakers: Andy Parsons, Amol Deshpande, Prativa Mohapatra, Sarika Guliani, Moderator
2026 will mark the shift from AI responsibility being a slide in a deck to actual compliance strategy and opportunity due to regulatory enforcement
The question has changed from “should we be responsible with AI?” to “can your systems prove you have been responsible with AI?”
Responsible AI conversation has matured and now requires pragmatic implementation with demonstrable practices rather than just principles on websites
Moving from generative AI to more complex scenarios and agentic AI requires orchestration across all AI layers, not just center of excellence approaches
Enterprises must address business strategy, ethical strategies, and regulatory compliance simultaneously when implementing AI solutions
Responsible AI development should be viewed as a commitment to shared human values rather than just a compliance checkbox
Trust, transparency and accountability are foundational requirements for responsible AI deployment, not optional features
All speakers agree that the industry has moved beyond theoretical discussions about responsible AI and must now focus on concrete, demonstrable implementation with proper governance frameworks
Need for transparency and accountability in AI systems
Speakers: Andy Parsons, Prativa Mohapatra, Dr. Satya Ramaswamy, Vishal Anand Kanvaty
Adobe’s Content Authenticity Initiative provides transparency for AI-generated content through C2PA content credentials, acting as “nutrition labels” for digital content
Firefly embeds content credentials and uses only licensed input data to ensure enterprises avoid liability issues when generating content
Acrobat Assistant follows the same trust principles as PDF creation, allowing users to work with authenticated sources and maintain accountability
Balancing AI safety controls with customer convenience requires continuous monitoring and allowing customer feedback on AI system performance
NPCI provides transparency by allowing customers to understand why transactions fail through small language models that explain decision-making processes
Speakers consistently emphasize the importance of transparency mechanisms that allow users to understand AI decision-making processes and content provenance
Importance of addressing the digital divide in AI adoption
Speakers: Andy Parsons, Amol Deshpande, Prativa Mohapatra
Open standards should be available to independent creators at zero cost, same as Fortune 500 enterprises, ensuring inclusivity in AI adoption
Industry bodies and partnerships are crucial for disseminating responsible AI learnings and creating domain-specific templates that MSMEs can access
Large enterprises creating AI technologies have responsibility to develop frameworks that smaller organizations can adopt, preventing a stark divide between big and small players
All speakers recognize the risk of creating a divide between large enterprises and smaller organizations in AI adoption, emphasizing the need for accessible frameworks and standards
Regulatory frameworks as catalysts rather than constraints
Speakers: Andy Parsons, Dr. Satya Ramaswamy, Vishal Anand Kanvaty
Regulation serves as a catalyst for good practices rather than a constraint, helping enterprises move from reactive to proactive responsible AI adoption
International airlines must comply with multiple regulatory frameworks across different countries while maintaining innovation capabilities
Industry-led governance alone is insufficient; regulatory intervention is necessary because AI can have widespread systemic impacts
Speakers view regulation as a positive force that enables innovation within structured frameworks rather than as a limitation on development
Similar Viewpoints
Both speakers emphasize that responsible AI requires fundamental architectural changes in both technology design and organizational structure, not superficial additions
Speakers: Andy Parsons, Prativa Mohapatra
Content transparency must be baked into tools at their core rather than grafted on as features, requiring open standards and cross-industry collaboration
Organizations need to restructure legal and compliance teams to handle AI-specific guidelines across multiple jurisdictions and create new organizational frameworks
Both speakers demonstrate practical approaches to balancing AI efficiency with safety, prioritizing user experience while maintaining robust safeguards
Speakers: Dr. Satya Ramaswamy, Vishal Anand Kanvaty
Air India’s generative AI virtual assistant handles 40,000 queries daily at 1% the cost of contact centers while maintaining 97% autonomous success rate through embedded safety procedures
NPCI prioritizes low false positives over high accuracy initially, ensuring genuine transactions aren’t incorrectly flagged as fraudulent
Both speakers recognize the special responsibility of large organizations to create scalable frameworks that can be adapted across different contexts and organization sizes
Speakers: Amol Deshpande, Prativa Mohapatra
Large conglomerates need to balance centralized compliance with decentralized business unit needs by providing scalable, safe environments with guardrails
Large enterprises creating AI technologies have responsibility to develop frameworks that smaller organizations can adopt, preventing a stark divide between big and small players
Unexpected Consensus
Human-in-the-loop as essential safety mechanism
Speakers: Andy Parsons, Dr. Satya Ramaswamy, Vishal Anand Kanvaty
The question has changed from “should we be responsible with AI?” to “can your systems prove you have been responsible with AI?”
Aviation industry’s safety-critical nature requires human-in-the-loop controls, allowing pilots to override automated systems when safety is at risk
NPCI prioritizes low false positives over high accuracy initially, ensuring genuine transactions aren’t incorrectly flagged as fraudulent
Despite coming from very different industries (content creation, aviation, and payments), all speakers converged on the importance of maintaining human oversight and control mechanisms in AI systems, suggesting this is a universal principle across sectors
Open standards as foundation for responsible AI
Speakers: Andy Parsons, Amol Deshpande, Prativa Mohapatra
Content transparency must be baked into tools at their core rather than grafted on as features, requiring open standards and cross-industry collaboration
Industry bodies and partnerships are crucial for disseminating responsible AI learnings and creating domain-specific templates that MSMEs can access
Large enterprises creating AI technologies have responsibility to develop frameworks that smaller organizations can adopt, preventing a stark divide between big and small players
Unexpectedly, speakers from both technology providers and enterprise users agreed on the critical importance of open, non-proprietary standards, suggesting a shift away from competitive advantage through proprietary AI governance approaches
Overall Assessment

The speakers demonstrated remarkable consensus across multiple dimensions of responsible AI implementation, including the need for practical implementation over theoretical principles, transparency mechanisms, inclusive access to AI governance frameworks, and the positive role of regulation. There was also unexpected agreement on human oversight mechanisms and open standards approaches across different industries.

High level of consensus with significant implications for the responsible AI landscape. The agreement suggests that industry leaders are aligned on fundamental principles and ready to move toward coordinated implementation. This consensus could accelerate the development of industry-wide standards and collaborative approaches to AI governance, particularly important as regulatory frameworks emerge globally in 2026.

Differences
Different Viewpoints
Industry self-regulation versus regulatory intervention necessity
Speakers: Vishal Anand Kanvaty, Dr. Satya Ramaswamy
Industry-led governance alone is insufficient; regulatory intervention is necessary because AI can have widespread systemic impacts
International airlines must comply with multiple regulatory frameworks across different countries while maintaining innovation capabilities
Vishal argues that regulations are mandatory because AI can have systemic impacts (‘AI can go berserk’), while Dr. Satya suggests that existing regulatory compliance doesn’t constrain innovation and can work effectively within current frameworks
Approach to AI safety controls and risk management
Speakers: Dr. Satya Ramaswamy, Vishal Anand Kanvaty
Balancing AI safety controls with customer convenience requires continuous monitoring and allowing customer feedback on AI system performance
NPCI prioritizes low false positives over high accuracy initially, ensuring genuine transactions aren’t incorrectly flagged as fraudulent
Dr. Satya focuses on balancing safety with convenience through monitoring and feedback, while Vishal emphasizes starting with conservative approaches that prioritize avoiding false positives over achieving high accuracy
Unexpected Differences
Timeline urgency for responsible AI implementation
Speakers: Andy Parsons, Other panelists
2026 will mark the shift from AI responsibility being a slide in a deck to actual compliance strategy and opportunity due to regulatory enforcement
Various implementation approaches without specific timeline emphasis
While Andy emphasizes 2026 as a critical deadline driven by regulatory enforcement, other panelists discuss implementation without the same sense of regulatory urgency, suggesting different perspectives on the timeline pressure for responsible AI adoption
Overall Assessment

The discussion revealed relatively low levels of fundamental disagreement, with most speakers aligned on the need for responsible AI implementation. The main disagreements centered on regulatory approaches and risk management strategies rather than core principles.

Low to moderate disagreement level. Speakers generally agreed on fundamental principles but differed on implementation approaches, regulatory necessity, and risk management strategies. This suggests a maturing field where basic concepts are accepted but operational details remain contested. The implications are positive for the field as it indicates consensus on core values while allowing for diverse implementation approaches tailored to different sectors and organizational needs.

Partial Agreements
All speakers agree that responsible AI must move from principles to practice, but they disagree on implementation approaches – Andy emphasizes open standards and cross-industry collaboration, Prativa focuses on embedding responsibility in product design, while Amol advocates for orchestrated frameworks across business units
Speakers: Andy Parsons, Prativa Mohapatra, Amol Deshpande
Responsible AI conversation has matured and now requires pragmatic implementation with demonstrable practices rather than just principles on websites
Enterprises must address business strategy, ethical strategies, and regulatory compliance simultaneously when implementing AI solutions
Moving from generative AI to more complex scenarios and agentic AI requires orchestration across all AI layers, not just center of excellence approaches
Both agree that large enterprises must help smaller organizations access responsible AI frameworks, but Prativa emphasizes the responsibility of technology creators to develop accessible frameworks, while Amol focuses on industry bodies and partnerships as the mechanism for knowledge dissemination
Speakers: Prativa Mohapatra, Amol Deshpande
Large enterprises creating AI technologies have responsibility to develop frameworks that smaller organizations can adopt, preventing a stark divide between big and small players
Industry bodies and partnerships are crucial for disseminating responsible AI learnings and creating domain-specific templates that MSMEs can access
Takeaways
Key takeaways
2026 marks a critical transition point where responsible AI shifts from principles to mandatory compliance and demonstrable practices due to regulatory enforcement (EU AI Act, California laws, India’s IT rules)
Responsible AI requires implementation across all layers of AI systems with proper orchestration, not just isolated center of excellence approaches
Content authenticity and transparency must be built into AI tools at their core, using open standards like C2PA content credentials that act as ‘nutrition labels’ for digital content
Large enterprises have a collective responsibility to create frameworks and standards that smaller organizations and MSMEs can adopt to prevent a digital divide in responsible AI access
Successful enterprise AI implementation requires balancing three elements simultaneously: business strategy, ethical strategies, and regulatory compliance
Industry-led self-governance alone is insufficient – regulatory intervention is necessary due to AI’s potential for widespread systemic impact
Practical implementation challenges include uneven adoption, early-stage consumer awareness, platform metadata stripping, and the need for continuous human oversight
Cross-industry collaboration and open standards are essential for creating interoperable, scalable responsible AI infrastructure
Resolutions and action items
FICCI committed to continuing the dialogue and translating discussions into actionable initiatives with industry support
Adobe’s ART framework (Accountability, Responsibility, Transparency) was presented as a practical philosophy for organizations to implement
Industry bodies like FICCI should facilitate knowledge dissemination and create domain-specific responsible AI templates for different sectors
Large technology creators must develop open frameworks and standards that can be adopted across the ecosystem
Organizations need to restructure legal and compliance teams to handle AI-specific guidelines and create new organizational frameworks
Enterprises should implement input-output validation processes for AI systems, ensuring both data sources and outputs meet responsibility standards
Unresolved issues
The specific balance between light-touch regulation versus comprehensive regulatory frameworks remains undefined and requires further discussion
How to effectively scale responsible AI practices from large enterprises to MSMEs without creating prohibitive barriers to innovation
The challenge of maintaining innovation speed while implementing comprehensive responsible AI governance across diverse industry sectors
Consumer awareness and adoption of content authenticity standards remains low, with unclear timelines for widespread recognition
The business case for AI transparency and provenance continues to be challenging, particularly for smaller organizations
How to harmonize multiple international regulatory frameworks while maintaining domestic innovation capabilities
The technical challenge of preventing social media platforms from stripping metadata that enables content transparency
Suggested compromises
Start with lower AI accuracy but prioritize reducing false positives to build trust gradually, as demonstrated by NPCI’s approach to fraud detection
Implement scalable environments with guardrails that allow ‘bring your own AI’ flexibility while maintaining safety standards
Balance AI safety controls with customer convenience through continuous monitoring and customer feedback mechanisms
Use AI systems to monitor other AI systems while maintaining human oversight and control mechanisms
Adopt phased implementation approaches that allow for learning and adjustment rather than immediate full-scale deployment
Create industry-specific templates and frameworks rather than one-size-fits-all solutions to accommodate diverse business needs
Leverage existing trust infrastructures (like PDF for documents) as models for building trust in new AI applications
Thought Provoking Comments
2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity… The question for everyone in this room has changed from should we be responsible with AI? But can your systems actually prove that you have been responsible with AI, and how do you go about doing that?
This comment reframes the entire discussion from theoretical principles to practical implementation. It shifts the focus from whether organizations should adopt responsible AI to how they can demonstrate and prove their responsibility – a much more concrete and actionable challenge.
This set the foundational tone for the entire panel discussion, moving away from abstract principles to practical implementation. All subsequent panelists referenced this shift from ‘principles to practice’ and focused on concrete examples and operational challenges rather than theoretical frameworks.
Speaker: Andy Parsons
We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food… you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.
This analogy brilliantly simplifies a complex technical concept by comparing AI content transparency to something universally understood – food nutrition labels. It makes the abstract concept of content provenance tangible and relatable.
This metaphor became a recurring reference point throughout the discussion, with other panelists building on the concept of transparency and ‘knowing what’s in’ AI-generated content. It helped ground the technical discussion in everyday consumer experience.
Speaker: Andy Parsons
It’s more about, you know, if one can say it will be more of a bring your own AI kind of a scenario in every function. You cannot provide one solution. One size doesn’t fit all.
This introduces a paradigm shift in how enterprises should think about AI deployment – from centralized, uniform solutions to distributed, function-specific approaches. The ‘bring your own AI’ concept challenges traditional IT governance models.
This comment shifted the discussion toward the practical challenges of governance in decentralized AI adoption. It influenced subsequent speakers to address how to maintain responsible AI principles across diverse, distributed implementations rather than through centralized control.
Speaker: Amol Deshpande
So if every person goes back to their organizations and talks about ART, which is our philosophy, that’s practicing philosophy number one. And we actually have been doing this for our own products for a while now… Which is accountability, responsibility, and transparency.
The clever use of ‘ART’ as an acronym makes responsible AI principles memorable and actionable. It transforms abstract concepts into a simple framework that attendees can immediately implement in their organizations.
This provided a concrete takeaway that other panelists and the moderator referenced. It demonstrated how complex principles can be distilled into practical, memorable frameworks that drive organizational behavior.
Speaker: Prativa Mohapatra
I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high. I think that was the first principles on which we started.
This reveals a crucial insight about implementing AI in high-stakes environments – the counterintuitive approach of accepting lower accuracy to minimize false positives. It shows how responsible AI sometimes means making trade-offs that prioritize user experience over technical metrics.
This comment introduced the critical concept of balancing technical performance with user impact, influencing the discussion toward practical trade-offs in AI implementation rather than pursuing maximum technical accuracy at all costs.
Speaker: Vishal Anand Kanwati
So the creators of these technologies have to come together and keep on creating this method and methodology for others to adopt… So I guess it is creators have to create frameworks so the right technology is created. The users, the big guys, have to quickly tell the methodology.
This identifies a critical ecosystem responsibility – that large technology creators have an obligation to develop frameworks that smaller organizations can adopt. It addresses the democratization challenge of responsible AI beyond just the technology itself.
This comment elevated the discussion from individual organizational responsibility to ecosystem-wide responsibility, prompting other panelists to consider their roles in supporting smaller organizations and the broader industry ecosystem.
Speaker: Prativa Mohapatra
Is industry-led governance realistically possible, or is regulatory intervention an inevitability?
This question cuts to the heart of a fundamental tension in AI governance – whether the industry can self-regulate effectively or whether external regulation is necessary. It forces panelists to take a position on a critical policy question.
This question prompted the most direct policy discussion of the session, with panelists having to articulate their views on the role of regulation versus self-governance, leading to nuanced responses about the necessity and inevitability of regulatory frameworks.
Speaker: Shantari Malaya
Overall Assessment

These key comments fundamentally shaped the discussion by moving it from theoretical principles to practical implementation challenges. Andy Parsons’ opening reframing set the tone for the entire session, while the ‘nutrition labels’ analogy provided an accessible framework for understanding complex technical concepts. The panelists’ contributions built on each other progressively – from Amol’s ‘bring your own AI’ concept highlighting decentralization challenges, to Prativa’s ‘ART’ framework providing actionable principles, to Vishal’s insights on balancing accuracy with user impact. The discussion evolved from individual organizational challenges to ecosystem-wide responsibilities, culminating in fundamental questions about governance models. These comments created a coherent narrative arc that took the audience from understanding the ‘why’ of responsible AI to grappling with the ‘how’ of implementation across different scales and contexts.

Follow-up Questions
How can systems actually prove that you have been responsible with AI, and what does it cost in terms of implementation and day-to-day usage?
This represents a shift from theoretical principles to practical implementation and measurement of responsible AI practices, which is crucial for enterprise adoption and compliance.
Speaker: Andy Parsons
How to balance the safety dial in AI systems – if you dial the safety knob too much it becomes inconvenient to customers, but you don’t want jailbreaks or prompt injection to happen?
This addresses the practical challenge of finding the right balance between AI safety measures and user experience in real-world applications.
Speaker: Dr. Satya Ramaswamy
How to ensure AI compliance teams are properly structured and resourced across legal, business strategy, ethical strategies, and regulatory compliance?
This highlights the organizational restructuring needed for enterprises to properly implement responsible AI practices across multiple domains.
Speaker: Prativa Mohapatra
How can industry frameworks and learnings be effectively disseminated to MSMEs who may not have access to the same resources as large enterprises?
This addresses the equity challenge in responsible AI adoption, ensuring smaller businesses aren’t left behind due to resource constraints.
Speaker: Amol Deshpande
What specific regulatory framework would work best – light touch regulation versus balanced regulation?
This was identified as requiring another dedicated session to properly explore the optimal regulatory approach for responsible AI.
Speaker: Sarika Guliani
How to create domain-specific and function-specific responsible AI templates that can work across different industries?
This addresses the need for customized approaches to responsible AI implementation across diverse business sectors and use cases.
Speaker: Amol Deshpande
How to improve consumer awareness and user interfaces for AI transparency features like content credentials?
This addresses the challenge of making AI transparency tools more accessible and understandable to end users.
Speaker: Andy Parsons
How to prevent social media platforms from stripping metadata and removing transparency when AI-generated content is uploaded?
This highlights a technical and policy challenge in maintaining content provenance across different platforms and systems.
Speaker: Andy Parsons

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Responsible AI in India Leadership Ethics & Global Impact

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion, titled “Responsible AI from Principles to Practice in Corporate India,” examined how enterprises can translate responsible AI principles into practical implementation strategies. The session was presented by Adobe in association with FICCI and featured industry leaders from Air India, NPCI, RPG Group, and Adobe.


Andy Parsons from Adobe opened by emphasizing that 2026 will mark a pivotal shift where responsible AI becomes both a regulatory requirement and business opportunity, particularly with the EU AI Act and new regulations in California and India taking effect. He highlighted Adobe’s Content Authenticity Initiative and the C2PA standard as examples of building transparency into AI systems from the ground up, comparing it to nutrition labels that provide essential information about digital content origins and creation methods.


The panel discussion revealed diverse approaches to implementing responsible AI across industries. Amol Deshpande from RPG Group emphasized the need for orchestrated responsibility across all five layers of AI deployment, advocating for a “bring your own AI” approach with proper guardrails rather than one-size-fits-all solutions. Dr. Satya Ramaswamy from Air India shared their experience launching the airline industry’s first generative AI virtual assistant, which now handles 40,000 customer queries daily while maintaining strict safety protocols and human oversight capabilities.


Prativa Mohapatra from Adobe introduced the “ART” framework – Accountability, Responsibility, and Transparency – as a practical approach for organizations to implement responsible AI governance. She stressed that large enterprises have a responsibility to create frameworks that smaller companies can adopt, preventing responsible AI from becoming a luxury only available to large corporations. Vishal Anand Kanwati from NPCI discussed balancing fraud detection accuracy with minimizing false positives in payment systems, emphasizing the importance of transparency in explaining AI decisions to customers.


The discussion concluded that while industry-led governance is valuable, regulatory intervention is inevitable and necessary to ensure responsible AI deployment at scale across India’s diverse digital ecosystem.


Keypoints

Major Discussion Points:

Transition from AI principles to provable practice: The discussion emphasized moving beyond theoretical commitments to responsible AI toward demonstrable, measurable implementation. Andy Parsons highlighted that 2026 will be pivotal as regulatory frameworks like the EU AI Act take effect, making compliance both a legal requirement and business opportunity.


Industry-specific implementation challenges and solutions: Panelists shared concrete examples of responsible AI deployment across different sectors – Air India’s generative AI virtual assistant handling 40,000 daily queries, NPCI’s fraud detection systems balancing accuracy with false positives, and Adobe’s content authenticity initiatives with embedded transparency features.


Governance frameworks and organizational structure: The conversation explored how large enterprises can avoid both over-centralized compliance and fragmented business unit approaches. Key themes included the “bring your own AI” concept, the need for cross-functional teams (legal, compliance, technical), and the importance of human-in-the-loop systems.


Democratization vs. enterprise luxury concern: Panelists discussed whether responsible AI practices risk becoming accessible only to large enterprises, leaving MSMEs behind. The conversation highlighted the collective responsibility of technology creators and industry leaders to develop accessible frameworks and standards.


Regulatory landscape and industry self-governance: The discussion concluded with debate over whether industry-led governance is sufficient or if regulatory intervention is inevitable, with consensus that regulations are necessary given AI’s potential societal impact, especially in critical sectors like payments and aviation.


Overall Purpose:

The discussion aimed to provide practical guidance for translating responsible AI principles into actionable enterprise strategies, moving beyond theoretical frameworks to real-world implementation across different industries and organizational scales.


Overall Tone:

The tone was professional and pragmatic throughout, with speakers sharing concrete examples and practical insights rather than abstract concepts. The conversation maintained an optimistic yet realistic perspective, acknowledging both the opportunities and challenges of responsible AI implementation. There was a collaborative spirit among panelists, with each building upon others’ insights while sharing industry-specific experiences. The tone remained consistently forward-looking, emphasizing collective responsibility and the urgency of establishing proper frameworks before regulatory deadlines.


Speakers

Speakers from the provided list:


Announcer – Session presenter/moderator


Andy Parsons – Global Head for Content Authenticity at Adobe, runs the Content Authenticity Initiative at Adobe


Shantari Malaya – Editor at Economic Times, panel moderator


Dr. Satya Ramaswamy – Chief Digital and Technology Officer at Air India Limited


Vishal Anand Kanwati – Chief Technology Officer, National Payments Corporation of India (NPCI)


Amol Deshpande – Group Chief Digital Officer and Head of Innovation at RPG Group


Prativa Mohapatra – Vice President and Managing Director of Adobe India


Sarika Guliani – Senior Director, Head AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI


Additional speakers:


None – all speakers mentioned in the transcript are included in the provided speakers names list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive discussion, titled “Responsible AI from Principles to Practice in Corporate India,” brought together industry leaders from Adobe, Air India, NPCI, RPG Group, and FICCI to examine how enterprises can translate theoretical responsible AI principles into practical implementation strategies. The session addressed the critical transition from aspirational AI governance to measurable, compliant systems as regulatory frameworks take effect globally.


The Paradigm Shift: From Principles to Provable Practice

Andy Parsons, Adobe’s Global Head for Content Authenticity, opened by establishing a fundamental reframing of the responsible AI conversation. He emphasized that 2026 will mark a pivotal transition where responsible AI evolves from voluntary corporate commitments to mandatory regulatory requirements, driven by the EU AI Act enforcement in August 2026, California’s first US legislation, and India’s emerging SGI rules. This shift transforms the central question from “should we be responsible with AI?” to “can your systems actually prove that you have been responsible with AI?”


Parsons argued that organizations can accelerate their AI adoption by implementing responsible practices proactively, positioning responsibility as an enabler of innovation rather than a constraint.


Content Authenticity as a Model for Implementation

Adobe’s Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity (C2PA) served as a concrete case study. Parsons introduced the concept of “nutrition labels” for digital content—referencing remarks made by Prime Minister Modi—where consumers have the right to understand how digital content was created. The C2PA standard, developed with Microsoft, BBC, OpenAI, Sony, and other partners, provides transparent context about media creation, including which AI models were used and whether content is AI-generated.
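The core idea behind content credentials – provenance metadata cryptographically bound to the asset so that tampering is detectable – can be sketched in a few lines. The snippet below is a loose illustration only, not the real C2PA format (actual manifests are signed with X.509 certificate chains and embedded in a standardized binary container); all field names and the model label are invented for the example.

```python
import hashlib

def make_manifest(asset_bytes, tool, model, ai_generated):
    """Build an illustrative provenance record bound to the asset's hash.

    NOTE: a simplified sketch of the *idea* behind C2PA content credentials,
    not the real C2PA manifest format, which is additionally signed.
    """
    return {
        "claim_generator": tool,      # product that made or edited the asset
        "ai_model": model,            # which model produced the content
        "ai_generated": ai_generated,
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }

def verify_manifest(asset_bytes, manifest):
    """Tamper check: the asset must still hash to the value in the manifest."""
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["asset_sha256"]

if __name__ == "__main__":
    image = b"...pretend image bytes..."
    m = make_manifest(image, tool="Firefly", model="image-model-v1",
                      ai_generated=True)
    print(verify_manifest(image, m))         # True: asset untouched
    print(verify_manifest(image + b"x", m))  # False: edited without new credentials
```

The hash binding is what lets provenance “travel with the asset”: any edit that does not also regenerate the credential is immediately visible, which is also why platforms stripping the metadata entirely remains the harder problem.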


However, Parsons acknowledged implementation challenges, including uneven platform adoption, limited consumer awareness, and social media platforms stripping transparency metadata. The business case for provenance investments remains challenging, as benefits to democratic discourse don’t always translate directly to revenue.


Industry-Specific Implementation Strategies

Aviation: Safety-Critical AI with Human Oversight

Dr. Satya Ramaswamy from Air India shared their experience launching the airline industry’s first generative AI virtual assistant in May 2023. The system handles approximately 40,000 customer queries daily from 100,000+ daily customers across 300 aircraft, processing 13.5 million queries to date at one-hundredth the cost of traditional contact centers, with a 97% autonomous resolution rate.


Air India continuously monitors the AI system using additional AI tools while maintaining customer feedback mechanisms. The aviation industry’s existing regulatory framework provided a natural foundation for responsible AI, with embedded human-in-the-loop control concepts. Their international operations require compliance with multiple regulatory frameworks, demonstrating that regulatory compliance need not constrain innovation.
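One common pattern for this kind of second-line monitoring can be sketched with simple rule-based checks standing in for the additional AI models the report describes. This is purely an illustrative toy, not Air India's system; the phrases, confidence threshold, and function names are all invented for the example.

```python
# Illustrative "AI monitoring AI" guardrail: every assistant reply passes
# through a reviewer before reaching the customer. Low-confidence answers
# and risky commitments are held for human-in-the-loop review.

ESCALATION_PHRASES = ("refund guaranteed", "legal action", "compensation owed")

def review(reply, model_confidence, min_confidence=0.7):
    """Return a verdict for a single assistant reply.

    'send'  -> reply goes to the customer as-is
    'human' -> reply is held for human review
    """
    text = reply.lower()
    if model_confidence < min_confidence:
        return "human"   # uncertain answers get a person
    if any(phrase in text for phrase in ESCALATION_PHRASES):
        return "human"   # risky commitments get a person
    return "send"
```

In production the keyword rules would typically be replaced or supplemented by a second model scoring each reply, but the shape is the same: a cheap always-on reviewer with a human escape hatch.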


Financial Infrastructure: Prioritizing User Experience

Vishal Anand Kanwati from NPCI provided insights into implementing AI in India’s critical payment infrastructure. NPCI’s approach prioritizes minimizing false positives—genuine transactions incorrectly flagged as fraudulent—recognizing that declining legitimate transactions severely damages user trust. They use small language models to explain to customers why transactions were declined, providing transparency for AI decisions.


NPCI implements safeguards preventing systemic failures, such as limiting the percentage of transactions that can be declined. As Kanwati noted, AI systems can “go berserk” and potentially decline all UPI transactions, requiring circuit breakers to prevent cascading failures across the payment ecosystem.
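The two safeguards described above – accepting lower fraud recall to cap false positives, and capping the total share of traffic that can ever be declined – can be sketched as follows. This is an illustrative toy, not NPCI's actual system; the thresholds, the decline-share cap, and all names are invented for the example.

```python
def pick_threshold(scores, labels, max_fpr=0.001):
    """Choose the lowest fraud-score threshold (i.e. the most fraud caught)
    whose false-positive rate - genuine transactions flagged as fraud -
    stays under max_fpr. Accepts lower recall to protect genuine users."""
    genuine = [s for s, y in zip(scores, labels) if y == 0]
    for t in sorted(set(scores)):
        fpr = sum(s >= t for s in genuine) / max(len(genuine), 1)
        if fpr <= max_fpr:
            return t
    return max(scores) + 1  # no qualifying threshold: decline nothing

class DeclineCircuitBreaker:
    """Fail-open guard: cap the share of processed transactions that may
    actually be declined, no matter what the model says."""
    def __init__(self, max_decline_share=0.02):
        self.max_share = max_decline_share
        self.total = 0
        self.declined = 0

    def decide(self, model_says_decline):
        self.total += 1
        if not model_says_decline:
            return "approve"
        if (self.declined + 1) / self.total > self.max_share:
            return "approve"  # cap hit: fail open rather than block everyone
        self.declined += 1
        return "decline"
```

The fail-open choice is the point: if the model starts flagging everything, the breaker silently overrides it, trading some missed fraud for keeping the payment rails up.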


Conglomerate Governance: Orchestrated Responsibility

Amol Deshpande from RPG Group addressed implementing responsible AI across a diverse conglomerate spanning infrastructure, healthcare, IT, agriculture, and manufacturing. He introduced the “bring your own AI” concept, recognizing that different business functions require different AI solutions while maintaining consistent safety standards.


Deshpande emphasized that awareness is the first step toward responsibility, requiring significant investment in building AI literacy across the value chain. RPG’s approach provides scalable, safe environments with appropriate guardrails while allowing business units operational agility.


The ART Framework and Product-Embedded Responsibility

Prativa Mohapatra from Adobe India introduced the “ART” framework—Accountability, Responsibility, and Transparency—as a practical governance approach. This framework is embedded throughout Adobe’s product development, ensuring responsible AI principles are built into products rather than added afterward.


Adobe’s Firefly generative AI embeds content credentials directly into generated content, enabling enterprises to use AI-generated materials confidently without intellectual property concerns. Adobe’s Acrobat Assistant maintains trust principles that have made PDF reliable, addressing concerns raised by India’s Supreme Court about legal documents containing references to non-existent cases through careful source validation.


Democratization Challenges and Collective Responsibility

A critical theme was the risk that responsible AI practices might become accessible only to large enterprises, creating disadvantages for MSMEs. While large companies can shift resources to accommodate AI compliance requirements, smaller organizations lack this flexibility.


Industry bodies like FICCI play crucial roles in democratization, serving as conduits for knowledge transfer and framework dissemination. The collaborative approach mirrors India’s UPI success, which required cooperation across multiple stakeholders to create an open, interoperable system.


Regulatory Landscape and Industry Response

The discussion revealed consensus that regulatory frameworks are inevitable and necessary for managing AI risks at scale. Kanwati’s observation about AI systems potentially declining all UPI transactions illustrated why regulatory safeguards are essential for critical infrastructure.


However, panelists viewed regulation as a catalyst for good practices rather than innovation constraints. Dr. Ramaswamy’s experience with international aviation regulations demonstrated that compliance with multiple frameworks can coexist with innovation and competitive advantage through proactive engagement rather than reactive compliance.


Open Standards and Interoperability

A recurring theme was the importance of open standards for responsible AI implementation. Parsons drew parallels between C2PA and India’s UPI infrastructure, both succeeding through collaborative development and open access. This philosophy extends to governance frameworks and best practices, enabling organizations to build upon shared foundations rather than isolated solutions.


Implementation Challenges and Practical Solutions

Despite consensus on principles, significant practical challenges remain. Consumer awareness of content authenticity standards is low, user interfaces for transparency features are evolving, and business cases for responsible AI investments can be difficult to justify.


The panelists proposed practical solutions: starting with lower accuracy but minimal false positives (NPCI), using AI to monitor AI systems (Air India), and creating industry-specific templates rather than universal solutions (RPG). These approaches acknowledge that responsible AI implementation must be pragmatic and context-sensitive.


Future Outlook

The session concluded with FICCI’s commitment to continue translating discussions into actionable frameworks. Sarika Guliani emphasized that responsibility is not merely compliance but a commitment to developing technology with shared human values.


The transition from principles to practice represents both challenge and opportunity for Corporate India. Organizations implementing responsible AI frameworks proactively will be better positioned for 2026’s regulatory environment, while those delaying may face competitive disadvantages. The discussion provided a roadmap emphasizing practical implementation over theoretical commitments and collective responsibility over individual compliance, ensuring India’s AI development serves broader societal goals while maintaining global competitive advantage.


Session transcript: Complete transcript of the session
Announcer

Welcome to this session titled, Responsible AI from Principles to Practice in Corporate India, presented by Adobe in association with FICCI. India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and productivity. But the real differentiator, is it about how quickly do we adopt AI? No. It’s about how responsibly we deploy it. Trust, transparency and accountability are no longer optional. They are foundational. And that’s exactly what we are going to be talking here today. The conversation will center on advancing safe and trusted AI in the corporate landscape. To set the context and to get us started, it’s my privilege to invite a guest. Andy Parsons, our Global Head for Content Authenticity at Adobe.

Andy, over to you.

Andy Parsons

Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Thank you. Thank you so much for having me. I know we’re against a tight time frame here, so I’m going to speak for just about five or six minutes and then turn it over to our wonderful panel where all the action will be. I’m Andy Parsons. I run the Content Authenticity Initiative at Adobe, and I’ll tell you a little bit about what we do. I think it’s a good example of how responsible AI can be adopted, promoted, and effective in enterprise. I want to start with a simple observation, and I promise not to talk too much about policy because I’m unqualified to do that.

I’m a mere engineer at Adobe. But 2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for innovation for every one of you in this room. This means it stops being a slide in a deck, and it will now be sort of a piece of our compliance strategy, but also, as I said, an important opportunity. The EU AI Act’s enforcement provisions take effect in August, as does the first law in the United States in California. And of course, we have the new IT rules here in India on SGI, and India is actively shaping its own path.

And it’s good to be here while that’s happening and talking to many of you about how AI and transparency can be effective in the very short term. So the question for everyone in this room has changed, I would say, this week and certainly will continue to change this year, from should we be responsible with AI? I think that debate is well settled at this point, but can your systems actually prove that you have been responsible with AI, and how do you go about doing that? And for all of us, what does it cost in terms of implementation and day-to-day usage? So we want to position responsible AI today as a leadership and operating must and discipline rather than a regulatory obligation, although in 2026, I think it will be both.

So this shift from principles to provable practice is the theme of our panel today. It is what I want to spend just a couple of minutes on. I won’t speak about this from a theoretical perspective. I’ll tell you a little bit about what my team at Adobe does and why I think it matters and perhaps sets an example for others. The trust crisis with AI is real and it’s concrete and it’s happening every day to our children and our businesses and ourselves. Anyone who consumes media, especially across a cultural vastness and disparate languages that we have here in India, is certainly well aware of this. Generative AI has done remarkable things for creativity. Of course, we live and breathe that at Adobe every day and you’ll hear more about that from my colleague Pratibha in a moment.

But it’s made the trust problem absolutely impossible to ignore. So I think every enterprise in this room now produces or consumes AI-generated content at scale, whether it’s marketing assets or news or customer communications, product imagery, et cetera. The volume is extraordinary, and it’s absolutely accelerating every day. The potential is huge if the risks are managed, which is what we’re here to talk about today. In fact, I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. India has the world’s largest digital population. Of course, you all know that.

Hundreds of millions of people consuming digital content every day. In this environment, synthetic content and AI-generated misinformation are not abstract risks. They’re real ones and operational ones for all of our businesses. So the corporate responsibility question is if you deploy AI that generates or modifies content, as almost all of you, I would imagine, do now or will shortly, can you demonstrate what was made, how it was made, and by which models and products? And that’s what my job at Adobe is. I’m very privileged to provide some leadership in the C2PA, which hopefully many of you have heard about this week, which is the Coalition for Content Provenance and Authenticity.

And our sort of piece of the responsible AI landscape is around transparency for content that’s generated by our tools, but also providing a global standard so anyone can do this without licensing, totally free. So perhaps this is a case study, and I’ll just spend my last couple of minutes on this. At Adobe, we decided five years ago now, around the time that I joined the team, that responsible AI via content transparency wasn’t a feature that could be grafted onto our products like Photoshop and Premiere, digital experience products, but had to be baked into the tools kind of at their very core. And we went about doing that. While we considered how to do it, we contemplated developing a global standard with partners like Microsoft, BBC, OpenAI, Sony, and many others.

And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse LinkedIn and see this symbol, you have encountered a content credential, and it will provide transparent context about a piece of media, whether it’s a video, audio, or image. And as we see adoption increase, we realize this is perhaps a model for AI adoption in a responsible manner based on open standards and interoperability. So we are built on an open standard. It’s truly a cross-industry coalition that includes all the companies I mentioned, Meta, camera manufacturers like Sony, Nikon, a growing number of media organizations, perhaps some in this room, and also silicon manufacturers like Qualcomm and others.

And the goal is for an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one company. It should be standards-based. It should not be proprietary, but available to everyone. And this shared philosophy, I think, is especially important here in India. And it should be conveyed by working code and products that leverage that working code, not theory and slides in a slide deck and statements on a website. So what are the principles that the Content Authenticity Initiative and the C2PA reflect? Transparency, provenance information travels with assets when you make them. You can build an entire genealogy tree to understand where your corporate or daily content comes from, what models made that content, what products were used, what cameras were used.

Simple ideas like knowing that a photograph is actually a photograph and not generated. These are simple ideas. I’d say we’re well overdue having this ability, but we need it now more than ever. Accountability. You can trace those AI models, understand what was used, how it was used, and thereby understand the provenance if you wish to access it. We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children. No one’s going to stop you from buying it or consuming it, but you have a right to know what’s in it, and we think that digital content has to have that same foundation of transparency.

Last, inclusivity. Perhaps this matters enormously for this room. Our standard is open and free. An independent creator here in India can apply the same kind of provenance at the same zero cost as a Fortune 500 enterprise. I won’t say that everything is roses and happiness. There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI transparency and responsible AI will present you, and I’m sure you’ll hear more about that with our esteemed panel. But adoption is uneven. Many social media platforms strip metadata and remove that transparency when content is uploaded. Legislation may help here. We’ll see. Consumer awareness is still very early. I saw many of you squint and look at this pin because you probably haven’t seen it.

Our hope is that you will see it ubiquitously across the world starting in 2026. And because consumer awareness is early, user interfaces are also quite early. But there are phones and cameras that are now beginning to show this symbol as a symbol of transparency. And the business case for provenance has been challenging. I’ve often said that doing something that helps perhaps preserve democracy and democratic discourse is maybe not a good way to make money. But it is critically important. And now we’re seeing that change as enterprises have to be compliant and have to be leaders when it comes to AI transparency and responsibility.

What’s changing, of course, is regulation. I view regulation, like what’s happening here in India, as a catalyst for good practices. We don’t want to be reactive, but we do want to help catalyze the responsible AI ecosystem. You need standards, not just principles. Responsible AI commitment on a website is a starting point, but not a meaningful milestone. And that’s really the difference that we’re here to talk about today, the difference between principle and practice. I think you need cross-industry infrastructure. Techniques you use for responsible AI should be interoperable, open, and standardized. And I think you have a long track record here in India of mobilizing things like UPI payment infrastructure that not a single bank could do or a single government agency, but required massive scale cooperation for openness, standards, and most importantly, interoperability.

And last, enterprises, like many of yours in this room, that are willing and excited to go first, that really look at transparency and AI responsibility as an opportunity, not a requirement and a consequence of AI. So let me bring this all together before I introduce our panel. The responsible AI conversation has matured, and now we have to move it to pragmatic implementation. It should favor fairness, accountability, transparency, privacy, inclusivity. We’ve all heard these words over and over again, and in 2026, I think it’s absolutely critical to put them to work and demonstrate that all of our enterprises are doing those things, and that’s the hard part. That’s going to look different depending on whether you’re operating aviation systems where safety is critical, running payments infrastructure at massive scale, governing AI across a conglomerate, or building creative enterprise tools as we do at Adobe, which is exactly why I carefully selected those industries because we have representatives from all of them on our excellent panel today.

So we’re fortunate to have leaders from Air India, NPCI, RPG Group, and Adobe, each of whom is navigating and translating the sort of remarks I’ve made in different ways for different outcomes, and providing, I would say, exemplary pathways through the challenges I mentioned. And we have a fantastic guide for this conversation, Shantari Malaya, editor at the Economic Times, who covers these infrastructure and societal breaking points every day in her coverage of our various industries. So I’m going to stop there. Thank you so much for having me. And let’s get on to our panel. Shantari.

Shantari Malaya

Thank you so much, Andy. That was fantastic; it set the context just right for the discussion coming up. So a warm welcome once again from my end. My name is Shantari Malaya, and I’m an editor at the Economic Times, welcoming you all to this panel right at the heart of the AI Impact Summit. It’s been spectacular. And to take this discussion forward, we are looking at responsible AI at a very momentous time in India. India is really charting the course for the world. And as we look at building trustworthy and inclusive AI across industry, infrastructure, and policy perspectives, it becomes important to know what some of the country’s premier leaders are thinking about this. So this dialogue will examine responsible AI and how it is going to shape, and reshape, enterprise strategy.

So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology Officer at Air India Limited. Warm welcome, Satya. We have Mr. Vishal Anand Kanwati, Chief Technology Officer of the National Payments Corporation of India, NPCI. We also have Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation at the RPG Group. And I have the pleasure of inviting Prativa Mohapatra, Vice President and Managing Director of Adobe India. This is a fantastic lineup, and we shall get some very sharp insights from our panelists over the next half hour or so. At the very outset: building trustworthy and inclusive AI is all going to be about how responsible AI principles, whether fairness, accountability, transparency, privacy, or inclusivity, are realistically translated into enterprise strategy frameworks, and how we go about that.

Right. So Amol, let me call you into the discussion. Warm welcome. I’m keeping a tight watch on the timer right behind us; as we know, this is a summit of scale, and we really need to help the organizers keep good time. So Amol, very quickly: as part of the RPG Group, you represent enterprises that are deploying AI at scale, and many more that are just finding their footing, so there’s a huge spectrum and diversity in the kinds of organizations you represent. Two things for you, very quickly. One: in a large multi-business group such as RPG, how are you preventing responsible AI from becoming a mere centralized compliance exercise, or, on the flip side, a fragmented, business-unit-wise checklist?

So there are two risks in a conglomerate at this scale: over-centralization or fragmentation. How are you looking at the balance here? And how do you see your role in an industry body as well? All yours.

Amol Deshpande

Thank you, Shantari. I’m very happy to be here with these esteemed panelists. Thank you for having me. You ask a very pertinent question. It’s a to-be-or-not-to-be kind of scenario when it comes to AI, but "not to be" is not really a choice. Andy mentioned responsible AI, and I would take a stab at peeling back where responsible AI comes from when it comes to industries. It runs across all five layers of the AI stack. When any enterprise is deploying AI and being responsible for the usage of those elements, whatever you are using them for, the responsibility needs to be there at every layer.

It’s not one layer or the other; it has to be an orchestration of all of them. So far, AI in its nascent forms had been a thing of centres of excellence, trying use cases and seeing what happens, but now it has come to scale. And when it comes to enterprises, and manufacturing enterprises in particular, where we have a significantly higher share as a consumer of AI technologies, there is a very clear-cut view on how it is to be done. You need to provide the playground for the enterprise; it has to operate and function with agility. The other part is people. People are a very, very important stakeholder in the whole thing.

We are moving from generative AI to more complex AI/ML scenarios and agentic AI, so people are a very, very important aspect of it. The choice still remains with humans. Awareness is important, and enterprises like us spend a significant amount of time and effort building those skill sets across the value chain of all the people involved. Last but not least is the process and governance part that comes with it. It’s more about guiding principles, which need to be given so that they create opportunity. If one can put it that way, it will be more of a bring-your-own-AI kind of scenario in every function.

You cannot provide one solution; one size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment protected with guardrails is a key thing for us, along with orchestration and getting to scale. Those templates are being exercised and practiced within a very diverse group like RPG ourselves, and then they can be deployed at multiple levels.

Shantari Malaya

Absolutely. As you said, one size doesn’t fit all, and I liked your coinage of bring-your-own-AI. So let me quickly bring in Prativa here. Welcome, Prativa. You may just have to switch that on. Adobe has consistently positioned responsible AI as a product philosophy and a trust commitment, and has constantly spoken about it as a fairly large commitment. How do you see all these principles manifesting, in terms of operationalizing them across all your product teams and sending that out as a strong positioning internally? And at the same time, as someone who has led a lot of industry conversations, where do you see enterprises struggling to get these things right?

So all yours here.

Prativa Mohapatra

Okay, thank you. Andy set the context, and since we are here not to learn about the principles but the practices, I think everybody should go back with certain practices. The first practice of AI governance which we follow is ART: accountability, responsibility, and transparency. So if every person goes back to their organization and talks about ART, which is our philosophy, that’s practicing philosophy number one. We have actually been doing this for our own products for a while now. Of course, we have been in the business of content for a very, very long time, and now that same content is becoming the currency everybody is debating. So our principles have been there for a while; the question is how they are actualized.

And let me say how that translates into our products. And by the way, it’s in our products and in our methodologies: every new product we have goes through a very strong, secure methodology with hundreds of steps inside it, so the principles are embedded into how we create things. But a couple of examples. Firefly, which is our generative AI tool, actually embeds what Andy described: Content Credentials, like a nutrition label. Anything generated out of this product will carry that nutrition label. So when an enterprise is using something in Firefly, you can be super confident that you will not be violating any law and you will not be getting into any liability issues. And how you do it is by what you feed in.

Because AI is all about the input and the output, the input has to be something which will not land you in trouble. You cannot take somebody else’s data; here, everything is licensed. That goes into the models, and then you have to test the output which comes out: with that output, will we be accountable? Will we be responsible in showing the transparency of how this was created? I think that loop has to be created in using any AI, and Firefly is one example. Let me talk about Acrobat, which everybody has; I’m sure 100% of you have PDF files on your phones or on your machines. Acrobat has this new feature called Acrobat Assistant.

It is agentic, and there are so many chatbots in the market, but when you come to an assistant like Acrobat Assistant, it follows the same principles with which PDF itself was created, so everybody is confident when using it. You would have read in the papers recently that the Supreme Court was very worried that certain lawyers had filed petitions which referenced cases that do not exist, or which stated laws that are fictitious. So imagine somebody has created content using sources which were not authentic. Now, if you use Acrobat-like products for that, you feed the data, or files from your own machine.

So you’re confident that what comes out of it, you can trace back. Wherever there is usage of high-stakes, enterprise-grade output, you have to look at this input-output process and follow the philosophies within it. Amol already talked about people, process, and technology. I’m sure every organization today has a legal team and a compliance team, but these teams have to retool to talk about AI compliance. Enterprises do business strategy, they have ethical strategies, and then they have regulatory compliance. For anything that you do in AI, ensure that you tick all three; if you miss any one, you might not be ready for the future.

So that’s how I see it.

Shantari Malaya

Absolutely. I guess the thread running through most of what AI now entails is: are we not moving fast enough, or are we moving so fast that we’re not really able to own the operational consequences of what we’re setting out to do? Great point on that, Prativa; I’ll circle back to you, time permitting. Dr. Satya, calling you in here. Aviation volumes, landscape, scale, you name it, it’s all there. So how are you balancing AI-driven innovation against regulation, accountability, and operational efficiency, while at the same time not compromising on user and customer experience?

How do these things fall into place in terms of vision and metrics?

Dr. Satya Ramaswamy

Thank you, Shantari. Since the audience is international, a real quick introduction to Air India. Air India is India’s national flag carrier. We operate about 300 aircraft and carry more than 100,000 customers a day, and we have a few hundred airplanes on order. Once they are delivered, we will be one of the biggest airlines in the world, on the scale of one of the three large American carriers. So we are building it up to an airline of scale, and that brings about the very interesting challenges we talked about. Let me illustrate the way we handle them with one of our own examples in generative AI. In May of 2023, we launched the global airline industry’s very first generative AI virtual assistant out of India.

It was a global first in the whole airline industry. It has handled about 13.5 million queries from customers so far, about 40,000 queries a day, and it operates at roughly one-hundredth the per-query cost of a contact center. If you look at customer preferences over the last two and a half years that we have been operating it, facing all the challenges you mentioned: 50% of the contact volume goes to the contact center, where customers want to talk to a human agent; the remaining 50% comes to AI.g, which handles 97% of those queries autonomously, with only 3% escalated further to an agent. That is a pretty high success rate. And we faced this challenge from day one. We started working on it in November 2022, when the Azure OpenAI services became available, and the whole approach to responsibility and safety has evolved over time. If you dial the safety knob too far, it becomes an inconvenience to the customer; we practically cannot answer any question, because customers are always changing the way they ask things, and we have to be very flexible. Clearly, generative AI takes us a large step towards that.

At the same time, we don’t want any jailbreak to happen, we don’t want prompt injection to happen, we don’t want anything inappropriate to happen, so we are watching the performance of the virtual assistant, AI.g as we call it, all the time. In fact, we use generative AI to watch the performance of the generative AI chatbot. We have also given a voice to the customer: at the end of the day, when we send a response, we ask the customer whether it answered their question, and allow them to react, appropriate or inappropriate. Thankfully, over the last two and a half years, it has not answered a single question in an inappropriate fashion, because we have embedded all the safety procedures deep into the way we handle it. And now, as the technologies mature, we have interesting capabilities such as prompt firewalls, where we can centralize all these controls, and we obviously work with great partners like Adobe, who do their diligence in the way they have deployed some of these technologies, giving full indemnity to us in the event of a problem.

That gives a lot of confidence in the way we manage the risk. So it’s about managing the risk of something out of bounds happening versus the convenience of the customer, and we handle it in a variety of ways, as I just described.
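The monitoring loop Dr. Satya describes, a second generative model watching each reply plus a customer feedback prompt, with borderline cases escalated to a human agent, can be sketched as a simple routing rule. This is an editorial illustration, not Air India's implementation; `watcher_score`, the threshold, and the sample batch are all assumed names and values:

```python
def route_reply(watcher_score: float, customer_says_inappropriate: bool,
                safety_threshold: float = 0.8) -> str:
    """Escalate a chatbot reply to a human agent when either the watcher
    model scores it below the safety threshold or the customer flags it;
    otherwise let the assistant handle the query autonomously."""
    if customer_says_inappropriate:
        return "escalate"  # the customer's voice always wins
    if watcher_score < safety_threshold:
        return "escalate"  # the watcher model is unsure the reply is safe
    return "autonomous"

# Illustrative batch: most replies judged safe, one borderline, one flagged.
batch = [(0.95, False), (0.99, False), (0.60, False), (0.92, True), (0.97, False)]
escalated = sum(route_reply(score, flagged) == "escalate" for score, flagged in batch)
```

Raising `safety_threshold` is the "safety knob": more escalations at the cost of customer convenience, which is exactly the trade-off he describes.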

Shantari Malaya

Excellent. And given the kind of scale you’re operating at, I think every day is a new day.

Dr. Satya Ramaswamy

Yes, it is. We face challenges; there is something brand new every day.

Shantari Malaya

Absolutely well stated. Thank you, Dr. Satya. Vishal, may I call you into the discussion? We’re waiting to hear from you. NPCI runs the country’s largest digital payments infrastructure platforms; you call the shots, for want of a better coinage, in terms of how the payment systems in this country move. So two quick questions, or rather, let me phrase them as one so we can get a comprehensive view from you. How are you looking at AI in terms of being inclusive and ensuring fairness, in two parts?

One: how can India play an important part in creating responsible-AI-by-design for a national digital infrastructure platform such as yours? And two: given the volume, scale, and size, fraud unfortunately becomes part of the entire discussion. So how are you looking at AI being fair and at the same time proactive and detective when it comes to fraud? What are the aspects you watch keenly here?

Vishal Anand Kanwati

I think we had to start slowly, accepting that accuracy could be a little lower, but the false-positive rate, a genuine transaction being tagged as fraud, should not be very high. That was the first principle on which we started. Over a period of time, once we had more data and once we collaborated with the industry and ecosystem, we saw that we were able to achieve higher accuracy. So this was the fundamental principle, and once we started with this success, we were able to understand customers and their patterns better, and that gave us a lot of insights into fine-tuning the models and taking it forward.

Absolutely. Coming to the first question that you asked: the governance principles are obviously core to it, but I would like to call out two things. One is transparency. If a customer has a transaction that has failed, they should know why it failed. Today we have built a small language model where you can actually chat and ask what happened to a transaction and why it was declined. Even if it was declined due to suspicious activity, we can tell the customer: you normally don’t send this kind of transaction, or you have never scanned a QR code before and this is the first time, so this is why we declined it. This level of transparency, and building systems to answer those questions, is very, very important; obviously we can’t have an army of people sitting and answering them. And I think we have a beautiful framework: the RBI has given a framework for responsible AI, and the MeitY document is fairly comprehensive. All the principles have to be adopted; there is absolutely no choice for us. And I don’t see it as a challenge at all, because in our experience it has been very helpful in ensuring the trust in the payment system is not compromised.
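The trade-off described above, starting with lower fraud-catch accuracy so that genuine transactions are rarely tagged as fraud, amounts to choosing the decision threshold that caps the false-positive rate. A minimal sketch, with purely illustrative scores and a hypothetical 1% cap:

```python
def pick_threshold(genuine_scores, fraud_scores, max_fpr=0.01):
    """Return the lowest risk-score threshold whose false-positive rate
    (share of genuine transactions tagged as fraud) stays within max_fpr."""
    candidates = sorted(set(genuine_scores) | set(fraud_scores), reverse=True)
    best = None
    for t in candidates:
        fpr = sum(s >= t for s in genuine_scores) / len(genuine_scores)
        if fpr <= max_fpr:
            best = t  # lowest threshold so far that respects the cap
        else:
            break     # lowering the threshold further only raises the FPR
    return best

# Illustrative scores from a hypothetical risk model (higher = riskier).
genuine = [0.01, 0.02, 0.05, 0.10, 0.20] * 20   # 100 genuine transactions
fraud = [0.40, 0.70, 0.80, 0.90, 0.95]
threshold = pick_threshold(genuine, fraud, max_fpr=0.01)
caught = sum(s >= threshold for s in fraud) / len(fraud)
```

As more data arrives and the score distributions separate, the same cap permits a lower threshold and hence higher fraud detection, which mirrors the accuracy improvement he describes.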

Shantari Malaya

Absolutely. And given the scale you’re operating at, I’m itching to ask you about the human in the loop, but maybe I’ll pick your brains offline; that’s a discussion for another time. So Prativa, curious to know: while responsible AI remains there in letter and spirit, do you think it risks getting relegated to something of a large-enterprise luxury? Is it able to cut across and come down the line to aspiring businesses, growing businesses, MSMEs? Is it able to cut through the noise? What’s the responsibility of industry leaders and the larger enterprises in harmonizing a framework that can define responsible AI? How would you look at this?

Is it at risk?

Prativa Mohapatra

Absolutely, yes. I think we stand at a time when, among the creators of AI technology, the divide between the big guys and the small guys might just become very stark. Coming down to the users of AI, the big enterprises versus the MSMEs who are in a big rush to make a profit and do something, that divide can happen too. Hence, it is the responsibility of all enterprises to be responsible for creating those responsible AI frameworks. It’s a tongue twister, but the responsibility is very, very big right now. Again, to take Adobe’s example: while the entire AI big bang started happening after November 2022, so early 2023-2024, our models, and our Content Authenticity Initiative, date from 2019.

So I think that large enterprises who create technologies are absolutely responsible, and those frameworks now being taken up by many more is again an act of responsibility back to the business. The creators of these technologies have to come together and keep creating these methods and methodologies for others to adopt. Now, the users of these enterprise-grade AI technologies: it’s very hard. Ten years back we had digital transformation; now we are having AI transformation. So the big companies have to quickly create a new org structure, and their legal teams, which, by the way, had just finished mulling over the digital guidelines of various continents and countries, now have to go through the AI guidelines of countries.

So you have to infuse more people into those legal teams, and small organizations cannot do that. The people, process, and technology changes required to adopt this, the big guys can maneuver: shift people, take people out from here, put them there. The MSMEs don’t have that luxury. So the creators have to create frameworks so that the right technology is created; the users, the big guys, have to quickly share the methodology; and the other stakeholders, like the service providers, also have to move quickly. I come from an industry where we went from custom software to ERPs to digital transformation and now AI transformation. So how do you do this transformation?

Because a technology on its own has no meaning unless there is a context behind it. I think all of that has to come together. And over and above this, since we have been hearing that AI is a civilizational change, similar to electricity and steam, and will change everything, and because of its impact at the societal level on each one of us, governments have a big role here as well. So all of it has to come together to ensure that enterprises and society move in tandem.

Shantari Malaya

Absolutely. Very rightly stated that there is a larger collective responsibility on the bigger players to define the standards; I think it’s very critical. Amol, if I may ask you: as Prativa said, there is an accountability. MSMEs are growing, and there is also policy that supports that growth at this point in time. So in their hurry to scale and innovate, they often forget what guardrails they need and what consequences they may face when it comes to their AI policies, strategies, and implementations. So what’s the role of the industry, industry bodies, and the entire ecosystem at large in helping responsible AI move in letter and spirit?

Amol Deshpande

Shantari, I think the first step towards being responsible about anything is awareness, right? So first, as any part of the ecosystem, we need to be aware that this is our responsibility and that we are accountable for it. That’s the first thing. Second comes the action part: awareness, then action, and then you demonstrate it through your products and services and generate that kind of impact. How does that percolate? I echo the sentiment Prativa mentioned here: the big players have to come up with those frameworks. Those frameworks need to get translated, and industry partnership, through the industry bodies, is a very key thing here.

The learnings have to be disseminated through them. Second, it’s more of a demand-and-supply kind of thing: if the supply comes with the right guardrails and responsible aspects as part of the framework, then suppliers naturally start aligning to it. For a business like ours, which ranges from infrastructure to healthcare, and IT to agriculture and tires, it is a very diverse set, and there are different kinds of templates we need. Organizations like us have the responsibility of creating a framework that will be fair to us as well as to our customers and partners, and those learnings can be shared across the value chain, where MSMEs may not have access to that kind of information, and domain-specific information at that.

Mind you, this will change. It’s not that one guardrail construct will work for everybody; it will vary from industry to industry and function to function. And that kind of cascading through industry bodies like FICCI and others is, I think, very, very critical.

Shantari Malaya

Thank you for that, Amol. Dr. Satya, as we know, there are now plenty of global regulations, rules, and recommendations: the EU AI Act, UNESCO’s recommendations, the OECD principles, and so on. And India is also inching towards developing its own strategies, policies, and approaches at this point in time. So the real leadership question that remains is: how do we marry global best practices with the diversity, the scale, and the fire in the belly that India has right now? We are really raring to go. And besides, of course, we have a lot of domestic, industry-wise regulation as well; we have regulators, we even have the DPDP Act. How are we going to marry all of this and create harmony?

Dr. Satya Ramaswamy

Absolutely. Taking Air India: we are an international airline, so we operate in many countries. For example, we fly to North America, where in the US the FAA is the key regulator; we fly all over Europe; and obviously we operate in India, where the DGCA is the regulator, and they are doing a great job overseeing this industry; and likewise in other parts of the world. So by nature we are geared to looking at the regulation in all parts of the world and being in compliance.

Our customers are international, and when we operate in these international geographies, we have to comply with the appropriate regulations. And again, aviation is a very safety-critical industry, right? What we do has a direct impact on the safety of the customers we carry, and these notions are well embedded in the industry because it is highly regulated. For example, many of these planes can practically land themselves. I was in a simulator last week for an Airbus A320, landing the plane at San Francisco airport. As we were coming in, the plane was set up at seven miles from touchdown, and my trainer pilot gave me the controls.

So I could run the plane practically on autopilot all the way. At the same time, there is a red button on the joystick, so at any moment, if I feel that the airplane is not doing the right thing and the autopilot is not behaving correctly, I can quickly cancel and take back control, right? This concept is well embedded in the airline industry. We know that we need to obey the regulations and do the right thing that is safe for the customer, but we also let the human in the loop take control if at any moment the safety is felt to be at risk. So bottom line: we comply with all the regulations, and it doesn’t in any way constrain Indian innovation.

For example, as I mentioned, we launched the global airline industry’s first generative AI virtual agent out of India, and we have not had any challenge with any of the surrounding regulations, because we comply with all of them and we work with partners who approach it the same way.

Shantari Malaya

So, Vishal, taking a thread from what Dr. Satya said, I’ll close with you here. Is industry-led governance realistically possible, or is regulatory intervention an inevitability? I did speak to people from other forums in related industries who said it’s very difficult to answer this, but that self-regulation may be a way forward given the scale we are operating at. I’d like to know your thoughts here.

Vishal Anand Kanwati

Yeah, I think the regulations are definitely required, especially because AI can go berserk. Like the example I gave you on transactions: in theory, all the UPI transactions could get declined. That’s where we have a check which says this is the only percentage I can decline, even if I have to let go of the other transactions. So those safeguards are very much required, and when this has to work across the ecosystem, regulations are mandatory. Obviously it has to be consultative, and we have to work with everyone. But it’s important.

While all of us realize it’s a great opportunity and the innovation can really scale up, I think regulation is something we have to take on as part of our initiatives, embed into our systems, and then take forward; otherwise the chances of this becoming a challenge for us are really high.
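The safeguard Vishal mentions, a hard cap on the share of transactions the system may decline even if some fraud gets through, can be sketched as a circuit breaker around the fraud model. The 5% cap and 100-decision window here are illustrative assumptions, not NPCI's actual parameters:

```python
from collections import deque

class DeclineRateGuard:
    """Circuit breaker around a fraud model: honor the model's decline only
    while the rolling decline rate over the last `window` decisions stays
    under `max_rate`; beyond that, let transactions through even at the
    cost of missing some fraud."""
    def __init__(self, max_rate=0.05, window=100):
        self.max_rate = max_rate
        self.history = deque(maxlen=window)  # True = we declined

    def decide(self, model_wants_decline: bool) -> str:
        rate = sum(self.history) / len(self.history) if self.history else 0.0
        declined = model_wants_decline and rate < self.max_rate
        self.history.append(declined)
        return "DECLINE" if declined else "APPROVE"

guard = DeclineRateGuard(max_rate=0.05, window=100)
# A runaway model that wants to decline every single transaction:
outcomes = [guard.decide(True) for _ in range(1000)]
declines = outcomes.count("DECLINE")  # held near the cap, never 100%
```

The point of the design is that even a misbehaving model cannot take the payment rail down: the guard bounds the blast radius while the model is investigated.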

Shantari Malaya

Fair enough. I think in an economy that is maturing, regulatory intervention is an inevitability that we must welcome at some level. So, a great discussion; this was fantastic. I’m itching to ask you more, but I think we’ll have to call this discussion to a close. Thank you so much; let’s put our hands together for our esteemed panelists. This was really nice. May I now invite Ms. Sarika Guliani, Senior Director and Head of AI Technology in Industry 4.0, Communications, Mobile Manufacturing and Language Technologies at FICCI. Thank you so much, Sarika, please.

Sarika Guliani

First of all, what an insightful session. I would say that if I start capturing the thoughts, two minutes and thirty-six seconds would not do them justice. But overall: starting with Andy, you mentioned the initiatives Adobe has been able to take through, and how you are responsibly developing content. Prativa spoke about ART, which is really interesting: accountability, responsibility, and transparency. Amol mentioned all five layers and how the responsible development of AI needs to be carried out across them. Dr. Satya, no second thoughts on it, and the same goes for NPCI: the kind of work the national carrier of India is doing, and that NPCI is handling, has to balance responsible AI, efficiency, and the action that can actually be taken. We left it on the question of what regulation is required; that sentence alone would require another session, because there would be people arguing for light-touch regulation versus balanced regulation. As part of FICCI, and from the discussions we heard here, we feel that responsibility is no longer a compliance check; it is a commitment to developing technology that carries shared human values. The decisions we take now, not merely the words we are discussing here, will define our future; what we choose to create is what will define us. So the choice comes out of this whole thing. You have heard the panelists talk about it, from the input side to the output side, with very good examples taking it through the whole process. So we simply feel that, at every layer, technology has to be developed keeping people in mind, and the theme of the summit, people, planet, and progress, should be kept in mind while doing any technological innovation, keeping the principles of responsible AI in mind.

That is something we strongly feel and support, and with that I would like to thank our esteemed panelists. Thank you, Andy, of course, for joining us today; Shantari, for moderating and capturing it well in time; and of course our panelists: Dr. Satya Ramaswamy, Chief Digital and Technology Officer, Air India; Mr. Amol Deshpande, Group Chief Digital Officer and Head of Innovation, RPG Group; Mr. Vishal Kanwati, CTO, NPCI; and Ms. Prativa Mohapatra, Vice President and Managing Director, Adobe India. And of course my lovely audience, with whom we could sail through this. As part of FICCI, we are thankful that we could have this joint session with Adobe, the team at Adobe, and Nita and Nanya, who worked with my team, and the people in the background who delivered it.

So thank you all for joining us. We do not end the discussion here; we only end the session here. FICCI is committed to taking this dialogue further into action with the support of the players, and we look forward to your joining us then. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Andy Parsons
8 arguments · 191 words per minute · 2021 words · 632 seconds
Argument 1
2026 will mark the shift from AI principles to provable practice with regulatory enforcement
EXPLANATION
Andy argues that 2026 represents a critical turning point where responsible AI will transition from being theoretical principles to mandatory, demonstrable practices. This shift is driven by upcoming regulatory enforcement including the EU AI Act and new laws in California and India.
EVIDENCE
The EU AI Act’s enforcement provisions take effect in August 2026, as does the first law in the United States in California. India has new IT rules on SGI and is actively shaping its own path.
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
Argument 2
The question has changed from “should we be responsible with AI?” to “can your systems prove you’ve been responsible?”
EXPLANATION
Andy emphasizes that the debate about whether to be responsible with AI is settled, and the focus has shifted to demonstrating and proving responsible AI practices through actual systems and processes. This represents a move from philosophical discussion to practical implementation.
EVIDENCE
He mentions that responsible AI stops being a slide in a deck and becomes part of compliance strategy and opportunity for innovation.
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
AGREED WITH
Prativa Mohapatra, Amol Deshpande
Argument 3
Responsible AI commitments on websites are starting points, not meaningful milestones – need standards and working code
EXPLANATION
Andy argues that simply having responsible AI statements on company websites is insufficient and represents only the beginning of the journey. Real progress requires actual working standards, code, and products that implement these principles rather than theoretical commitments.
EVIDENCE
He contrasts this with Adobe’s approach of building working code and products that leverage that code, not theory and slides in a slide deck and statements on a website.
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
AGREED WITH
Prativa Mohapatra, Amol Deshpande
Argument 4
Adobe’s Content Authenticity Initiative provides transparency for AI-generated content through C2PA standards
EXPLANATION
Andy explains that Adobe developed the Content Authenticity Initiative and C2PA (Coalition for Content Provenance and Authenticity) standards to provide transparency about how content is created. This initiative creates a global standard for content credentials that travels with digital assets.
EVIDENCE
Five years of development with partners like Microsoft, BBC, OpenAI, Sony, and others. The C2PA content credentials are now visible on platforms like LinkedIn with a specific symbol.
MAJOR DISCUSSION POINT
Content Authenticity and Transparency Standards
AGREED WITH
Prativa Mohapatra, Dr. Satya Ramaswamy, Vishal Anand Kanwati
Argument 5
Content credentials act like “nutrition labels” for digital content, allowing people to know what’s in their media
EXPLANATION
Andy uses the analogy of food nutrition labels to explain how content credentials should work for digital media. Just as consumers can check food ingredients to make informed decisions about what’s healthy for their children, people should have the right to know how digital content was created.
EVIDENCE
He references PM Modi’s mention of nutrition labels in his remarks and draws the parallel that if you can pick up food in a store and know what’s in it, digital content should have the same transparency foundation.
MAJOR DISCUSSION POINT
Content Authenticity and Transparency Standards
Argument 6
Adobe’s open standard approach ensures independent creators can apply the same provenance at zero cost as Fortune 500 enterprises
EXPLANATION
Andy emphasizes that Adobe’s C2PA standard is designed to be inclusive and democratized, ensuring that small independent creators have access to the same content authenticity tools as large corporations. This prevents the technology from becoming exclusive to big enterprises.
EVIDENCE
The standard is open and free, with no licensing costs, making it accessible to independent creators in India at the same zero cost as Fortune 500 enterprises.
MAJOR DISCUSSION POINT
Democratization and Accessibility of Responsible AI
AGREED WITH
Amol Deshpande
Argument 7
Responsible AI techniques should be interoperable, open, and standardized like India’s UPI payment infrastructure
EXPLANATION
Andy draws a parallel between responsible AI implementation and India’s successful UPI payment system, arguing that AI responsibility requires the same kind of massive scale cooperation, openness, and interoperability that no single organization could achieve alone.
EVIDENCE
He specifically references India’s track record with UPI payment infrastructure that required cooperation across banks and government agencies, emphasizing standards and interoperability.
MAJOR DISCUSSION POINT
Cross-Industry Infrastructure and Standards
AGREED WITH
Amol Deshpande
Argument 8
Regulation should be viewed as a catalyst for good practices rather than just reactive compliance
EXPLANATION
Andy positions regulation not as a burden but as a positive force that catalyzes responsible AI practices across the industry. He advocates for being proactive rather than reactive in implementing responsible AI measures.
EVIDENCE
He mentions viewing regulation like what’s happening in India as a catalyst for good practices, emphasizing the need for standards rather than just principles.
MAJOR DISCUSSION POINT
Regulatory Framework and Industry Self-Governance
AGREED WITH
Vishal Anand Kanwati, Dr. Satya Ramaswamy
Amol Deshpande
4 arguments · 180 words per minute · 758 words · 251 seconds
Argument 1
Moving from generative AI to more complex scenarios and agentic AI requires orchestrated responsibility across all layers
EXPLANATION
Amol argues that as AI evolves from simple generative applications to more complex agentic systems, responsible AI implementation must be orchestrated across all five layers of AI deployment. He emphasizes that responsibility cannot be applied to just one layer but must be comprehensive.
EVIDENCE
He mentions the evolution from generative AI to AI ML and more complex scenarios, noting that enterprises like RPG have a significantly higher share as consumers of AI technologies in manufacturing.
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
AGREED WITH
Andy Parsons, Prativa Mohapatra
Argument 2
In conglomerates, responsible AI requires providing scalable, safe environments with guardrails rather than one-size-fits-all solutions
EXPLANATION
Amol explains that large diverse organizations like RPG Group cannot implement a single AI solution across all functions and industries. Instead, they need to create scalable frameworks with appropriate guardrails that can be adapted to different business units and use cases.
EVIDENCE
He describes RPG’s approach of creating a ‘bring your own AI’ scenario where each function can operate with agility within protected environments, noting their diverse portfolio from infrastructure to healthcare, IT to agriculture, and tires.
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
AGREED WITH
Andy Parsons
DISAGREED WITH
Prativa Mohapatra
Argument 3
Industry partnerships through bodies like FICCI are critical for disseminating learnings to organizations without access to such information
EXPLANATION
Amol emphasizes the importance of industry bodies in sharing responsible AI frameworks and learnings, particularly to help smaller organizations that may not have access to the same resources as large conglomerates. He sees this as a supply and demand issue where proper frameworks need to be made available.
EVIDENCE
He mentions that organizations like RPG, which deal across diverse sectors, have the responsibility to create frameworks and share learnings through industry bodies like FICCI, especially for domain-specific applications.
MAJOR DISCUSSION POINT
Democratization and Accessibility of Responsible AI
AGREED WITH
Prativa Mohapatra
Argument 4
Different industries require different templates and guardrails, varying from function to function
EXPLANATION
Amol argues that responsible AI implementation cannot be uniform across all industries and functions. Each sector and business function requires customized approaches and specific guardrails based on their unique requirements and risk profiles.
EVIDENCE
He provides examples from RPG’s diverse portfolio spanning infrastructure, healthcare, IT, agriculture, and tires, noting that each requires different templates and that learnings need to be domain-specific.
MAJOR DISCUSSION POINT
Cross-Industry Infrastructure and Standards
Prativa Mohapatra
6 arguments · 155 words per minute · 1118 words · 432 seconds
Argument 1
Enterprises need legal teams and compliance teams to re-adapt for AI compliance, covering business strategy, ethical strategies, and regulatory compliance
EXPLANATION
Prativa argues that organizations must restructure their legal and compliance functions to handle AI-specific requirements. She emphasizes that enterprises need to ensure their AI initiatives align with business strategy, ethical considerations, and regulatory requirements simultaneously.
EVIDENCE
She mentions that enterprises already have legal and compliance teams, but these teams need to re-adapt to handle AI compliance across different continents and countries, requiring additional people and resources.
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
AGREED WITH
Andy Parsons, Amol Deshpande
DISAGREED WITH
Amol Deshpande
Argument 2
Adobe practices “ART” philosophy – Accountability, Responsibility, and Transparency – embedded in product methodologies
EXPLANATION
Prativa introduces Adobe’s “ART” framework as a practical approach to responsible AI that goes beyond principles to actual implementation in products. This philosophy is embedded into their product development methodologies with hundreds of steps in their secure development process.
EVIDENCE
She explains that every new Adobe product goes through a strong, secure methodology with hundreds of steps, and provides examples of how this is implemented in Firefly and Acrobat products.
MAJOR DISCUSSION POINT
Transparency and Accountability in AI Systems
AGREED WITH
Andy Parsons, Dr. Satya Ramaswamy, Vishal Anand Kanwati
Argument 3
Firefly embeds content credentials so enterprises can be confident they won’t violate laws or face liability issues
EXPLANATION
Prativa explains how Adobe’s Firefly generative AI tool implements responsible AI by embedding content credentials and using only licensed content for training. This ensures that enterprises using the tool won’t face legal issues or liability problems.
EVIDENCE
She describes the input-output loop where Firefly uses only licensed content for training models, and the output includes nutrition labels showing transparency about how content was created.
MAJOR DISCUSSION POINT
Content Authenticity and Transparency Standards
Argument 4
Acrobat Assistant follows the same trust principles as PDF creation, ensuring authentic source material
EXPLANATION
Prativa uses Acrobat Assistant as an example of how AI tools can maintain the same trust levels as their underlying platforms. She contrasts this with problems like lawyers using AI-generated content with fictitious legal references.
EVIDENCE
She references recent Supreme Court concerns about lawyers submitting petitions with references to non-existent cases or fictitious laws, contrasting this with Acrobat Assistant which uses files from users’ own machines for reliable sourcing.
MAJOR DISCUSSION POINT
Content Authenticity and Transparency Standards
Argument 5
Large enterprises creating AI technologies are responsible for developing frameworks that smaller organizations can adopt
EXPLANATION
Prativa argues that there’s a risk of creating a divide between large enterprises and smaller organizations in responsible AI adoption. She emphasizes that technology creators have a responsibility to develop frameworks that can be adopted by organizations of all sizes.
EVIDENCE
She mentions Adobe’s content authentication initiative starting in 2019, before the AI boom, and how large enterprises must create methodologies that others can adopt, noting the resource constraints of MSMEs.
MAJOR DISCUSSION POINT
Democratization and Accessibility of Responsible AI
AGREED WITH
Amol Deshpande
Argument 6
MSMEs lack the luxury of shifting resources like large companies, requiring creators and big users to provide methodologies
EXPLANATION
Prativa highlights the resource constraints faced by small and medium enterprises in implementing responsible AI practices. Unlike large companies that can reallocate people and resources, MSMEs need ready-made frameworks and methodologies from larger organizations.
EVIDENCE
She compares the evolution from digital transformation to AI transformation, noting that while big companies can shift people between teams and create new organizational structures, MSMEs don’t have that flexibility.
MAJOR DISCUSSION POINT
Democratization and Accessibility of Responsible AI
AGREED WITH
Amol Deshpande
Dr. Satya Ramaswamy
4 arguments · 187 words per minute · 1064 words · 340 seconds
Argument 1
Air India’s generative AI virtual assistant handles 97% of queries autonomously while maintaining safety through continuous monitoring
EXPLANATION
Dr. Satya describes Air India’s successful implementation of the airline industry’s first generative AI virtual assistant, which demonstrates how responsible AI can be deployed at scale while maintaining high performance and safety standards through continuous monitoring and safeguards.
EVIDENCE
The assistant has handled 13.5 million queries, processes 40,000 queries daily, operates at 1/100th the cost of contact centers, with 50% of customers preferring it over human agents, and maintains a 97% autonomous resolution rate.
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
Argument 2
Aviation industry’s safety-critical nature provides embedded concepts of human-in-the-loop control and regulatory compliance
EXPLANATION
Dr. Satya explains how the aviation industry’s inherent focus on safety provides a natural framework for responsible AI implementation. The industry’s existing regulatory compliance culture and human-in-the-loop concepts translate well to AI governance.
EVIDENCE
He provides the example of aircraft autopilot systems where planes can land themselves, but pilots have a red button to immediately take control if needed, demonstrating embedded safety concepts that apply to AI systems.
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
Argument 3
Air India uses generative AI to monitor the performance of their generative AI chatbot, with customer feedback mechanisms
EXPLANATION
Dr. Satya describes a multi-layered approach to AI monitoring where they use AI to watch AI performance, combined with direct customer feedback mechanisms. This creates a comprehensive monitoring system that has prevented inappropriate responses over two years of operation.
EVIDENCE
They use generative AI to monitor the chatbot’s performance, ask customers if their questions were answered appropriately, and allow feedback on whether responses are appropriate or inappropriate, with no inappropriate responses in 20 months of operation.
MAJOR DISCUSSION POINT
Transparency and Accountability in AI Systems
AGREED WITH
Andy Parsons, Prativa Mohapatra, Vishal Anand Kanwati
Argument 4
International airlines must comply with regulations across multiple jurisdictions without constraining innovation
EXPLANATION
Dr. Satya argues that operating internationally requires compliance with diverse regulatory frameworks (US FAA, European regulations, Indian DGCA) while still enabling innovation. This demonstrates that regulation doesn’t necessarily constrain technological advancement.
EVIDENCE
Air India operates across North America, Europe, and other regions, each with different regulatory requirements, yet they successfully launched the global airline industry’s first generative AI virtual assistant from India.
MAJOR DISCUSSION POINT
Regulatory Framework and Industry Self-Governance
AGREED WITH
Andy Parsons, Vishal Anand Kanwati
DISAGREED WITH
Vishal Anand Kanwati
Vishal Anand Kanwati
3 arguments · 184 words per minute · 584 words · 189 seconds
Argument 1
NPCI prioritizes keeping false positives low over high accuracy initially, ensuring genuine transactions aren’t incorrectly flagged as fraud
EXPLANATION
Vishal explains NPCI’s approach to responsible AI in fraud detection, where they prioritize minimizing false positives (legitimate transactions flagged as fraud) over achieving high accuracy rates initially. This approach protects genuine users while gradually improving the system with more data.
EVIDENCE
He describes starting with lower accuracy but very low false positives, then improving accuracy over time through collaboration with industry and ecosystem partners, leading to better understanding of customer patterns.
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
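The false-positive-first approach Vishal describes can be illustrated with a small sketch: instead of maximizing overall accuracy, pick the decision threshold from the distribution of genuine-transaction scores so that at most a target fraction of genuine payments is ever flagged. The function name, the synthetic score distributions, and the 0.1% cap below are illustrative assumptions, not NPCI's actual method.

```python
import random

def pick_threshold(scores, labels, max_fpr=0.001):
    """Lowest threshold that flags at most max_fpr of genuine items.

    labels: 1 = fraud, 0 = genuine; anything scoring strictly above
    the returned threshold gets flagged for decline or review.
    """
    genuine = sorted(s for s, y in zip(scores, labels) if y == 0)
    # Index chosen so at most max_fpr of genuine scores lie strictly above it.
    k = min(int(len(genuine) * (1.0 - max_fpr)), len(genuine) - 1)
    return genuine[k]

# Synthetic, illustrative data: fraud tends to score higher than genuine.
random.seed(0)
genuine_scores = [random.gauss(0.2, 0.1) for _ in range(9_900)]
fraud_scores = [random.gauss(0.8, 0.1) for _ in range(100)]
scores = genuine_scores + fraud_scores
labels = [0] * 9_900 + [1] * 100

t = pick_threshold(scores, labels, max_fpr=0.001)
fpr = sum(s > t for s in genuine_scores) / len(genuine_scores)
recall = sum(s > t for s in fraud_scores) / len(fraud_scores)
```

Starting from a strict false-positive cap sacrifices some fraud recall at first; as more data improves the model's score separation, recall rises without ever loosening the cap, which matches the trajectory Vishal describes.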
Argument 2
NPCI built small language models to provide transparency when transactions fail, explaining why decisions were made
EXPLANATION
Vishal describes how NPCI implemented transparency in their AI systems by creating small language models that can explain to customers why their transactions were declined. This provides accountability and helps users understand AI-driven decisions.
EVIDENCE
Customers can chat with the system to understand why transactions failed, with explanations like ‘you normally don’t send this type of transaction’ or ‘this is the first time you’re scanning a QR code,’ providing specific reasoning for declines.
MAJOR DISCUSSION POINT
Transparency and Accountability in AI Systems
AGREED WITH
Andy Parsons, Prativa Mohapatra, Dr. Satya Ramaswamy
Argument 3
Regulations are mandatory because AI can go wrong at scale, requiring safeguards across the ecosystem
EXPLANATION
Vishal argues that regulatory intervention is necessary because AI systems operating at scale can have widespread negative impacts if they malfunction. He emphasizes the need for ecosystem-wide safeguards to prevent systemic failures.
EVIDENCE
He gives the example of how all UPI transactions could potentially be declined by AI systems, requiring safeguards that limit the percentage of transactions that can be declined even if it means letting some fraudulent transactions through.
MAJOR DISCUSSION POINT
Regulatory Framework and Industry Self-Governance
AGREED WITH
Andy Parsons, Dr. Satya Ramaswamy
DISAGREED WITH
Dr. Satya Ramaswamy
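The ecosystem safeguard Vishal gives as his example, capping how many transactions AI may decline even if that lets some fraud through, can be sketched as a simple rate guard over a rolling window. The class name, window size, and cap values are hypothetical illustrations, not NPCI's implementation.

```python
from collections import deque

class DeclineRateGuard:
    """Cap the share of recent transactions an AI model may auto-decline."""

    def __init__(self, max_decline_rate=0.02, window=1000):
        self.max_decline_rate = max_decline_rate
        self.recent = deque(maxlen=window)  # 1 = declined, 0 = allowed

    def decide(self, model_wants_decline):
        rate = sum(self.recent) / len(self.recent) if self.recent else 0.0
        if model_wants_decline and rate < self.max_decline_rate:
            self.recent.append(1)
            return "declined"
        # Past the cap, the transaction goes through even though the model
        # flagged it; it can still be queued for manual review instead.
        self.recent.append(0)
        return "allowed"

# Worst case: a misbehaving model wants to decline every single transaction.
guard = DeclineRateGuard(max_decline_rate=0.1, window=100)
outcomes = [guard.decide(True) for _ in range(1000)]
declined = outcomes.count("declined")
```

Even when the model tries to decline everything, the guard holds the declined share near the cap instead of blocking all traffic, which is precisely the at-scale failure mode Vishal argues regulation must protect against.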
Sarika Guliani
1 argument · 141 words per minute · 586 words · 249 seconds
Argument 1
Responsibility is a commitment to technology development with shared human values, not just a compliance check
EXPLANATION
Sarika argues that responsible AI should be viewed as a fundamental commitment to developing technology that aligns with human values, rather than merely a regulatory compliance exercise. She emphasizes that current decisions will define the future of AI development.
EVIDENCE
She references the summit’s theme of ‘people, planet, and progress’ and emphasizes that what we choose to create now will define the future, making responsibility a core commitment rather than just compliance.
MAJOR DISCUSSION POINT
Regulatory Framework and Industry Self-Governance
Announcer
2 arguments · 129 words per minute · 129 words · 59 seconds
Argument 1
Trust, transparency and accountability are foundational requirements for responsible AI deployment, not optional features
EXPLANATION
The Announcer establishes that as India stands at a defining moment in its digital journey, the key differentiator is not the speed of AI adoption but how responsibly it is deployed. These principles must be foundational to any AI implementation strategy.
EVIDENCE
The session is specifically focused on advancing safe and trusted AI in the corporate landscape, presented by Adobe in association with FICCI.
MAJOR DISCUSSION POINT
Transition from AI Principles to Practical Implementation
Argument 2
The real differentiator for organizations is not how quickly they adopt AI, but how responsibly they deploy it
EXPLANATION
The Announcer argues that while AI serves as a powerful engine for innovation and productivity, the competitive advantage comes from responsible deployment rather than speed of adoption. This shifts the focus from rapid implementation to thoughtful, ethical AI integration.
EVIDENCE
The context is set for a discussion on responsible AI from principles to practice in Corporate India, emphasizing the importance of safe and trusted AI deployment.
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
Shantari Malaya
4 arguments · 160 words per minute · 1621 words · 605 seconds
Argument 1
India is charting the course for the world in building trustworthy and inclusive AI at a momentous time
EXPLANATION
Shantari positions India as a global leader in responsible AI development, emphasizing that the country is not just following international practices but actively shaping the global approach to trustworthy and inclusive AI systems.
EVIDENCE
She mentions this is happening at the heart of the AI Impact Summit and emphasizes the spectacular nature of the event, indicating India’s prominent role in global AI discussions.
MAJOR DISCUSSION POINT
Cross-Industry Infrastructure and Standards
Argument 2
Responsible AI risks becoming either a centralized compliance exercise or fragmented business unit checklists in large organizations
EXPLANATION
Shantari identifies two critical risks in implementing responsible AI across large conglomerates: over-centralization that reduces it to mere compliance, or over-decentralization that fragments it into disconnected unit-specific activities. Both approaches can undermine effective responsible AI implementation.
EVIDENCE
She specifically addresses this challenge to Amol Deshpande representing the RPG Group, acknowledging the complexity of managing responsible AI across diverse business units and industries.
MAJOR DISCUSSION POINT
Enterprise-Scale AI Governance and Risk Management
Argument 3
There’s a risk that responsible AI becomes a luxury for large enterprises while smaller businesses struggle with implementation
EXPLANATION
Shantari raises concerns about the potential divide between large enterprises that have resources to implement comprehensive responsible AI frameworks and smaller businesses that may lack the capacity for such implementations. This could create an unequal playing field in responsible AI adoption.
EVIDENCE
She specifically asks about MSMEs (Micro, Small, and Medium Enterprises) and their ability to implement responsible AI practices, questioning whether it’s getting relegated to being an enterprise luxury.
MAJOR DISCUSSION POINT
Democratization and Accessibility of Responsible AI
Argument 4
The challenge is balancing global best practices with India’s unique diversity, scale, and innovation drive
EXPLANATION
Shantari highlights the complex task of harmonizing international AI regulations and standards (EU AI Act, UNESCO recommendations, OECD principles) with India’s distinctive characteristics including its vast scale, cultural diversity, and ambitious innovation goals, while also managing domestic regulatory requirements.
EVIDENCE
She references multiple global frameworks including EU AI Act, UNESCO recommendations, OECD rules, and mentions India’s domestic regulations like DPDP Act and various industry-specific regulators.
MAJOR DISCUSSION POINT
Regulatory Framework and Industry Self-Governance
Agreements
Agreement Points
Responsible AI requires moving from principles to practical implementation with demonstrable systems
Speakers: Andy Parsons, Prativa Mohapatra, Amol Deshpande
The question has changed from “should we be responsible with AI?” to “can your systems prove you’ve been responsible?”
Responsible AI commitments on websites are starting points, not meaningful milestones – need standards and working code
Enterprises need legal teams and compliance teams to re-adapt for AI compliance, covering business strategy, ethical strategies, and regulatory compliance
Moving from generative AI to more complex scenarios and agentic AI requires orchestrated responsibility across all layers
All speakers agree that the era of theoretical responsible AI commitments is over, and organizations must now demonstrate actual working systems and processes that prove responsible AI implementation
Transparency and accountability must be built into AI systems, not added as afterthoughts
Speakers: Andy Parsons, Prativa Mohapatra, Dr. Satya Ramaswamy, Vishal Anand Kanwati
Adobe’s Content Authenticity Initiative provides transparency for AI-generated content through C2PA standards
Adobe practices “ART” philosophy – Accountability, Responsibility, and Transparency – embedded in product methodologies
Air India uses generative AI to monitor the performance of their generative AI chatbot, with customer feedback mechanisms
NPCI built small language models to provide transparency when transactions fail, explaining why decisions were made
All speakers emphasize that transparency and accountability must be embedded in the core design of AI systems rather than being superficial additions
Large enterprises have a responsibility to create frameworks that smaller organizations can adopt
Speakers: Prativa Mohapatra, Amol Deshpande
Large enterprises creating AI technologies are responsible for developing frameworks that smaller organizations can adopt
MSMEs lack the luxury of shifting resources like large companies, requiring creators and big users to provide methodologies
Industry partnerships through bodies like FICCI are critical for disseminating learnings to organizations without access to such information
Both speakers agree that there’s a risk of creating a divide between large and small organizations in responsible AI adoption, and that larger enterprises must take responsibility for democratizing access to responsible AI frameworks
Regulatory intervention is necessary and should be viewed as a catalyst for good practices
Speakers: Andy Parsons, Vishal Anand Kanwati, Dr. Satya Ramaswamy
Regulation should be viewed as a catalyst for good practices rather than just reactive compliance
Regulations are mandatory because AI can go wrong at scale, requiring safeguards across the ecosystem
International airlines must comply with regulations across multiple jurisdictions without constraining innovation
All speakers agree that regulation is not only inevitable but necessary for responsible AI deployment, and that it can actually enable rather than constrain innovation when properly implemented
Open standards and interoperability are essential for responsible AI implementation
Speakers: Andy Parsons, Amol Deshpande
Responsible AI techniques should be interoperable, open, and standardized like India’s UPI payment infrastructure
Adobe’s open standard approach ensures independent creators can apply the same provenance at zero cost as Fortune 500 enterprises
In conglomerates, responsible AI requires providing scalable, safe environments with guardrails rather than one-size-fits-all solutions
Both speakers emphasize that responsible AI cannot be achieved through proprietary solutions but requires open, standardized approaches that enable broad adoption and interoperability
Similar Viewpoints
Both speakers from Adobe share the same philosophy about content authenticity and transparency, using the nutrition label analogy to explain how people should have the right to know how digital content was created
Speakers: Andy Parsons, Prativa Mohapatra
Content credentials act like “nutrition labels” for digital content, allowing people to know what’s in their media
Firefly embeds content credentials so enterprises can be confident they won’t violate laws or face liability issues
Both speakers from critical infrastructure organizations (aviation and payments) emphasize the importance of balancing AI efficiency with safety and user protection, prioritizing user experience and safety over pure performance metrics
Speakers: Dr. Satya Ramaswamy, Vishal Anand Kanwati
Air India’s generative AI virtual assistant handles 97% of queries autonomously while maintaining safety through continuous monitoring
NPCI prioritizes keeping false positives low over high accuracy initially, ensuring genuine transactions aren’t incorrectly flagged as fraud
Both speakers recognize the complexity of implementing responsible AI across diverse organizational contexts and the need for customized approaches while ensuring accessibility for smaller organizations
Speakers: Amol Deshpande, Prativa Mohapatra
Different industries require different templates and guardrails, varying from function to function
MSMEs lack the luxury of shifting resources like large companies, requiring creators and big users to provide methodologies
Unexpected Consensus
Regulation as enabler rather than constraint
Speakers: Andy Parsons, Dr. Satya Ramaswamy, Vishal Anand Kanwati
Regulation should be viewed as a catalyst for good practices rather than just reactive compliance
International airlines must comply with regulations across multiple jurisdictions without constraining innovation
Regulations are mandatory because AI can go wrong at scale, requiring safeguards across the ecosystem
It’s unexpected that all speakers, including those from private enterprises, view regulation positively as an enabler of innovation rather than a burden. This consensus suggests a mature understanding that proper regulation can actually accelerate responsible AI adoption
Human-in-the-loop as fundamental design principle
Speakers: Dr. Satya Ramaswamy, Vishal Anand Kanwati, Andy Parsons
Aviation industry’s safety-critical nature provides embedded concepts of human-in-the-loop control and regulatory compliance
NPCI built small language models to provide transparency when transactions fail, explaining why decisions were made
The question has changed from “should we be responsible with AI?” to “can your systems prove you’ve been responsible?”
The consensus on maintaining human oversight and control across different industries (aviation, payments, content creation) shows unexpected alignment on the fundamental principle that AI should augment rather than replace human judgment in critical decisions
Overall Assessment

The speakers demonstrate remarkable consensus on the need to transition from theoretical responsible AI principles to practical implementation, the importance of transparency and accountability built into systems, the necessity of regulatory frameworks, and the responsibility of large organizations to democratize responsible AI practices. There’s also strong agreement on the value of open standards and the need for human oversight in AI systems.

High level of consensus across all major themes, with speakers from different industries (technology, aviation, payments, conglomerates) aligning on fundamental principles. This suggests that responsible AI practices are becoming standardized across sectors and that industry leaders recognize both the opportunities and responsibilities that come with AI deployment. The consensus implies that responsible AI is moving from a competitive differentiator to a baseline requirement for enterprise AI adoption.

Differences
Different Viewpoints
Industry self-regulation versus regulatory intervention necessity
Speakers: Vishal Anand Kanwati, Dr. Satya Ramaswamy
Regulations are mandatory because AI can go wrong at scale, requiring safeguards across the ecosystem. International airlines must comply with regulations across multiple jurisdictions without constraining innovation.
Vishal strongly advocates for mandatory regulations due to AI’s potential for widespread harm, while Dr. Satya emphasizes that existing regulatory compliance doesn’t constrain innovation and can coexist with technological advancement
Centralized versus decentralized approach to responsible AI in large organizations
Speakers: Amol Deshpande, Prativa Mohapatra
In conglomerates, responsible AI requires providing scalable, safe environments with guardrails rather than one-size-fits-all solutions. Enterprises need legal teams and compliance teams to re-adapt for AI compliance, covering business strategy, ethical strategies, and regulatory compliance.
Amol advocates for a decentralized ‘bring your own AI’ approach with flexible guardrails, while Prativa emphasizes the need for centralized compliance structures and standardized methodologies
Unexpected Differences
Speed versus safety in AI deployment
Speakers: Andy Parsons, Dr. Satya Ramaswamy
I’d argue that we can all go faster with our adoption of generative AI if we do it responsibly and critically put in place a foundation for that responsibility now rather than wait and be reactive. Air India’s generative AI virtual assistant handles 97% of queries autonomously while maintaining safety through continuous monitoring.
While both speakers advocate for responsible AI, Andy suggests that responsibility enables faster adoption, while Dr. Satya’s approach emphasizes careful monitoring and gradual scaling. This represents different philosophies on balancing innovation speed with safety measures
Overall Assessment

The discussion revealed relatively low levels of fundamental disagreement, with most tensions arising around implementation approaches rather than core principles. Key areas of difference included the necessity and role of regulation, centralized versus decentralized governance approaches, and strategies for democratizing responsible AI access

Low to moderate disagreement level. The speakers largely aligned on the importance of responsible AI principles but differed on tactical approaches. This suggests a maturing field where practitioners agree on goals but are still developing best practices for implementation. The implications are positive – there’s broad consensus on the need for responsible AI, but healthy debate on optimal implementation strategies that can drive innovation in governance approaches

Partial Agreements
All speakers agree that democratizing responsible AI access is crucial, but they propose different mechanisms – Andy focuses on open technical standards, Prativa emphasizes enterprise responsibility for framework creation, and Amol highlights industry body partnerships for knowledge dissemination
Speakers: Andy Parsons, Prativa Mohapatra, Amol Deshpande
Adobe’s open standard approach ensures independent creators can apply the same provenance at zero cost as Fortune 500 enterprises. Large enterprises creating AI technologies are responsible for developing frameworks that smaller organizations can adopt. Industry partnerships through bodies like FICCI are critical for disseminating learnings to organizations without access to such information.
All speakers acknowledge the importance of regulation, but differ on its role – Andy sees it as a catalyst for innovation, Dr. Satya views it as compatible with innovation, while Vishal sees it as a necessary constraint to prevent systemic failures
Speakers: Andy Parsons, Dr. Satya Ramaswamy, Vishal Anand Kanwati
Regulation should be viewed as a catalyst for good practices rather than just reactive compliance. International airlines must comply with regulations across multiple jurisdictions without constraining innovation. Regulations are mandatory because AI can go wrong at scale, requiring safeguards across the ecosystem.
Takeaways
Key takeaways
2026 marks a critical transition from AI principles to provable practice, with regulatory enforcement making responsible AI a compliance necessity rather than optional.
Responsible AI must be embedded at all layers of AI systems through orchestrated governance, not treated as a centralized compliance exercise or fragmented checklist.
Content authenticity and transparency standards (like Adobe’s C2PA) provide ‘nutrition labels’ for digital content, enabling users to understand how content was created.
Enterprise-scale AI governance requires balancing innovation with safety through human-in-the-loop controls, continuous monitoring, and industry-specific guardrails.
Large enterprises have a responsibility to create frameworks and standards that smaller organizations and MSMEs can adopt, preventing a divide between big and small players.
Cross-industry collaboration and open standards are essential for responsible AI implementation, similar to India’s UPI infrastructure model.
Transparency and accountability must be built into AI systems from the ground up, with clear explanations for AI decisions and outcomes.
Industry-led governance must work in conjunction with regulatory frameworks, as self-regulation alone is insufficient for managing AI risks at scale.
Resolutions and action items
FICCI committed to continuing the dialogue and translating discussions into actionable frameworks with industry support.
Enterprises should implement the ‘ART’ philosophy (Accountability, Responsibility, Transparency) in their AI governance practices.
Organizations need to re-adapt legal and compliance teams specifically for AI compliance covering business strategy, ethical strategies, and regulatory compliance.
Industry bodies like FICCI should facilitate knowledge sharing and framework dissemination to MSMEs and smaller organizations.
Companies should adopt open standards and interoperable approaches for responsible AI implementation.
Enterprises should establish continuous monitoring systems for AI performance with customer feedback mechanisms.
Unresolved issues
The specific balance between light-touch regulation versus comprehensive regulatory frameworks remains undefined.
How to effectively scale responsible AI frameworks across diverse industries with different risk profiles and requirements.
The challenge of consumer awareness and adoption of content authenticity standards, as many users are still unfamiliar with these systems.
Business case justification for responsible AI investments, particularly for smaller organizations with limited resources.
Technical challenges around social media platforms stripping metadata and removing transparency when content is uploaded.
The timeline and methodology for harmonizing global best practices with India’s specific scale, diversity, and regulatory environment.
Suggested compromises
Starting with lower AI accuracy but keeping false positives very low, then gradually improving accuracy as more data and collaboration becomes available (NPCI’s approach).
Balancing safety controls with customer convenience by using AI to monitor AI systems and providing customer feedback mechanisms.
Creating industry-specific templates and guardrails rather than one-size-fits-all solutions for responsible AI implementation.
Adopting a ‘bring your own AI’ approach within enterprises while providing scalable, safe environments with appropriate guardrails.
Viewing regulation as a catalyst for good practices rather than purely reactive compliance, encouraging proactive adoption of responsible AI frameworks.
Thought Provoking Comments
2026 is certainly going to be the year that responsible AI becomes a responsibility and an opportunity for innovation… The question for everyone in this room has changed from should we be responsible with AI? I think that debate is well settled at this point, but can your systems actually prove that you have been responsible with AI, and how do you go about doing that?
This comment fundamentally reframes the entire discussion by shifting focus from theoretical principles to practical implementation and proof of responsibility. It establishes a concrete timeline and transforms responsible AI from an abstract concept to a measurable business requirement.
This set the foundational framework for the entire panel discussion, moving all subsequent conversations away from ‘why’ responsible AI matters to ‘how’ to implement and demonstrate it. Every panelist subsequently focused on practical examples and implementation strategies rather than theoretical benefits.
Speaker: Andy Parsons
We sometimes think of this as nutrition labels, which you heard PM Modi mention yesterday in his remarks. If you walk into a store in most democratic nations in the world, you can pick up a piece of food. You can decide if it is healthy or not healthy for your children… we think that digital content has to have that same foundation of transparency.
This analogy brilliantly simplifies a complex technical concept by connecting it to something universally understood. It makes the abstract concept of content provenance tangible and relatable, while also connecting it to consumer rights and democratic values.
This metaphor became a recurring theme throughout the discussion, with other panelists referencing transparency and traceability in their own contexts. It helped ground the technical discussion in everyday consumer experience and established transparency as a fundamental right rather than a nice-to-have feature.
Speaker: Andy Parsons
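The provenance idea behind the nutrition-label analogy can be sketched in a few lines: bind a signed manifest (who created the content, with what tool, and a hash of the bytes) to the content so any tampering is detectable. This is a toy illustration only; real C2PA manifests use certificate-based signatures, and the key, names, and field layout below are invented for the sketch.

```python
# Toy content-provenance sketch (NOT real C2PA): a tamper-evident manifest
# records the creator, the tool, and a hash of the content bytes, then is
# signed. HMAC with a shared demo key stands in for real asymmetric signing.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real systems use X.509 key pairs

def make_manifest(content: bytes, creator: str, tool: str) -> dict:
    manifest = {
        "creator": creator,
        "tool": tool,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """True only if the manifest is intact AND matches these exact bytes."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

original = b"photo bytes"
manifest = make_manifest(original, "Alice", "CameraApp 1.0")
still_authentic = verify(original, manifest)        # True
after_edit = verify(b"edited photo bytes", manifest)  # False: hash mismatch
```

The sketch also makes the metadata-stripping problem discussed later concrete: if a platform drops the manifest on upload, the content itself is unchanged but all of this verifiable context is lost.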
It’s more of a bring your own AI kind of a scenario in every function. You cannot provide one solution. One size doesn’t fit all. So when you are dealing with such scenarios, a scalable, safe environment protected with guardrails is a key thing for us.
This insight challenges the traditional centralized approach to enterprise technology deployment and recognizes the democratization of AI tools. It acknowledges that different business functions need different AI solutions while maintaining consistent safety standards.
This comment shifted the discussion toward the practical challenges of governance in decentralized AI adoption. It influenced subsequent speakers to address how to maintain consistency and safety across diverse use cases, leading to discussions about frameworks, templates, and industry-wide standards.
Speaker: Amol Deshpande
I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine transaction being tagged as a fraud, should not be very, very high. I think that was the first principles on which we started.
This reveals a counterintuitive but crucial insight about implementing AI in high-stakes environments – that perfect accuracy isn’t the primary goal, but rather minimizing harm to legitimate users. It demonstrates sophisticated thinking about trade-offs in AI system design.
This comment introduced nuance to the discussion about AI performance metrics and highlighted that responsible AI isn’t just about technical accuracy but about understanding and minimizing real-world negative impacts. It influenced the conversation toward considering user experience and trust as key metrics of responsible AI implementation.
Speaker: Vishal Anand Kanwati
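The trade-off Vishal describes, accepting lower fraud recall at launch while keeping a hard cap on false positives, can be sketched as a threshold choice on model scores: pick the cutoff from the genuine-transaction score distribution so the flag rate on genuine traffic never exceeds the cap, and let recall improve as the model does. This is a minimal illustration with synthetic data, not NPCI's actual system; all numbers and distributions below are hypothetical.

```python
# Sketch: choose a fraud-score cutoff so the false-positive rate on genuine
# transactions stays at or below a hard cap, even if fraud recall starts low.
import random

def pick_threshold(genuine_scores, max_fpr):
    """Lowest cutoff such that the share of genuine transactions scoring
    strictly above it is at most max_fpr."""
    ranked = sorted(genuine_scores, reverse=True)
    allowed = int(len(ranked) * max_fpr)  # genuine declines we tolerate
    return ranked[allowed] if allowed < len(ranked) else float("-inf")

# Synthetic scores (purely illustrative): genuine traffic skews low,
# fraud skews high, with some overlap.
random.seed(0)
genuine = [random.betavariate(2, 8) for _ in range(10_000)]
fraud = [random.betavariate(6, 3) for _ in range(200)]

t = pick_threshold(genuine, max_fpr=0.001)        # tolerate 0.1% genuine declines
fpr = sum(s > t for s in genuine) / len(genuine)  # held at or under the cap
recall = sum(s > t for s in fraud) / len(fraud)   # may be modest at first
```

The design choice mirrors the quoted principle: the cap on harming genuine users is fixed up front, and only the recall side is allowed to float while data and collaboration improve.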
So the creators of these technologies have to come together and keep on creating this method and methodology for others to adopt… It’s very hard… So small organizations cannot do that. So the people, process, technology changes that is required to adopt this, big guys can maneuver, shift people… The MSMEs don’t have that luxury.
This comment addresses a critical equity issue in AI adoption – that responsible AI practices might become a luxury only large enterprises can afford, potentially creating a two-tiered system. It highlights the social responsibility of technology leaders.
This observation shifted the discussion from individual enterprise strategies to collective industry responsibility and the broader societal implications of AI adoption. It prompted discussions about frameworks, industry bodies, and the role of larger players in democratizing responsible AI practices.
Speaker: Prativa Mohapatra
AI can go berserk… today all the UPI transactions can get declined… And that’s where we have a check where we say, you know, this is the only percentage that I can decline, even if I have to let go of the other transactions… So those safeguards are very much required.
This stark illustration of AI’s potential for systemic failure in critical infrastructure makes the abstract risks of AI very concrete. The image of an entire nation’s payment system failing due to AI malfunction is both vivid and terrifying, effectively arguing for regulatory intervention.
This comment provided the most compelling argument for why regulatory intervention is inevitable rather than optional. It moved the final discussion from whether regulation is needed to how it should be implemented, effectively settling the debate about industry self-regulation versus government oversight.
Speaker: Vishal Anand Kanwati
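The safeguard Vishal describes, a hard ceiling on the share of transactions the AI is allowed to decline, can be sketched as a rolling-window circuit breaker: once declines in the recent window would exceed the cap, further model declines are overridden (in practice they might be routed to manual review rather than simply approved). This is a hypothetical illustration, not NPCI's implementation; the class name, cap, and window size are all invented.

```python
# Sketch: cap the share of transactions an AI fraud model may decline in a
# rolling window, so a misbehaving model cannot take down the whole flow.
from collections import deque

class DeclineRateCap:
    def __init__(self, max_decline_rate=0.02, window=1000):
        self.max_decline_rate = max_decline_rate
        self.window = deque(maxlen=window)  # recent final decisions: True = declined

    def decide(self, model_says_decline: bool) -> bool:
        """Final decision: override the model's decline if it would push the
        recent decline rate above the cap (slightly conservative denominator)."""
        current = sum(self.window)
        if model_says_decline and (current + 1) / (len(self.window) + 1) > self.max_decline_rate:
            final = False  # let it through (or, in practice, route to review)
        else:
            final = model_says_decline
        self.window.append(final)
        return final

# Worst case: the model has gone "berserk" and declines every transaction.
guard = DeclineRateCap(max_decline_rate=0.02, window=1000)
final = [guard.decide(True) for _ in range(5000)]
recent_rate = sum(final[-1000:]) / 1000  # held near the 2% cap, not 100%
```

The point of the sketch is the asymmetry: the cap bounds systemic harm from a faulty model, at the accepted cost of occasionally letting a fraudulent transaction through.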
Overall Assessment

These key comments fundamentally shaped the discussion by establishing a progression from theoretical principles to practical implementation challenges, and finally to systemic risks requiring collective action. Andy Parsons’ opening reframing moved the entire conversation from ‘why’ to ‘how,’ while his nutrition label analogy provided an accessible framework for understanding complex technical concepts. The subsequent panelists built on this foundation by sharing practical insights about implementation challenges, equity concerns, and systemic risks. The discussion evolved from individual enterprise strategies to industry-wide responsibilities and ultimately to the inevitability of regulatory intervention. The most impactful comments were those that either reframed the fundamental question, provided vivid analogies or examples, or highlighted previously unconsidered consequences – particularly around equity and systemic risk. Together, these comments created a comprehensive narrative arc that took the audience from principles through practice to policy implications.

Follow-up Questions
How can social media platforms be encouraged or required to preserve content authenticity metadata instead of stripping it during upload?
This is a critical technical and policy challenge that affects the entire content authenticity ecosystem, as platforms currently remove transparency information when content is uploaded
Speaker: Andy Parsons
How can consumer awareness of content authenticity symbols and provenance be increased effectively?
Low consumer awareness limits the effectiveness of content authenticity initiatives, and strategies are needed to make these symbols as recognizable as nutrition labels
Speaker: Andy Parsons
What specific methodologies and frameworks can be developed to help MSMEs adopt responsible AI practices without the resources of large enterprises?
There’s a risk that responsible AI becomes a luxury only large enterprises can afford, creating a divide that could harm smaller businesses and overall ecosystem development
Speaker: Prativa Mohapatra
How can industry bodies effectively disseminate responsible AI frameworks and learnings to create domain-specific templates for different industries?
Different industries require different approaches to responsible AI, and there’s a need for systematic knowledge transfer mechanisms through industry partnerships
Speaker: Amol Deshpande
What is the optimal balance between industry-led governance and regulatory intervention for AI systems?
This fundamental question about governance models remains unresolved and requires further exploration to determine the most effective approach for different contexts
Speaker: Shantari Malaya
How can global AI regulations and best practices be harmonized with India’s specific scale, diversity, and innovation requirements?
India needs to balance compliance with international standards while maintaining its competitive advantage and addressing its unique market characteristics
Speaker: Shantari Malaya
What specific mechanisms can ensure AI systems remain fair and inclusive while scaling to handle massive transaction volumes like those at NPCI?
The challenge of maintaining fairness and preventing false positives in high-volume, critical systems requires ongoing research and development
Speaker: Vishal Anand Kanwati
How can the business case for content provenance and AI transparency be strengthened beyond regulatory compliance?
Making responsible AI economically viable rather than just a compliance requirement is essential for widespread adoption
Speaker: Andy Parsons

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Who Watches the Watchers Building Trust in AI Governance


Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on the current state of AI governance and safety, examining progress made since the 2023 Bletchley Park AI Safety Summit and exploring new models for oversight and verification. Stephen Clare, co-lead author of the International AI Safety Report, opened by highlighting significant progress in AI safety measures, noting that technical safeguards have improved dramatically, with models becoming much harder to jailbreak and taking hours rather than minutes to circumvent protections. He emphasized that while risks are now materializing in real-world applications with over a billion people using AI globally, the toolkit for managing these risks is expanding, though implementation remains inconsistent across the industry.


Hiroki Hibuka provided a global perspective on AI governance approaches, explaining that all countries employ both hard and soft law approaches rather than purely regulatory or voluntary frameworks. He distinguished between the EU’s holistic approach through the AI Act versus the sector-specific approaches favored by Japan and the US, noting that Japan prioritizes establishing rules in advance while the US relies more on ex-post litigation and court resolution.


Shana Mansbach introduced the concept of independent verification organizations (IVOs) as a solution to the growing trust problem in AI deployment. She argued that traditional command-and-control governance is inadequate for AI’s rapid pace and technical complexity, proposing instead a government-authorized marketplace of independent verifiers that would use outcomes-based approaches rather than procedural compliance. This system would create financial incentives through liability protection, insurance advantages, and competitive market benefits.


The panelists discussed the challenge of creating standardized evaluation methods, noting that current benchmarks are often narrow and quickly outdated as AI capabilities advance. They explored how independent verification could address the information asymmetry between frontier labs and external auditors while maintaining the technical expertise needed for effective oversight. The discussion concluded with recognition that democratic debate is necessary to determine acceptable risk levels and that lessons from other industries like automotive safety regulation could inform AI governance approaches.


Keypoints

Major Discussion Points:

Current State of AI Safety and Governance: The discussion highlighted significant progress in AI safety measures since the 2023 Bletchley Summit, with improved technical safeguards making models much harder to “jailbreak” and 12 leading companies now having frontier safety frameworks. However, challenges remain in ensuring consistent application across the industry.


Global Regulatory Approaches: Different countries are taking varied approaches to AI governance – the EU with holistic regulation through the AI Act, Japan with sector-specific soft law approaches emphasizing compliance, and the US with an “ex-post” liability-focused system. All face the challenge of regulating rapidly evolving “black box” technologies.


Trust Gap and Independent Verification: A central theme was the trust problem affecting all stakeholders – public, deployers, regulators, and developers. The panel discussed the need for independent verification organizations (IVOs) as a marketplace-based solution to provide outcomes-based assessment rather than procedural compliance.


Stakeholder Responsibilities and Incentives: The conversation explored how responsibilities should be distributed among model developers, deployers, and end users, emphasizing the need for “defense in depth” approaches. Key incentives for adoption of verification systems include liability protection, insurance requirements, and competitive market advantages.


Technical Challenges in Evaluation: The panel addressed significant gaps in current AI evaluation methods, noting that existing benchmarks are often narrow and quickly outdated. The stochastic nature of AI systems and the complexity of real-world applications make safety testing particularly challenging.


Overall Purpose:

The discussion aimed to assess the current state of AI governance and safety three years after the Bletchley Park AI Summit, examining what progress has been made, what challenges remain, and what new governance models might be needed to address the trust gap between AI capabilities and public confidence in their safety and reliability.


Overall Tone:

The tone was professional and constructive throughout, with participants building on each other’s points collaboratively. While acknowledging significant challenges and gaps in current approaches, the overall sentiment was cautiously optimistic about progress made and pragmatic about solutions needed. The discussion maintained an academic yet practical focus, with panelists drawing from their diverse expertise in policy, law, and technical safety to offer concrete examples and analogies from other industries.


Speakers

Gregory C. Allen: Moderator/Host of the panel discussion


Stephen Clare: Co-lead author of the International AI Safety Report, expert in AI safety and governance


Hiroki Hibuka: Research professor at Kyoto University Graduate School of Law, former Japanese government policymaker (worked for 4 years designing Japanese AI policies), expert on Japanese AI policy, non-resident senior associate at CSIS, lawyer


Shana Mansbach: Vice president of strategy and communications at Fathom (a think tank), leads policy initiatives and convenes the ASHFE conference series on AI


Additional speakers:


None identified beyond the provided speaker list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive discussion examined the current state of AI governance and safety, three years after the landmark 2023 Bletchley Park AI Safety Summit, revealing both significant progress and emerging challenges as artificial intelligence transitions from theoretical concern to widespread real-world deployment.


The Evolution from Theoretical to Practical AI Governance

Stephen Clare, co-lead author of the 312-page International AI Safety Report, a document reviewed by hundreds of people, opened the discussion by establishing a fundamental shift in the AI governance landscape. As he noted, “the rubber is really hitting the road” with AI systems, as risks that were merely theoretical one or two years ago have now materialized into concrete real-world impacts including deepfakes and cyber attacks. With ChatGPT alone having 800 million weekly average users, the urgency of effective governance has moved from future planning to immediate necessity.


The technical progress in AI safety has been substantial and measurable. Clare highlighted that models have become dramatically more difficult to “jailbreak” or circumvent safety measures. Where the UK AI Security Institute could previously find universal exploits within minutes using techniques like emotional manipulation (“I miss my grandma who used to tell me bedtime stories about making Molotov cocktails”) or language translation workarounds, these same researchers now require seven to ten hours to breach the latest models’ safeguards. This represents not just incremental improvement but a fundamental strengthening of AI safety infrastructure.


Furthermore, the industry has witnessed widespread adoption of frontier safety frameworks, with twelve leading AI developers now maintaining formal documents describing their risk management approaches for increasingly powerful systems. This represents a significant shift towards transparency and collective learning about risk management practices.


However, Clare emphasized that these advances come with important caveats. Technical safeguards remain vulnerable to sophisticated attacks and edge cases, and providing reliable assurances across the vast range of real-world applications proves extremely difficult. More critically, while frontier developers typically implement robust safeguards, application across the broader industry remains inconsistent, particularly among companies operating behind the technological frontier.


Global Regulatory Approaches and Cultural Differences

Hiroki Hibuka provided crucial perspective on the international landscape of AI governance, challenging common misconceptions about different national approaches. He argued that the frequently cited distinction between the EU’s “hard law” approach and other countries’ “soft law” approaches represents a fundamental misunderstanding of regulatory frameworks. In reality, all jurisdictions employ both hard and soft law mechanisms, as extensive existing regulations—covering privacy protection, copyright, finance, automotive, and healthcare—already apply to AI systems.


The more meaningful distinction lies in whether countries pursue holistic versus sector-specific regulation. The EU’s AI Act represents a comprehensive approach attempting to regulate AI systems across all applications, while Japan and the United States favor sector-specific interventions that address AI within existing regulatory frameworks for particular industries.


Hibuka also illuminated cultural differences in regulatory philosophy that significantly impact implementation. The United States tends towards an “ex-post” approach, allowing broad experimentation with high-level principles and relying on court systems to resolve disputes when harms occur. Japan, by contrast, prefers “ex-ante” rule-setting, with society establishing clear guidelines in advance. Japanese companies are “very, very good at complying with given rules” but struggle with creating their own governance mechanisms or explaining their approaches to stakeholders.


This cultural analysis reveals that all countries face similar fundamental challenges: how to regulate cutting-edge “black box” technologies with unlimited risk scenarios, and how to establish benchmarks and standards for evaluating complex values like privacy, transparency, and fairness when clear societal consensus on these benchmarks has not yet emerged.


The Trust Gap and Independent Verification Solutions

Shana Mansbach from Fathom, a young think tank started only two years ago, introduced a critical framework for understanding current AI governance failures through the lens of trust. She identified a pervasive trust problem affecting all stakeholders: the public lacks means to determine what is actually safe; deployers such as hospitals, banks, and retailers need AI systems but cannot assess their reliability; regulators struggle to confer earned trust rather than mere compliance; and even developers face declining adoption if trust erodes. Deployers also worry about a “populist backlash” if AI systems cause harm.


Traditional command-and-control governance proves inadequate for addressing this trust gap due to two fundamental problems. First, the speed problem: AI capabilities advance so rapidly that even well-intentioned regulations become outdated quickly. Second, the technical capacity problem: expertise for understanding AI systems and their risks remains concentrated primarily within frontier laboratories, creating an information asymmetry that undermines independent oversight.


Mansbach proposed independent verification organizations (IVOs) as a potential solution—a government-authorized and overseen marketplace of independent verifiers tasked with developing testing and tooling to determine whether AI systems meet safety requirements. This represents an outcomes-based approach rather than procedural compliance, where governments specify desired outcomes (children’s safety, data privacy, controllability) while independent verifiers develop and continuously update testing methodologies to ensure these outcomes are met.


The IVO model offers several potential advantages over traditional approaches. Independence ensures that companies are not “grading their own homework.” Democratic accountability maintains government oversight of outcome specification while leveraging private sector technical expertise. Flexibility allows verification organizations to continuously update testing criteria to match technological advancement. Finally, it creates a “race to the top” by incentivizing ever-better testing and tooling through market competition.


Stakeholder Responsibilities and Market Dynamics

The discussion revealed the complexity of distributing responsibilities across the AI ecosystem. Clare advocated for a “layered approach” with “defense in depth,” recognizing that no single intervention or actor can provide complete safety assurance. Model developers implement training techniques to reduce harmful outputs; deployers add monitoring systems and query classifiers; ecosystem monitoring bodies track AI content spread; and society adapts through resilience measures.
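The layered "defense in depth" idea Clare describes can be made concrete with a toy pipeline in which an input classifier, the model itself, and an output monitor each get an independent chance to stop a harmful exchange. The keyword stubs, function names, and deny-list below are invented for illustration; real deployments use trained classifiers at each layer, and ecosystem-level monitoring sits outside any single pipeline.

```python
# Toy "defense in depth" pipeline: three independent layers, each able to
# block. Failure of one layer is caught by the next.
BLOCKED_TOPICS = ("molotov", "malware")  # hypothetical deny-list for the sketch

def query_classifier(prompt: str) -> bool:
    """Deployer-side input filter: True means the query may proceed."""
    return not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

def model_answer(prompt: str) -> str:
    """Stand-in for the model itself (where developer safety training lives)."""
    return f"[model response to: {prompt}]"

def output_monitor(response: str) -> bool:
    """Deployer-side output filter: True means the response may be shown."""
    return not any(topic in response.lower() for topic in BLOCKED_TOPICS)

def handle(prompt: str) -> str:
    if not query_classifier(prompt):
        return "Request declined by input filter."
    response = model_answer(prompt)
    if not output_monitor(response):
        return "Response withheld by output filter."
    return response
```

The value of the layering is that no single check has to be perfect: an exploit that slips past the input filter can still be caught on the way out, which is the "no single intervention" point made above.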


Allen, drawing from his experience at a rocket company, provided the analogy of ride-hailing services to illustrate how responsibility distribution might work: automobile manufacturers ensure safe car design and manufacturing; platform companies like Uber maintain vehicles appropriately; and drivers follow traffic laws and operate vehicles safely. Similarly, in AI systems, model developers, business deployers, and end users each bear distinct but interconnected responsibilities.


However, current incentive structures often work against safety adoption. Hibuka noted that independent auditing lacks clear economic incentives except in highly regulated sectors like healthcare, automotive, and finance where trust is essential. Companies may even prefer “willful blindness”—avoiding safety assessments that might reveal problems and increase legal liability.


Allen observed that the insurance industry has emerged as an unexpected but powerful governance mechanism. Major insurers are increasingly refusing to cover AI products due to uncertainty about their contents and risks. This creates de facto market regulation, as companies requiring insurance coverage—which includes most major enterprises—find themselves unable to deploy AI systems without meeting insurer requirements. Allen referenced his experience with AS9100 certification in the aerospace industry, where certification became essential not due to regulatory mandate but because insurers and customers demanded it.


Technical Challenges in Evaluation and Standards Development

A critical gap exists in current AI evaluation methodologies, which Clare described as “already not super informative about real-world risk because they’re too narrow.” Existing evaluations typically consist of question sets related to specific topics like biosecurity or cybersecurity, with models deemed risky if they score above certain thresholds. However, as models become more capable, general, and widely adopted, these narrow evaluations fail to capture the vast range of real-world applications and risks.


Mansbach elaborated on the technical challenges inherent in AI system evaluation. The fundamentally stochastic nature of AI means identical queries can produce different outputs, complicating safety assessment. Additionally, model outputs do not directly correlate with user actions—a model might provide identical harmful suggestions to ten users, with nine dismissing the advice while one acts upon it with serious consequences. The multi-turn, relationship-building nature of AI interactions adds further complexity that simple evaluation frameworks cannot capture.
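Mansbach's point about stochasticity can be illustrated with a small simulation: because the same prompt can yield different sampled outputs, safety assessment becomes a statistical estimate over repeated trials rather than a single pass/fail check. This is a hedged sketch; the stand-in model and its 5% harmful-output probability are assumptions for illustration only.

```python
import random

def stochastic_model(prompt, rng):
    """Stand-in for a sampled language model: the identical prompt can
    yield a harmful completion with some probability (hypothetically 5%)."""
    return "harmful" if rng.random() < 0.05 else "benign"

def estimate_harm_rate(prompt, n_trials=2000, seed=0):
    """Monte Carlo estimate: rerun the identical query many times and
    measure how often the sampled output is harmful."""
    rng = random.Random(seed)
    harmful = sum(stochastic_model(prompt, rng) == "harmful"
                  for _ in range(n_trials))
    return harmful / n_trials

rate = estimate_harm_rate("the same exact query")
print(f"estimated harmful-output rate: {rate:.3f}")
```

Even a tight estimate of this rate says nothing about which user acts on a harmful output, which is Mansbach's second point: output frequency and real-world harm are not the same quantity.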


These technical limitations highlight why current benchmarks, while useful, remain inadequate for comprehensive safety assessment. The rapid pace of capability advancement means evaluation methods quickly become outdated, creating a persistent gap between assessment tools and actual system capabilities.


Democratic Accountability and Risk Tolerance

Hibuka raised fundamental questions about democratic governance of AI risks, noting that technical solutions cannot resolve value-based decisions about acceptable risk levels. Using autonomous vehicles as an example, he highlighted that Japan experiences over 2,000 traffic fatalities annually from human drivers, raising questions about what safety standards should apply to AI-driven vehicles. Should they merely match human performance, or exceed it, and if so, by what margin?


These decisions require democratic deliberation rather than technical expertise alone. Different societies may reasonably reach different conclusions about acceptable risk-benefit trade-offs, and governance mechanisms must accommodate this variation while maintaining technical rigor in implementation.


The challenge extends beyond risk tolerance to evaluation methodology design. Testing autonomous vehicles on safe highways produces different results than testing in complex urban environments, but determining appropriate test conditions involves value judgments about representative use cases and acceptable failure modes.


Information Asymmetries and Ongoing Challenges

A persistent challenge throughout the discussion was the concentration of technical expertise within frontier AI laboratories. Clare acknowledged this information asymmetry while noting that external oversight cannot succeed without drawing upon internal knowledge. The International AI Safety Report attempted to address this through partnerships that leverage industry expertise while maintaining transparency and incorporating diverse perspectives from academia, civil society, and government.


However, this approach faces limitations. External actors remain dependent on publications from laboratories, leaving significant gaps in understanding across different risks and applications. The report frequently acknowledged uncertainty due to lack of comprehensive data outside laboratory walls.


Some positive trends suggest this may improve over time. Frontier safety frameworks have become institutionalized in governance mechanisms like the EU AI Act’s codes of practice. Companies have made new commitments to sharing usage data, and broader societal attention to AI impacts creates pressure for greater transparency.


Allen referenced recent incidents like the xAI Grok situation as examples of how real-world deployment continues to reveal unexpected challenges and the need for better oversight mechanisms.


Legal Frameworks and Verification

The discussion touched on how independent verification could address critical gaps in legal frameworks, particularly around standards of care in liability cases. Currently, when AI systems cause harm, courts must retrospectively determine whether defendants met appropriate standards—a challenging task for technically complex systems where even experts disagree on best practices.


Mansbach suggested that verification could potentially confer legal protection for companies that undergo independent assessment, establishing clear expectations before harms occur rather than leaving courts to determine appropriate standards after the fact. Such clarity could reduce legal uncertainty while incentivizing proactive safety measures.


The insurance implications prove particularly significant. Just as other industries require various forms of assessment, AI insurance could require verification, with verified systems potentially receiving coverage at more favorable terms. This creates market-driven incentives for safety adoption without requiring regulatory mandates.


Unresolved Questions and Future Directions

Despite the comprehensive discussion, several fundamental challenges remain unresolved. The stochastic nature of AI systems continues to complicate safety testing, as identical inputs can produce varying outputs. The relationship between model outputs and real-world harms remains poorly understood, particularly given the complex, multi-turn interactions users develop with AI systems.


Information asymmetries between frontier laboratories and external actors persist, though some progress toward greater transparency appears promising. The question of how to maintain technical competence in independent verification organizations as they become separated from cutting-edge development remains open.


Perhaps most fundamentally, the challenge of determining acceptable risk levels across different applications and societies requires ongoing democratic deliberation that technical solutions alone cannot resolve. The governance frameworks discussed provide potential mechanisms for implementing societal choices about risk tolerance, but cannot substitute for the democratic processes needed to make those choices.


Conclusion

The discussion revealed a field in transition from theoretical concern to practical implementation, with significant technical progress in AI safety measures but persistent challenges in governance and oversight. The emergence of independent verification organizations represents a promising but experimental approach to addressing trust gaps while maintaining democratic accountability and technical rigor.


The insurance industry’s growing influence as a de facto regulator highlights how market mechanisms may drive safety standards regardless of formal regulatory approaches. However, success will require careful attention to incentive design, democratic input on risk tolerance, and continued innovation in evaluation methodologies to keep pace with rapidly advancing AI capabilities.


Most significantly, the conversation demonstrated growing recognition among experts that traditional regulatory approaches may prove inadequate for AI governance, necessitating novel frameworks that blend public oversight, private sector expertise, and market-driven incentives. The specific mechanisms may vary across jurisdictions, but the fundamental challenge of building trustworthy AI systems while maintaining innovation and democratic accountability remains universal. As Clare humorously noted about the length of their report, the complexity of these challenges defies simple solutions, requiring sustained attention and experimentation with new governance approaches.


Session transcript: Complete transcript of the session
Gregory C. Allen

Again, to my immediate right, we have Stephen Clare, who wrote the International AI Safety Report as the co-lead author, if I’m not mistaken. And he earned that applause, because that report is a remarkable document that I do think is the foundation upon which all conversations about AI governance now must rest for the next year. It’s the sort of minimum amount of knowledge that you must have to participate in the conversation, which I think is really a tribute to him. Then we have Hiroki Hibuka, who is currently a research professor at the Kyoto University Graduate School of Law, and was also deeply involved in drafting Japan’s first set of soft law regulations, and is an expert on all things AI, but also especially astute at what’s going on in Japan.

We also have the privilege of collaborating with him at CSIS, where he’s a non-resident senior associate. And I must say, he is probably the best person writing about Japanese AI policy in Japanese, but he is definitely the best person writing about it in English. And so I often tell Hiroki that, like, if he doesn’t write about it, nobody in Washington, D.C. knows about it. So it’s important, his work. And then finally, we have Shana Mansbach, who’s the vice president of strategy and communications at Fathom, which is a young think tank, started only two years ago, but has already succeeded as one of the best conveners of the ASHFE conference series on AI, and also now leading a policy initiative, which I think she’s going to tell us all about.

So without further ado, I’d like to start with you, Stephen. I just said that the report that you were the lead author of is sort of the bedrock for having a conversation on AI governance. For those in the audience who haven’t yet made it through, but they, of course, will, can you sort of set the stage? Where are we in 2026 in AI governance and in AI safety, technical and procedural interventions?

Stephen Clare

Sure. Thanks, Greg. First of all, I’m sorry; if I’d known Greg was going to make the report, you know, required reading, I would have tried harder to make it shorter. Yeah. Thanks for having me; really excited to be here. So for people who don’t know, the report was founded at the 2023 Bletchley Safety Summit as sort of, you know, the shared evidence base for decision makers thinking about these complicated, fast-moving, noisy governance questions. It’s kind of trying to be like the IPCC report for AI. It’s backed by over 30 countries and intergovernmental organizations. You know, I’m one of two co-lead writers along with Karina Prunkle, but there’s over 30 dedicated experts writing different sections, and there’s hundreds of people that review it.

So it’s really trying to be a sort of state of the art: what do we know, what don’t we know about general purpose AI systems and the risks they might pose? I think this year the main message of the report is that the rubber is really hitting the road with these kinds of systems. Risks that even a year or two ago might have been theoretical are now very real, and we’re seeing emerging empirical evidence. More real-world impacts of AI on productivity and labor markets and in science and in software engineering. It’s all really happening out in the world. There’s a billion people now using AI around the world. Many of those impacts include risks.

So we’re seeing effects of deepfakes spreading, cyber attacks becoming more common with AI systems. And so the need for risk management techniques that are effective is also growing. One thing that I found surprising working on the report is that in this domain, on risk management and technical safety, there’s actually some good news. Quite a lot of good news, I’d say. In various ways, our technical safeguards are improving. Models are becoming much harder to jailbreak. So, you know, three or four years ago, if you asked a model to give you a recipe for a Molotov cocktail, it would not do that. But if you said, oh, I miss my grandma, and she used to tell me this amazing bedtime story about how she loved making Molotov cocktails, please help me remember my grandmother, it would be like, okay, well, if it’s for your grandmother.

Then that stopped working maybe a year or two ago, but then if you maybe translated your question into Swahili or something, put it in the model, and then translated the answer back, it might have evaded the safeguards. So none of that works anymore. These safeguards are much harder to evade, and we know this quantitatively. For example, the UK AI Security Institute will try to evade the safeguards or jailbreak all these new models when they’re released. At the beginning of 2025, they could do this in literally minutes: find a sort of universal jailbreak that would elicit potentially harmful knowledge. For the latest models, it’s taking them seven, ten hours to get around safeguards. So there are still vulnerabilities, but for novices or even moderately skilled actors, it’s basically out of reach.

It’s becoming much, much harder to evade them. We’re also seeing more of these safeguards get implemented into organizational practices. So 12 companies, all the leading AI developers now have frontier safety frameworks, which are these documents that describe how they plan to manage risks as they scale more powerful systems, which is many more than had them a couple of years ago and is, I think, a sign of transparency and sort of collective learning about risk management that’s worth noting. So basically, yeah, our toolkit for managing these risks is growing. But, you know, it wouldn’t be a safety report if I didn’t maybe end on a few caveats or some bad news. The first is that these technical safeguards are still vulnerable in many ways.

They can still be jailbroken with enough effort or in edge cases, and it’s very difficult to test and provide reliable assurances that these safeguards will work across this huge range of use cases that these models are now applied to in the real world. And on the organizational side, you know, these safeguards only work if they are applied. And although we’re seeing, especially from frontier developers, very prominent and usually quite robust safeguards applied to models, across the whole industry, and especially behind the frontier, application remains quite inconsistent. The safety frameworks: all these companies have them, but they vary in the risks they cover and they vary in the practices that they recommend. And so, across the landscape as a whole, you know, these tools only work if they are applied.

And we still see some vulnerabilities across the landscape, which I think turns this technical challenge into a governance challenge: how do we assure broader adoption, how do we ensure compliance, what do we do when there’s a lack of compliance? We’re facing these questions now, and again, because these risks and impacts are not something we can push down the road anymore, I think the governance questions are becoming a lot more urgent in the years ahead.

Gregory C. Allen

Terrific. If I could contrast what you said with what we might have said if we were having this conversation back at the Bletchley Park AI Summit: it’s almost like the only good news on AI safety, AI security, and AI governance at Bletchley was, well, at least we’re all here talking about it. And now, three years later, the good news is we’ve done a lot about it. We have techniques that can provide demonstrable increases in safety. We don’t know everything that we need to know, but we know a lot of stuff that does work. And really, a lot of the challenge, I think, as the report says, is now in the hands of policymakers to make sure that these safeguards get implemented robustly and diversely.

So with that, I now want to turn to Hiroki, who I hope can give us a state of where we are in the story of AI governance around the world. If the next steps are really in the hands of policymakers, where are we globally?

Hiroki Hibuka

Thank you, Greg. And again, congratulations, Stephen, on publishing this great report. I think, first of all, I feel very glad that the discussion on AI governance is now so advanced compared to three years ago. I’m a lawyer and I’m a former policymaker. I worked for the Japanese government for four years, designing the Japanese AI policies, mainly in terms of regulation and governance. And as a lawyer and policymaker, the question after reading the report is: where is the end? And to what extent do stakeholders have to manage the risks? Because in the end, you can’t remove all the risks. AI is a black box and the technology advances so fast. And even though there are advances and progress in guardrails, the next day you may find another risk.

So there is no end to the story of how regulators should design the regulations. That is the main question all countries are facing, and different nations and regions take different approaches. Maybe the most famous regulation is the EU AI Act. And in that context, a lot of people say, hey, the EU takes a hard-law regulatory approach on AI while Japan or the UK or the United States takes a soft-law approach. But I think that’s a completely wrong understanding of the regulatory framework because, as you know, there are already lots of regulations that can be applied to AI systems: privacy protection laws, copyright laws, or sector-specific laws in finance, automotive, or healthcare. We already have a lot of regulations out there.

So the real question is not whether or not to regulate AI, but how to update our existing regulations and whether or not we need additional regulations targeting AI systems, in addition to the existing regulatory framework. So in that sense, all countries take the hard-law approach, and all countries also have soft laws, because in the European Union there are a lot of technical standards to implement the EU AI Act that are now under discussion. But anyway, all countries have both hard laws and soft laws; that is the start of the discussion. Then, when we compare the EU approach and the Japan approach, the clear difference is whether to regulate AI holistically or sector by sector. And when I compare the Japanese policy and the US policy, we are in the same position of taking sector-specific regulation. The main difference, as I understand it, is whether you prioritize the ex ante approach or the ex post approach. The US takes a more ex post approach: you can do whatever you want to do, and the regulation is usually very high level, the principles are very high level. But once you have a problem, if you damage others’ property or lives, then you go to court and you fight it out in court.

Japanese society is not like that. In Japan, actually, the number of lawsuits is very low. People prefer to set the rules in advance. Japanese companies are very, very good at complying with given rules. But they are not very good at creating their own governance mechanisms or explaining to stakeholders why they are doing what they do. And now Japanese stakeholders are starting to realize that this doesn’t work. So we need a more agile and multi-stakeholder approach. So we are trying to leverage the power of soft laws, negotiating among different stakeholders, and issuing standards and guidance. But in the end, again, if you violate the existing hard laws, of course you will be sanctioned. So those are the main differences between the American approach and the Japanese approach.

And in the end, all countries are facing the difficult question of how to deal with these cutting-edge technologies that are black boxes, with unlimited risk scenarios. And sometimes we don’t know how to evaluate values such as privacy or transparency or fairness; there have been no clear benchmark standards so far in society. So how to design those benchmarks and regulation methods is the challenge all countries are facing.

Gregory C. Allen

Terrific, Hiroki. And Shana, I know you have a unique perspective on this because your organization is now proposing additional models of AI governance that are not really reflected in existing law, whether in the United States or Europe or Japan or India. So walk us through what you see as the important work you’re doing now.

Shana Mansbach

Sure. My panelists have set me up very well to say this. So I think as the International AI Safety Report shows, the capabilities around these models are surging. And as the capabilities surge, so too does the uncertainty around the risks, by which I mean, do these systems work safely, securely, and as advertised? That uncertainty creates a trust problem, a trust problem for the public, which doesn’t have a way of figuring out what is actually safe, a trust problem for deployers, by which I mean hospital systems, retail, banks, who want to and indeed need to use these systems, but have no idea what they can actually trust. So there’s a trust problem for the regulators, too.

They don’t know: how do you confer not just trust, but earned trust? And I would say there’s a trust problem for the developers also, because if and as trust starts to decline, you’re going to see adoption decline as well, so this is something that developers should be focused on too. The current approach to tech governance is just not equipped to handle this trust problem very well. Traditional command-and-control governance says: here are the rules, here are all the things you have to do, here are the procedures, here’s what compliance actually looks like.

There are a bunch of problems with this approach in the context of AI, but I’ll focus on two. First, the speed problem: AI moves really, really quickly, and even well-intentioned regulations are going to become outdated very, very quickly. And then there’s the technical capacity problem. Even with the rise of the AI safety institutes, which are doing amazing work, the talent, the expertise for understanding these systems and understanding their risks is largely concentrated in the frontier labs, which of course leads some people to say, well, let’s just go to the frontier labs; they can regulate themselves. I don’t think I have to spend too much time explaining why there are problems with that approach, but it’s simple incentives. I think all of us know people in the labs who are doing amazing, amazing work; they are the people because of whom I sleep better at night. But the incentives are just not there. There are always going to be trade-offs between investing in safety testing and tooling and investing in development, so we’re going to have problems with self-regulation in terms of addressing that trust gap. So where does that lead us?

At Fathom, my organization, we’re very focused on coming up with new models that can solve this trust gap. So we’re very focused on independent verification, specifically a marketplace of independent verification organizations, by which I mean a government-authorized and overseen marketplace of independent verifiers charged with creating testing and tooling to determine whether these AI systems are actually safe. The difference here is that this is an outcomes-based approach. Instead of, as I said, having procedures (here are the rules, here are all the things you need to do, here are all the boxes you must check to be certified as good), you have an outcomes-based approach where the government says: here are the things that we care about.

We care about children’s safety. We care about data privacy and protection. We care about controllability and interpretability. And then you have independent verifiers that can actually go out, do the testing, have updated testing constantly to make sure that those outcomes are being met. We think that independent verification solves for a couple of these deficits in the trust context. First, they are independent. The labs are not grading their own homework. Second, democratic accountability. You have governments that are creating outcomes instead of the industry doing it itself. Third, flexibility. Under this system, the IVOs, independent verification organizations, are constantly updating their testing and criteria to make sure that they’re keeping up with the pace of technology and the pace of risks as well.

And I think the fourth thing, which is pretty interesting, is it creates a race to the top here. Right now, the only people working on safety testing and tooling are in the labs. What we’re envisioning is a marketplace that incentivizes ever better testing and tooling here. I could talk about IVOs for days and days, but let me just end on one point. I was talking to Greg about this earlier, and Greg asked, are there analogous systems or industries or sectors that we could talk about? And I said, yeah, sort of. I mean, in America, we have Underwriters Lab. There’s LEED certification. There are some analogies. But the honest answer is there’s not a perfect analogy.

We have had the same regulatory system for the last century. And I think that with the rise of AI, we’re seeing that system is no longer fit for purpose. And when we try to use old systems, hard law, soft law, any of these things, we’re really struggling to make them work. So what I’m trying to do, what I’d encourage all of us to do, is to say, you know, we do need to think a little bit differently, because this is what this technology in this time calls for.

Gregory C. Allen

Well, that’s great. There are a few points I want to pull together there. The first is, you know, as Hiroki pointed out, in the U.S. system, liability law looms extremely large, right? The lawsuits at the end of this story when things go wrong. And when you have, as ChatGPT does, 800 million weekly active users, something’s going to go wrong every week, right? And the question is: how is that going to intersect with our existing body of regulation? How is that going to intersect with liability law? The second thing is, because we’re talking about these general purpose technologies, this is going to be adopted in so many different sectors of the economy.

And right now, as Shana pointed out, the number of people who have, you know, Stephen’s expertise on what it takes to really make AI systems safe and well-governed and perform reliably as intended across the whole range of potential applications, that’s not a lot of humans on planet Earth who are good at that stuff. And because these AI models are going to be deployed in just about every sector of the economy, we need some level of those capabilities in every sector of the economy. And so the question is, you know, if I am a finance company, if I am a health care company, how am I going to know, and how are my consumers going to know,

that when they use AI-related capabilities, it’s going to work reliably as intended over the full range of acceptable use cases. And so, Stephen, I want to come to you and ask, when it comes to governance, when it comes to oversight and verification, how do you see the balance of responsibilities in terms of what responsibilities need to fall upon the model developers, what responsibilities need to fall upon the users, what responsibilities need to fall on independent third parties, whether that’s the government, whether that’s auditors, whether that’s this marketplace of verification that Shana is talking about. So what do you see as the balance of responsibilities, and how might this go wrong, how might this go right?

In 30 seconds or less.

Stephen Clare

I mean, I’m sure it’s kind of the boring but true answer: it depends, and it’ll vary a lot across use cases and sectors. I think it’s probably not fair or helpful or true to allocate responsibility to one actor or another; instead we need a layered approach of many different policies and practices at different parts of the stack. Because none of our approaches are foolproof, they all have vulnerabilities, and so instead of safety by design, we have this safety-by-degree situation where we want defense in depth. So for developers, there will be training techniques that they can implement to make models less likely to reveal dangerous knowledge in the first place.

If there are people building on top of those models and then deploying them, there will be monitoring systems they can put in place and classifiers that identify dangerous queries and stop models from answering them. And then, for ecosystem monitoring bodies, which could be deployers but could also be other institutions in the world, there can be tracking of how AI content is spreading across borders and around the world. And then I think there’s this other aspect: we’re focusing a lot on model or developer safety, but as we move into this world where many people around the world have access to powerful, helpful, intelligent technologies, we also just need to adapt to that reality and think about resilience at the societal level too: how do we adapt to the beneficial use cases and the various other use cases that these models will be used for? So, thinking about hardening digital systems against increased cyber attacks, sort of admitting the reality of the situation in many ways and adapting to it, rather than trying to prevent all harmful uses in the first place. I think we need a variety of approaches across all these different actors.

Gregory C. Allen

Yeah. And just to use an analogy for how broad the group of stakeholders is, if you think about a ride hailing service, a taxi service like Uber, you have the automobile manufacturers who have to make sure that this is a solid car design that was manufactured safely and appropriately to specification. Then you have Uber, where in some countries Uber owns the car, and so they’re responsible for ensuring that it gets maintenance appropriately. And then you have the driver who’s responsible for ensuring that they are actually following the law and driving the car safely. And if you apply that analogy to AI, you have the model developer, then you might have the sort of business use case deployer, which could be a bank, a medical device company.

A financial institution, whoever. Then you finally have the end customer who’s receiving those services and making sure that they’re using them appropriately. And so, if you think about that sort of different body of use cases, as I said before, the capabilities are not symmetric across all of those, but there are still obligations. And so, Shana, I want to come back to you and ask: this model that you’re proposing, what exactly does it mean for the different stakeholders in the ecosystem? How does their life change if we adopt the system that you’re in favor of?

Shana Mansbach

Yeah, I mean, the overarching answer is we create trust throughout the system, which is the missing piece here. I think there are a couple of pieces that I would pull out. You had mentioned liability earlier, so let me talk about that a little bit. What this system does not do is assign liability. It doesn’t say: deployers, developer, it’s you, it’s you, it’s you. We’re seeing, at least in America, court cases move their way through the court system, and we’ll see where that ends up. But what is really missing is a standard of care, and this is, I think, one of the real advantages that this system has. Right now, at least how it works in our current tort system, if a Waymo kills someone, someone can sue, and then a judge and a jury have to figure it out. So again, we’re not answering who should be sued, but let’s say that the family of someone who got hurt or killed is suing Waymo. What happens is that the jury has to decide whether the person who was sued did the right thing. And if you are not technical, that is the hardest thing; it’s hard even if you are technical, and maybe even Waymo doesn’t know. So what this system would do is confer, if you are verified, a rebuttable presumption of having met a heightened standard of care.

So what we’re doing is clarifying and defining up front, before an actual harm happens, what a deployer, or whoever is sued, is actually supposed to do, instead of having this very, very messy system where someone after the fact has to figure out what went wrong and who’s responsible for that. I could talk about other layers of this, but I think the liability piece is really key. I mean, we just see this; I think it’s a reflection of the trust problem here. When you’re a deployer, I mean, God, everyone that I talk to, again, hospital systems, retail, banks, anyone who needs to be consumer-facing, is really worried about this problem.

I mean, when I get sued, what do I do? And maybe there’ll be a populist backlash and everyone will hate everyone who’s using AI systems. And it’s much better, ahead of something like that happening, to have that standard of care defined up front and have that seal of approval conferred.

Gregory C. Allen

And Hiroki, as you think about the different stakeholders in the system and especially the idea of auditors, which now there are a number of organizations being founded, it seems like almost every day, who are proposing to provide external evaluation services that can help companies understand, as Shane has said, this product or this service or this company meets the seal of approval and we vouch for it as an independent entity. What kind of momentum do you see for this independent assessment part of the story across regulatory frameworks?

Hiroki Hibuka

Independent evaluation is essential, given that we are all using AI systems in all different situations, from language models to healthcare systems to car driving. But it would not be easy to persuade corporate executives to use an independent audit without clear economic incentives. For example, if you get the certification for autonomous driving, then you can sell the car to a big market, and of course you pay for the audit. But if you take this audit for a language model, then you can prove that this language model is relatively safer than the other models, but that doesn’t necessarily make enough incentive for model developers to conduct independent evaluation, because there are no clear financial incentives.

Gregory C. Allen

Actually, could I ask you to elaborate on that? So where might these financial incentives come from? You mentioned one, which is that the regulators force you to do it. Maybe insurance is another. Where might these incentives come from?

Hiroki Hibuka

I think it should start from the regulated areas, such as cars, healthcare systems, finance systems, or infrastructure, because everybody requires strong trust in those systems. If it doesn’t work well, then somebody might be killed, and that’s a big problem. And maybe you could say, hey, in the end, if you are killed, you can be compensated, but that’s not the end of the story. Whereas if the damage could be compensated with money by the company, and stakeholders are okay with that, maybe companies would like to just run the system and compensate the victims. For example, if the language model says something discriminatory, the company can just say, hey, we’re very sorry, we’ll introduce better guardrails, and we’ll pay if you want compensation.

Gregory C. Allen

in terms of what is possible, what interventions work, what the risks are. But I want to ask about how we go from that degree of consensus to something that might be more of like a standard around procedural implementation. You know, Shana’s term of art is standard of care, which matters a lot in the American legal system. I’m sure it matters a lot in other legal systems. I’m just ignorant about, you know, how and where. And so I’m curious, you know, what do you see as the gap? If these independent evaluators, these independent auditing organizations are emerging, how do they go from we think we’re good at this to, no, this is the accepted best practice?

You know, we have accepted consensus on the risks and the interventions, but how do you turn that into a procedure? Just to give an example to the folks in the audience, I used to work at a rocket company, and the safety standard in the American aerospace industry is AS9100. And in the history of our company, there’s kind of a before-AS9100 moment and an after-AS9100 moment. And everything changed for our company after we got that third-party audit evaluation. A lot of our customers just said, we do not sign checks for companies that are not AS9100 certified. So, you know, you are deeply steeped in where we are today on the consensus, but how far are we from converting that into standards and procedures for third-party evaluation?

Stephen Clare

Yeah. I’ll also say one follow-up to Hiroki’s point about auditing. Not only is there sort of a lack of incentives to conduct audits voluntarily now, but there might even be disincentives. One is that it’s costly and it slows you down, and there are very intense competitive pressures to release faster. And there are also potentially information or security risks to sharing: you spent hundreds of millions, maybe billions of dollars developing a model, and then you have to share it with an external party before deployment. There are serious risks, or perceived risks at least, of having that information leak. So I think, yeah, there are some serious challenges there.

Gregory C. Allen

I guess there’s one other potential part of the story, which is that sometimes you see companies want to be willfully blind, right?

If they have a report that says my product is not safe, well, now they know they’re going to lose the lawsuit. Whereas if they never commission the report, maybe they’ll win the lawsuit. So, Shana, what do you see as meaningful interventions that can help address this problem, both the cost side that Stephen mentioned and the other parts of the incentive structure?

Shana Mansbach

Yeah, let me make a couple of points. I mean, we’re talking about the cost of audits, and this is a big issue that we think about a lot. This system will not work if there’s a flat fee and everyone is paying a ton. There are many ways that a system can look unsuccessful, and one of those ways is if it is just protecting incumbents. We envision the system as something that works across the board: you could verify a general-purpose LLM, you could also have narrow AI, you could have a tiny little tool, a little chatbot that is used in schools.

Those three different products should not be audited not only at the same cost, but in the same way. I mean, compliance isn’t just the check that you’re writing, it’s how much of a pain in the butt is it? How many lawyers do you need? How long will this take? So the great thing about this being a marketplace is that the system is right-sized to risk type and to the size of these products, instead of having a one-size-fits-all, this-is-what-you-have-to-do-to-comply approach, because I think that is a real issue.

Really quickly, I just want to go back to the question that you asked Hiroki about incentives. You can imagine a system where this is mandatory, and maybe in some areas you can imagine that, but I think there are three real carrots for wanting to get verified. We talked a little bit about liability, so obviously the liability clarity is a big carrot. I think the insurance piece is real. Right now we are seeing the big insurers saying, we’re not going to touch this, we’re not going to insure any AI products, because we have no idea what’s inside of them. At least in America, the way that life insurance works is, if you want insurance, you have to jump on a scale and tell someone how healthy you are and what the things are that you do, and the insurer decides, okay, are you worthy of being insured, and at what premium? I think that’s actually a pretty direct analog for what we’re trying to do here, where the books are opened and an insurer doesn’t have to do the testing themselves, but they can look at whether the system has been verified and say, okay, we will actually insure you, or we will insure you at a more affordable premium.

I think the third thing is just straight-up competitive market advantage. If I’m a school superintendent and I am choosing between two learning chatbots to put in my schools, I’m not going to choose the one that has not been verified. I want the one that has been verified, that is safest. Yes, partly because I’m worried about getting sued, but also because I want my kids to be safe. And you can imagine a situation much like Underwriters Lab in the United States, where basically all consumer products, light bulbs, toothbrushes, basic things that you buy in a store like Walmart, all have the UL seal of approval, and those are the ones that get sold in stores. They have a huge market advantage.

They pay a little bit, but not very much. And in exchange for doing that, they compete in the market in a way that the ones that don’t go through verification can’t. I’m so sorry, Greg, you asked me an actual question and I just answered everyone else’s question and probably not my own.

Gregory C. Allen

It’s okay. You get a get-out-of-jail-free card because you mentioned insurance, which is something I’m deeply interested in right now. In that orbital launch vehicle example that I just mentioned, you can’t get insurance for space launches of satellites until you’re AS9100 certified. And 10% of the cost of getting a satellite into space is just the insurance on the rocket. So basically, companies that can’t get insurance can’t compete in the market. And as Shana mentioned, and I think this is a super undercovered story, many of the major insurers, in the United States at least, are now saying, for your enterprise risk policy, AI is not included. So if you are a major bank and you are doing big, important financial transactions, as soon as you start using AI, you’ve lost all your insurance.

And I think the Trump administration in the United States has a very light-touch regulatory approach. And my concern there is that, well, just because the government is not doing anything big and bold on regulation doesn’t mean there will be no regulation. The insurers will step in. And if the insurers exit the market, maybe not in legal terms, but in economic outcome terms, that could be very similar to draconian regulation. So, Shana, you’re mentioning Underwriters Lab, which is an organization that writes standards that are relied upon by underwriters, the people who are issuing insurance. This is a huge part of the regulatory and governance ecosystem that I think is really important. And so now I’m hoping, Stephen, that you’re going to tell me that you’ve been reached out to by a bunch of insurance companies, and they’re all reading your report eagerly and thinking about this.

But maybe, maybe not. What’s the case?

Stephen Clare

Not yet, but it’s a really long report. 312 pages, but it goes like that. Maybe I can come back to the best-practices point a little bit, because we’re talking about auditing here, and there are a lot of steps involved, I’m sure, but at least at the technical level, the main tool we have right now to audit the capabilities of the riskiest AI models is evaluations. And although in my opening I sort of talked about, oh, it’s great we have this toolkit that’s emerging and strengthening, and that is true, I think on evaluations in particular the question is, okay, let’s say we have auditors that are looking at these companies, looking at models, what are they actually looking at to audit or evaluate the models?

I think we actually have a big gap here, a big evaluation gap in terms of how we’re actually assessing. So if we’re moving towards best practices, not only do I think we don’t have a sense of the best practices right now, but if we did, they’d be different in a year, because the capabilities are moving too quickly for these technical tools to stay current for very long. So for example, these evaluations often look like a set of questions related to a certain topic that you ask the model, a bunch of questions about biosecurity or a bunch of questions about cybersecurity. And if it scores high enough on the test, you say, whoa, this is a dangerous capability, and we need to implement more safeguards.

And as far as what’s best practice or safe risk management for a company, we evaluate in terms of, well, do the safeguards seem to apply proportionately to the risk that you’ve assessed? But I think in many cases, these evaluations we’re using are already not super informative about real-world risk, because they’re too narrow. You have to build a set of questions that gives you some information about the vast range of use cases in the real world, and as models have become more capable and general and adopted more widely, this has become much more difficult. And I don’t think there are very many actors out there constantly thinking about new ways to evaluate the capabilities.

And so I think this is an important gap in terms of our toolkit that is, again, quite urgent, because these models have been released, and our current evaluations are already, in many cases, out of date and not super informative about real-world risk. Shana, do you want to jump in here?

Shana Mansbach

Yeah. Stephen, I agree with you so much. I mean, all of us are obsessed with benchmarks because that’s kind of all we have, and they’re just so narrow. I spend a lot of time with organizations that we think will become these IVOs, and testing is so, so hard. I mean, think about this: we have a fundamentally stochastic system, so I can ask the system something 10 times and I’m going to get 10 different answers. So what does that mean in a safety context? Another problem that we have: what a model outputs is not the same thing as what someone does with it. So think about it in the context of mental health. Maybe the model says to 10 different people different versions of, I think you should kill yourself.

Maybe for nine of those users, that’s fine, they will laugh it off. But for one of those users, there’s going to be a real problem. And there’s also the multi-turn nature of AI. I mean, you build relationships with these systems and you ask long queries, and the stuff just gets really complicated really quickly, as technical minds could explain far better than I could. So what we’re trying to do here is incentivize better testing, because right now the only people creating evals are eval organizations, who are doing God’s work, doing awesome stuff, but what does it mean to be the best eval org out there? I mean, there’s not an incentive to go from good to the best.

And the other actors working on this, of course, are the labs. And I think many of the labs are actually attempting to be responsible actors here, but again, there’s an incentive gap. I think the only way you’re going to solve this is to have an ecosystem where all of the actors are competing to have the best services, the best evaluations, the best feedback. And we hope one day one of these IVOs says, I’ve developed a new type of testing that figures out this kid-safety thing that no one has ever thought about. And then the next day someone says, well, we have to be better, because then everyone will want to be verified by that organization.

So you are incentivizing ever-better testing. And as Stephen says, given how quickly and dramatically the capabilities and the risks of these systems are increasing, we need really good testing and tooling that can keep up with that, and the only way to get that is to incentivize it.

Gregory C. Allen

So, Stephen, if I could come to you about what Shana just said. You pointed out how the state of the art in evaluations and assessment is constantly shifting as the capabilities are shifting. I sometimes hear the frontier labs say, yes, and that’s why we’re the only ones who can do the testing, because we’re the ones out there on the frontier. But Shana is making this point about misaligned incentives, which I think we saw in a conversation you and I had a couple weeks ago, in the xAI Grok undressing-children example: there are perverse incentives sometimes at work here when companies evaluate themselves. So how do you reconcile that gap? The frontier AI labs often do have a unique perspective and a unique understanding, but it’s also really hard to see how we could ever be comfortable with them being the only ones assessing themselves.

Stephen Clare

Well, I can talk about it a bit in the context of the report, where we tried to work with everybody to get the state of the science across the whole landscape. And there, I think it is true that there’s this big information asymmetry between the people in the labs, who have both the most technical capacity and the most access to leading models, all of the information about testing and development and about the technology that’s being used in the lab, and everyone else. If you don’t draw on that knowledge, you’re not going to be able to understand what’s actually going on in the AI world. But then I think we brought in a lot of perspectives from academia, society, and government feedback to get a full picture of the landscape. As far as what to do going forward, I think it probably looks something like this: partnerships that aim to draw on that knowledge, but then aim for transparency and information sharing that gives third parties and external actors a better understanding of what’s actually going on. Because it’s true, even writing the report, we were reliant on these papers that labs will occasionally publish and drop, with very useful data on how people are using the models or adoption rates. But we’re kind of reliant on these ad hoc publications, and that leaves a lot of gaps across the landscape and different risks. And so we constantly had the words uncertainty or unknowns in the report, because we lack that data outside of the labs.

Gregory C. Allen

And do you think that that’s likely to remain the case, or do you think that that could change over time? As we’ve seen, literally, the safety staff of some of these labs quit and start their own auditing companies. So are they likely to have their skills atrophy as they get farther from the development process, or do you think it’s credible that these third-party organizations can build, the word that comes to mind is like economies of scale, that are relevant to be able to continue advancing the state of the art of safety and governance, even as the technology keeps evolving?

Stephen Clare

I’m not sure, but I think what we can do is look at the trend, and the trend is towards, I think, a stronger ecosystem around AI labs. As these problems of lack of data and lack of independent verification are identified more, there are more people working on them. And then I think we’ve seen some movement towards greater transparency from AI labs as well. So frontier safety frameworks are now a governance mechanism in the EU AI Act’s code of practice, and they’ve become institutionalized; it started as something voluntary that Anthropic just published, a responsible scaling policy. And so you see these movements towards sharing more information in more structured ways.

I think also, yesterday, there were the new commitments from the companies at the summit, which were related to sharing data about usage. So I think as a broader set of actors in society are paying attention to AI, because, again, we’re feeling the effects more clearly. It’s becoming more of an economic priority. We’ll see more demand from outside the labs to share this information, and maybe that will lead to some changes.

Gregory C. Allen

Hiroki, you’ve written a ton about AI, but in your capacity as a lawyer, you also have a lot of understanding of many different industries. Are there any lessons learned from other industries that have solved this kind of problem, where the technical expertise exists in one place but the need for independence exists in another? What kind of precedents do you see that we can learn from?

Hiroki Hibuka

Okay, so before that, let me add one more incentive, which is public procurement. If the government says, we recognize this is a very important issue and we recognize this LLM or model is safe, and then the government procures to that standard, then it will be a big incentive for developers. So that is one thing. And when I try to answer your question, I think democratic debate is necessary as to what kind of risk level is acceptable and also what kind of test measures are good, because there is no single specific answer as to what is an acceptable level of risk.

For example, in Japan, every year more than 2,000 people are killed by human-driven cars, and the question is, what kind of safety would we require of autonomous vehicles? Is it okay if the number killed is less than 2,000, or would we like to require more safety than human drivers? If so, what would be the level? There is no single answer to that kind of question, so we need to debate in a democratic manner as to what our acceptable goal is. And also the test measures. For example, we can simply compare accident rates per kilometer, but if you test on a very safe, straight highway, of course it’s easier to look safe.

Whereas if you try to drive in a pretty complex city, it’s going to be very difficult. So how to measure, how to define the test method, is another question. And I won’t go into the details, but that discussion has been had in a lot of industries, the car industry, the finance industry, the aerospace industry, and we can certainly draw a lot of lessons from the existing ones.

Gregory C. Allen

Yeah, one analogy that you jogged my memory of as you were talking is the National Highway Traffic Safety Administration in the United States, which industry actually begged to have created in the 60s and 70s, because they said, look, all of us are going to claim that we have safe cars, but only some of us are making big investments in becoming safe, and we want to reward the companies that are making big safety investments. And so they created this new organization, which would give cars a safety rating of one to five stars. And so now the companies can only get a five-star rating if they’re actually doing what it takes to be safe.

And consumers, you know, they’re not always qualified to rip open their car’s engine and see what it looks like under the hood, what’s safe, but they can interpret that five star rating. And so my idea was to ask you, Shana, to elaborate on this in the context of your model, but I’m now scared of the beeper, which is quite loud and scary. So please join me in thanking our terrific panel. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Stephen Clare
5 arguments · 189 words per minute · 2162 words · 685 seconds
Argument 1
Technical safeguards are improving significantly with models becoming much harder to jailbreak and 12 leading companies now having frontier safety frameworks
EXPLANATION
Clare argues that AI safety measures have made substantial progress, with models becoming increasingly resistant to attempts to bypass their safety mechanisms. He notes that major AI developers have implemented comprehensive safety frameworks to manage risks as they scale more powerful systems.
EVIDENCE
UK Security Institute data showing jailbreak attempts now take 7-10 hours compared to minutes at the beginning of 2025; 12 leading AI developers now have frontier safety frameworks compared to fewer companies having them previously
MAJOR DISCUSSION POINT
Current State of AI Safety and Governance
Argument 2
AI risks that were theoretical 1-2 years ago are now real with a billion people using AI globally, creating urgent governance needs
EXPLANATION
Clare emphasizes that AI has moved from theoretical concerns to real-world impacts affecting massive populations. The widespread adoption has made governance challenges immediate rather than future considerations.
EVIDENCE
A billion people now using AI around the world; real-world impacts including deepfake spreading and AI-enabled cyber attacks becoming more common
MAJOR DISCUSSION POINT
Current State of AI Safety and Governance
AGREED WITH
Hiroki Hibuka, Shana Mansbach
Argument 3
AI safety requires layered approach with different responsibilities across developers, deployers, and monitoring bodies rather than single actor accountability
EXPLANATION
Clare advocates for a multi-stakeholder approach to AI safety where different actors in the AI ecosystem have specific responsibilities. He argues that no single approach is foolproof, requiring defense in depth across the entire system.
EVIDENCE
Examples include training techniques for developers, monitoring systems for deployers, and ecosystem tracking for monitoring bodies; comparison to safety by degree rather than safety by design
MAJOR DISCUSSION POINT
Responsibility Distribution and Incentive Structures
AGREED WITH
Hiroki Hibuka, Shana Mansbach
Argument 4
Current evaluation methods have significant gaps and are often outdated, not informative about real-world risks due to AI’s stochastic nature
EXPLANATION
Clare identifies major limitations in how AI systems are currently assessed for safety and capabilities. He argues that existing evaluation methods fail to capture the complexity and variability of real-world AI applications.
EVIDENCE
Evaluations often consist of narrow question sets that don’t reflect vast range of real-world use cases; models becoming more capable and general make evaluation increasingly difficult
MAJOR DISCUSSION POINT
Trust and Verification Challenges in AI Systems
AGREED WITH
Shana Mansbach
Argument 5
Information asymmetry exists between frontier labs with technical capacity and external actors needing transparency for effective governance
EXPLANATION
Clare highlights the challenge that the most crucial knowledge about AI systems resides within the companies developing them, creating barriers for external oversight. This concentration of expertise makes independent governance difficult while still requiring that knowledge for effective regulation.
EVIDENCE
Reliance on ad hoc publications from labs for data on model usage and adoption rates; gaps in understanding across different risks and applications
MAJOR DISCUSSION POINT
Standards of Care and Legal Framework Development
AGREED WITH
Shana Mansbach
DISAGREED WITH
Shana Mansbach
Hiroki Hibuka
5 arguments · 149 words per minute · 1274 words · 509 seconds
Argument 1
All countries face the challenge of regulating black box technology with unlimited risk scenarios and no clear benchmark standards
EXPLANATION
Hibuka argues that AI regulation presents universal challenges regardless of jurisdiction due to the technology’s opacity and rapidly evolving nature. He emphasizes that the fundamental difficulty lies in establishing appropriate risk management standards for unpredictable technology.
EVIDENCE
AI advances so fast that new risks emerge constantly; lack of clear benchmark standards for values like privacy, transparency, and fairness in society
MAJOR DISCUSSION POINT
Current State of AI Safety and Governance
AGREED WITH
Stephen Clare, Shana Mansbach
Argument 2
Different countries take varying approaches – EU’s holistic regulation vs Japan/US sector-specific approaches, with Japan preferring ex-ante rules while US uses ex-post liability
EXPLANATION
Hibuka explains that while all countries use both hard and soft law approaches, they differ in scope and timing of regulation. He contrasts the EU’s comprehensive AI Act with Japan and US preferences for sector-specific regulation, and highlights cultural differences in regulatory philosophy.
EVIDENCE
EU AI Act as example of holistic regulation; Japan’s preference for setting rules in advance due to low litigation culture and companies’ compliance strengths; US reliance on court-based resolution of disputes
MAJOR DISCUSSION POINT
Current State of AI Safety and Governance
Argument 3
Financial incentives for independent auditing are lacking except in regulated sectors like healthcare, automotive, and finance where trust is essential
EXPLANATION
Hibuka identifies a market failure in AI auditing where companies lack economic motivation to pursue independent verification. He argues that clear financial incentives only exist in sectors where safety failures have severe consequences and regulatory requirements.
EVIDENCE
Example of autonomous driving certification providing market access; contrast with language models where safety certification doesn’t necessarily provide sufficient financial incentives
MAJOR DISCUSSION POINT
Responsibility Distribution and Incentive Structures
DISAGREED WITH
Shana Mansbach, Gregory C. Allen
Argument 4
Democratic debate is necessary to determine acceptable risk levels and testing measures, similar to discussions in automotive and aerospace industries
EXPLANATION
Hibuka argues that technical standards alone are insufficient and that society must democratically determine what levels of AI risk are acceptable. He emphasizes that these decisions involve value judgments that require public input rather than purely technical solutions.
EVIDENCE
Example of Japan’s 2,000 annual traffic deaths and the question of what safety standards to require for autonomous vehicles compared to human drivers; comparison to existing democratic processes in car, finance, and aerospace industries
MAJOR DISCUSSION POINT
Standards of Care and Legal Framework Development
AGREED WITH
Stephen Clare, Shana Mansbach
Argument 5
Public procurement could provide strong incentives for safety certification if governments require verified systems for purchases
EXPLANATION
Hibuka suggests that government purchasing power could create market incentives for AI safety certification. By requiring verified AI systems for public procurement, governments could drive demand for independent auditing and safety standards.
EVIDENCE
Example of government recognition of safe LLMs or models leading to procurement preferences
MAJOR DISCUSSION POINT
Standards of Care and Legal Framework Development
Shana Mansbach
5 arguments · 175 words per minute · 2464 words · 843 seconds
Argument 1
Current AI governance creates a trust problem for public, deployers, regulators, and developers due to uncertainty around system safety and reliability
EXPLANATION
Mansbach identifies a fundamental trust deficit across all stakeholders in the AI ecosystem. She argues that uncertainty about AI system performance creates problems for everyone from end users to the companies developing the technology.
EVIDENCE
Examples of trust problems for hospital systems, retail, and banks who need to use AI but don’t know what to trust; public lacking ways to determine safety; regulators unable to confer earned trust
MAJOR DISCUSSION POINT
Trust and Verification Challenges in AI Systems
AGREED WITH
Stephen Clare, Hiroki Hibuka
Argument 2
Traditional command-and-control governance fails due to AI’s speed of development and concentration of technical expertise in frontier labs
EXPLANATION
Mansbach argues that conventional regulatory approaches are inadequate for AI governance because they cannot keep pace with technological change and lack the technical capacity concentrated in private companies. She identifies both speed and expertise gaps as fundamental problems.
EVIDENCE
AI moves quickly making regulations outdated rapidly; technical expertise concentrated in frontier labs creating capacity problems for regulators
MAJOR DISCUSSION POINT
Trust and Verification Challenges in AI Systems
AGREED WITH
Stephen Clare
Argument 3
Independent verification through government-authorized marketplace of verifiers offers outcomes-based approach rather than procedural compliance
EXPLANATION
Mansbach proposes a new governance model where independent organizations conduct AI system verification within a government-overseen marketplace. This approach focuses on achieving safety outcomes rather than following prescribed procedures.
EVIDENCE
Comparison to Underwriters Lab and LEED certification as analogous systems; emphasis on outcomes like children’s safety, data privacy, and controllability rather than procedural requirements
MAJOR DISCUSSION POINT
Trust and Verification Challenges in AI Systems
AGREED WITH
Stephen Clare, Hiroki Hibuka
DISAGREED WITH
Stephen Clare
Argument 4
Independent verification could establish a rebuttable presumption of meeting a heightened standard of care, clarifying liability before harms occur
EXPLANATION
Mansbach argues that her proposed verification system would provide legal clarity by establishing standards of care in advance rather than determining them after accidents occur. This would help address liability uncertainty that currently plagues AI deployment.
EVIDENCE
Example of Waymo liability case where juries must determine appropriate standards after the fact; contrast with verification providing predetermined standard of care
MAJOR DISCUSSION POINT
Standards of Care and Legal Framework Development
Argument 5
Insurance companies are refusing to cover AI products, creating potential market regulation through insurance requirements similar to aerospace industry standards
EXPLANATION
Mansbach identifies insurance market withdrawal as a significant regulatory force that could effectively govern AI deployment. She draws parallels to other industries where insurance requirements drive safety standards and market access.
EVIDENCE
Major insurers saying they won’t insure AI products due to lack of understanding; comparison to aerospace industry where AS9100 certification is required for insurance and 10% of satellite launch costs are insurance
MAJOR DISCUSSION POINT
Responsibility Distribution and Incentive Structures
DISAGREED WITH
Hiroki Hibuka, Gregory C. Allen
Gregory C. Allen
1 argument · 167 words per minute · 2623 words · 939 seconds
Argument 1
Companies may prefer willful blindness to avoid liability, while auditing creates costs and competitive disadvantages without clear benefits
EXPLANATION
Allen identifies perverse incentives in the current system where companies might avoid safety assessments to maintain legal deniability. He argues that commissioning safety reports could actually increase legal liability while imposing costs and delays.
EVIDENCE
Example of companies potentially losing lawsuits if they have reports showing products are unsafe versus winning if they never commission such reports; costs and competitive pressures to release faster
MAJOR DISCUSSION POINT
Responsibility Distribution and Incentive Structures
DISAGREED WITH
Hiroki Hibuka, Shana Mansbach
Agreements
Agreement Points
AI governance challenges are urgent and real-world impacts are happening now
Speakers: Stephen Clare, Hiroki Hibuka, Shana Mansbach
AI risks that were theoretical 1-2 years ago are now real with a billion people using AI globally, creating urgent governance needs
All countries face the challenge of regulating black box technology with unlimited risk scenarios and no clear benchmark standards
Current AI governance creates a trust problem for public, deployers, regulators, and developers due to uncertainty around system safety and reliability
All speakers agree that AI governance has moved from theoretical concerns to urgent real-world challenges affecting billions of users, requiring immediate attention from policymakers and stakeholders
Multi-stakeholder approach is necessary for AI safety and governance
Speakers: Stephen Clare, Hiroki Hibuka, Shana Mansbach
AI safety requires layered approach with different responsibilities across developers, deployers, and monitoring bodies rather than single actor accountability
Democratic debate is necessary to determine acceptable risk levels and testing measures, similar to discussions in automotive and aerospace industries
Independent verification through government-authorized marketplace of verifiers offers outcomes-based approach rather than procedural compliance
All speakers recognize that effective AI governance cannot be achieved by any single actor and requires coordinated efforts across multiple stakeholders including developers, deployers, regulators, and society
Current evaluation and testing methods are inadequate
Speakers: Stephen Clare, Shana Mansbach
Current evaluation methods have significant gaps and are often outdated, not informative about real-world risks due to AI’s stochastic nature
Traditional command-and-control governance fails due to AI’s speed of development and concentration of technical expertise in frontier labs
Both speakers acknowledge that existing methods for evaluating AI systems are insufficient to capture real-world risks and cannot keep pace with technological advancement
Information asymmetry between labs and external actors creates governance challenges
Speakers: Stephen Clare, Shana Mansbach
Information asymmetry exists between frontier labs with technical capacity and external actors needing transparency for effective governance
Traditional command-and-control governance fails due to AI’s speed of development and concentration of technical expertise in frontier labs
Both speakers identify the concentration of technical expertise in private companies as a fundamental challenge for independent oversight and governance
Similar Viewpoints
Both speakers recognize that market-based incentives, particularly through insurance and regulated sectors, could drive AI safety standards more effectively than voluntary compliance
Speakers: Hiroki Hibuka, Shana Mansbach
Financial incentives for independent auditing are lacking except in regulated sectors like healthcare, automotive, and finance where trust is essential
Insurance companies are refusing to cover AI products, creating potential market regulation through insurance requirements similar to aerospace industry standards
Both speakers acknowledge progress in AI safety measures while recognizing that different jurisdictions are taking varied but legitimate approaches to regulation
Speakers: Stephen Clare, Hiroki Hibuka
Technical safeguards are improving significantly with models becoming much harder to jailbreak and 12 leading companies now having frontier safety frameworks
Different countries take varying approaches – EU’s holistic regulation vs Japan/US sector-specific approaches, with Japan preferring ex-ante rules while US uses ex-post liability
Both speakers understand that current liability frameworks create perverse incentives that discourage proactive safety measures, requiring new approaches to align incentives with safety goals
Speakers: Shana Mansbach, Gregory C. Allen
Independent verification could establish a rebuttable presumption of meeting a heightened standard of care, clarifying liability before harms occur
Companies may prefer willful blindness to avoid liability, while auditing creates costs and competitive disadvantages without clear benefits
Unexpected Consensus
Technical progress in AI safety is substantial and measurable
Speakers: Stephen Clare, Hiroki Hibuka
Technical safeguards are improving significantly with models becoming much harder to jailbreak and 12 leading companies now having frontier safety frameworks
Different countries take varying approaches – EU’s holistic regulation vs Japan/US sector-specific approaches, with Japan preferring ex-ante rules while US uses ex-post liability
Despite the focus on challenges and risks, there was unexpected consensus that significant technical progress has been made in AI safety, with measurable improvements in safeguards and widespread adoption of safety frameworks by major developers
Insurance markets could become de facto AI regulators
Speakers: Shana Mansbach, Gregory C. Allen
Insurance companies are refusing to cover AI products, creating potential market regulation through insurance requirements similar to aerospace industry standards
Companies may prefer willful blindness to avoid liability, while auditing creates costs and competitive disadvantages without clear benefits
There was unexpected agreement that insurance companies withdrawing from AI coverage could create more effective regulation than government action, essentially forcing safety standards through market mechanisms
Need for new governance models beyond traditional regulation
Speakers: Stephen Clare, Hiroki Hibuka, Shana Mansbach
AI safety requires layered approach with different responsibilities across developers, deployers, and monitoring bodies rather than single actor accountability
All countries face the challenge of regulating black box technology with unlimited risk scenarios and no clear benchmark standards
Independent verification through government-authorized marketplace of verifiers offers outcomes-based approach rather than procedural compliance
All speakers converged on the idea that traditional regulatory approaches are insufficient for AI, requiring innovative governance models that blend public and private sector capabilities
Overall Assessment

The speakers demonstrated strong consensus on the urgency of AI governance challenges, the inadequacy of current approaches, and the need for multi-stakeholder solutions. They agreed on both the progress made in technical safeguards and the fundamental limitations of existing regulatory frameworks. There was notable alignment on the role of market mechanisms, particularly insurance, in driving safety standards.

High level of consensus with complementary rather than conflicting perspectives. The speakers built upon each other’s arguments rather than disagreeing, suggesting a mature understanding of AI governance challenges. This consensus implies that there is a clear foundation for developing new governance approaches that combine technical expertise, democratic accountability, and market incentives.

Differences
Different Viewpoints
Who should conduct AI safety evaluations and auditing
Speakers: Stephen Clare, Shana Mansbach
Information asymmetry exists between frontier labs with technical capacity and external actors needing transparency for effective governance
Independent verification through government-authorized marketplace of verifiers offers outcomes-based approach rather than procedural compliance
Clare acknowledges the information asymmetry problem but suggests partnerships that draw on lab knowledge while ensuring transparency, whereas Mansbach advocates for fully independent verification organizations to avoid the conflict of interest inherent in self-evaluation
Approach to AI regulation – holistic vs sector-specific
Speakers: Hiroki Hibuka
Different countries take varying approaches – EU’s holistic regulation vs Japan/US sector-specific approaches, with Japan preferring ex-ante rules while US uses ex-post liability
While not a direct disagreement between speakers, Hibuka presents fundamentally different regulatory philosophies across jurisdictions – EU’s comprehensive AI Act versus Japan/US preference for sector-specific regulation, and cultural differences between ex-ante rule-setting versus ex-post liability approaches
Incentive structures for safety compliance
Speakers: Hiroki Hibuka, Shana Mansbach, Gregory C. Allen
Financial incentives for independent auditing are lacking except in regulated sectors like healthcare, automotive, and finance where trust is essential
Insurance companies are refusing to cover AI products, creating potential market regulation through insurance requirements similar to aerospace industry standards
Companies may prefer willful blindness to avoid liability, while auditing creates costs and competitive disadvantages without clear benefits
The speakers identify different primary drivers for safety compliance – Hibuka focuses on regulated sectors and public procurement, Mansbach emphasizes insurance market forces and competitive advantage, while Allen highlights perverse incentives that discourage voluntary auditing
Unexpected Differences
Role of democratic processes in AI governance
Speakers: Hiroki Hibuka, Shana Mansbach
Democratic debate is necessary to determine acceptable risk levels and testing measures, similar to discussions in automotive and aerospace industries
Independent verification through government-authorized marketplace of verifiers offers outcomes-based approach rather than procedural compliance
While both speakers support government involvement in AI governance, Hibuka emphasizes the need for democratic debate to determine acceptable risk levels, whereas Mansbach focuses on technocratic solutions through independent verification. This represents an unexpected philosophical divide between democratic deliberation versus expert-driven governance approaches
Overall Assessment

The main areas of disagreement center on governance mechanisms (self-regulation vs independent verification), regulatory approaches (holistic vs sector-specific), and incentive structures (market-driven vs regulatory mandates). While speakers agree on the inadequacy of current systems and the need for multi-stakeholder approaches, they propose fundamentally different solutions.

Moderate disagreement with significant implications – the speakers share common concerns about AI safety and governance challenges but propose competing frameworks that could lead to very different regulatory outcomes. The disagreements are constructive and focus on implementation approaches rather than fundamental goals, suggesting potential for synthesis of ideas.

Partial Agreements
Both speakers agree that current evaluation and governance methods are inadequate for AI systems, but Clare focuses on improving existing technical evaluation methods while Mansbach proposes replacing the entire governance framework with independent verification organizations
Speakers: Stephen Clare, Shana Mansbach
Current evaluation methods have significant gaps and are often outdated, not informative about real-world risks due to AI’s stochastic nature
Traditional command-and-control governance fails due to AI’s speed of development and concentration of technical expertise in frontier labs
All speakers agree that AI governance requires multi-stakeholder approaches and that current systems are inadequate, but they propose different solutions – Clare advocates for layered responsibilities, Mansbach for independent verification marketplaces, and Hibuka for democratic debate on acceptable risk levels
Speakers: Stephen Clare, Shana Mansbach, Hiroki Hibuka
AI safety requires layered approach with different responsibilities across developers, deployers, and monitoring bodies rather than single actor accountability
Current AI governance creates a trust problem for public, deployers, regulators, and developers due to uncertainty around system safety and reliability
All countries face the challenge of regulating black box technology with unlimited risk scenarios and no clear benchmark standards
Takeaways
Key takeaways
AI safety has made significant technical progress with models becoming much harder to jailbreak and 12 leading companies now having frontier safety frameworks, but governance challenges are becoming more urgent as risks move from theoretical to real-world impacts
A fundamental trust problem exists across all stakeholders (public, deployers, regulators, developers) due to uncertainty about AI system safety and reliability, which traditional regulatory approaches cannot adequately address
AI governance requires a layered, multi-stakeholder approach with different responsibilities distributed across developers, deployers, and monitoring bodies rather than relying on any single actor
Independent verification through government-authorized marketplaces of verifiers could provide outcomes-based governance that is more flexible and democratic than traditional command-and-control regulation
Insurance companies are increasingly refusing to cover AI products, potentially creating de facto market regulation similar to aerospace industry standards, regardless of government regulatory approaches
Current AI evaluation methods have significant gaps and quickly become outdated, creating challenges for establishing reliable standards of care and best practices
Different countries are taking varying regulatory approaches (EU holistic vs Japan/US sector-specific), but all face similar challenges with black box technology and unlimited risk scenarios
Resolutions and action items
Need to establish financial incentives for independent auditing, particularly in regulated sectors like healthcare, automotive, and finance
Require democratic debate to determine acceptable risk levels and testing measures for AI systems
Develop marketplace of independent verification organizations (IVOs) that can provide right-sized compliance based on risk type and product size
Create standards of care that provide a rebuttable presumption of meeting heightened legal standards for verified AI systems
Increase transparency and information sharing between frontier labs and external actors to address information asymmetries
Leverage public procurement as an incentive mechanism by requiring government purchases of AI systems to be independently verified
Unresolved issues
How to balance the concentration of technical expertise in frontier labs with the need for independent oversight and verification
What constitutes acceptable risk levels for AI systems across different sectors and use cases
How to develop evaluation methods that can keep pace with rapidly evolving AI capabilities and remain informative about real-world risks
How to address the stochastic nature of AI systems in safety testing when the same query can produce different outputs
What the appropriate distribution of liability should be across developers, deployers, and end users when AI systems cause harm
How to create sufficient financial incentives for voluntary adoption of independent auditing without regulatory mandates
How to prevent the verification system from becoming a barrier that only protects incumbents while excluding smaller players
Suggested compromises
Implement a marketplace approach for independent verification that allows for right-sized compliance based on product risk and company size rather than one-size-fits-all requirements
Create partnerships between frontier labs and external actors that draw on internal technical knowledge while ensuring transparency and independent oversight
Develop sector-specific approaches that focus initial independent verification requirements on high-risk areas like healthcare, finance, and automotive before expanding to other sectors
Establish verification systems that provide liability clarity and insurance benefits as market incentives rather than relying solely on regulatory mandates
Use outcomes-based governance that sets safety goals while allowing flexibility in how those outcomes are achieved and measured
Thought Provoking Comments
The rubber is really hitting the road or something with these kind of systems. Risks that even a year or two ago might have been theoretical are now very real and we’re seeing emerging empirical evidence… There’s a billion people now using AI around the world.
This comment fundamentally reframes the AI governance discussion from theoretical future concerns to present-day reality. It establishes that we’ve crossed a threshold where AI risks are no longer hypothetical but are manifesting in real-world impacts with massive scale.
This set the foundational tone for the entire discussion, shifting focus from ‘what might happen’ to ‘what is happening now.’ It justified the urgency of all subsequent governance discussions and provided the empirical grounding that made other panelists’ policy proposals feel immediately relevant rather than premature.
Speaker: Stephen Clare
The real question is not whether or not to regulate AIs, but the real question is how to update our existing regulations and whether or not we need additional regulations targeting AI systems… all countries take the hard law approach and also all countries have soft laws.
This comment dismantles a common false dichotomy in AI governance debates between ‘regulation vs. no regulation’ and reveals the more nuanced reality of how different regulatory frameworks actually operate. It challenges oversimplified narratives about EU vs. US vs. Japan approaches.
This reframing shifted the conversation away from broad comparisons of national approaches toward more specific discussions about implementation mechanisms and stakeholder responsibilities. It elevated the sophistication of the governance discussion by focusing on practical regulatory design rather than ideological positions.
Speaker: Hiroki Hibuka
The current approach to tech governance is not equipped to handle this trust problem very well… AI moves really, really quickly, and even well-intentioned regulations are going to become outdated very, very quickly, and then there’s the technical capacity problem.
This comment identifies the fundamental mismatch between traditional regulatory approaches and the unique characteristics of AI technology. It articulates why existing governance models are structurally inadequate, not just temporarily behind.
This diagnosis of systemic inadequacy justified the need for entirely new governance models rather than incremental reforms. It set up the intellectual foundation for proposing independent verification organizations as a novel solution, making the subsequent detailed discussion of IVOs feel necessary rather than speculative.
Speaker: Shana Mansbach
Independent evaluation is essential… But it would be not easy to persuade corporate executives to use the independent audit without clear economic incentives… there is no clear financial incentives.
This comment cuts through idealistic proposals to identify the core practical barrier to implementation: misaligned economic incentives. It forces the discussion to confront the gap between what’s theoretically desirable and what’s practically achievable.
This observation redirected the entire conversation toward incentive design, leading to rich discussions about insurance, liability, procurement, and market dynamics. It grounded the theoretical governance models in economic reality and sparked the most practical parts of the discussion.
Speaker: Hiroki Hibuka
Right now we are seeing the big insurers saying we’re not going to touch this, we’re not going to insure any AI products, because we have no idea what’s inside of them… If you are a major bank and you are doing big, important financial transactions, as soon as you start using AI, you’ve lost all your insurance.
This reveals a critical but underexplored dimension of AI governance: how insurance markets are creating de facto regulation through risk assessment. It shows how market forces may impose constraints that formal regulation hasn’t yet addressed.
This insight opened up an entirely new thread about insurance as a governance mechanism, leading to discussions about how market-based incentives could drive safety standards. It demonstrated how economic forces might solve governance problems that political processes struggle with, adding a new dimension to the policy toolkit.
Speakers: Shana Mansbach and Gregory C. Allen
I think in many cases, these evaluations we’re using are already not super informative about real-world risk because they’re too narrow… And as models have become more capable and general and adopted more widely, this has become much more difficult.
This comment exposes a fundamental technical limitation in current AI safety approaches: the evaluation methods themselves are inadequate for assessing real-world risks. It reveals that even the technical foundations of governance are shaky.
This technical reality check sobered the discussion about independent verification, forcing acknowledgment that even well-intentioned third-party auditing faces serious methodological challenges. It added necessary complexity to proposals for verification organizations and highlighted the need for continued innovation in evaluation methods themselves.
Speaker: Stephen Clare
Overall Assessment

These key comments collectively transformed what could have been a superficial policy discussion into a sophisticated analysis of AI governance challenges. The progression moved from establishing empirical urgency (Clare’s ‘rubber hitting the road’), through dismantling false dichotomies (Hibuka’s regulation reframing), to proposing structural solutions (Mansbach’s verification organizations), then confronting practical barriers (economic incentives), discovering unexpected governance mechanisms (insurance markets), and finally acknowledging technical limitations (evaluation gaps). This created a comprehensive view that balanced optimism about governance solutions with realism about implementation challenges, ultimately producing a more nuanced and actionable understanding of AI governance needs.

Follow-up Questions
Where is the end? And to what extent do stakeholders have to manage the risks?
This addresses the fundamental challenge of determining acceptable risk levels in AI governance, as risks can never be completely eliminated and technology advances rapidly
Speaker: Hiroki Hibuka
How to design benchmarks and regulation methods for cutting-edge technologies that are black box with unlimited risk scenarios?
This highlights the need for new approaches to evaluate and regulate AI systems where traditional methods may not be sufficient
Speaker: Hiroki Hibuka
How do we assure broader adoption of safety frameworks, how do you ensure compliance, what do you do when there’s a lack of compliance?
This addresses the governance challenge of moving from technical solutions to ensuring widespread implementation across the industry
Speaker: Stephen Clare
How far are we from converting consensus on risks and interventions into standards and procedures for third-party evaluation?
This explores the gap between understanding AI risks and establishing accepted best practices for evaluation and auditing
Speaker: Gregory C. Allen
What kind of safety level would we require for autonomous vehicles compared to human drivers?
This illustrates the broader challenge of determining acceptable risk thresholds for AI systems across different applications
Speaker: Hiroki Hibuka
How to measure and define test methods for AI systems given varying complexity of use cases?
This addresses the technical challenge of creating meaningful evaluations that reflect real-world deployment scenarios
Speaker: Hiroki Hibuka
How do we reconcile the gap between frontier AI labs having unique expertise but also having misaligned incentives for self-assessment?
This explores the tension between technical expertise concentration and the need for independent evaluation
Speaker: Gregory C. Allen
Will third-party organizations be able to build economies of scale to advance safety and governance as technology evolves?
This questions whether independent auditing organizations can maintain technical competence as they become separated from frontier development
Speaker: Gregory C. Allen
How to create meaningful evaluations for fundamentally stochastic systems where outputs vary across identical inputs?
This addresses the technical challenge of testing AI systems that produce different outputs for the same input
Speaker: Shana Mansbach
How to evaluate AI safety when model outputs don’t directly correlate with user actions and outcomes?
This highlights the complexity of assessing real-world harm potential from AI system outputs
Speaker: Shana Mansbach

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Transforming Agriculture_ AI for Resilient and Inclusive Food Systems

Session at a glance
Summary, keypoints, and speakers overview

Summary

This discussion focused on how artificial intelligence can support the transition toward more transparent, responsible, and inclusive food systems, bringing together leaders from government, industry, academia, and international organizations. The session was co-hosted by the Netherlands and the OECD, with participants from Indonesia, India, FAO, and Wageningen University sharing diverse perspectives on AI’s role in agriculture and food security.


Ambassador Harry Verweij of the Netherlands emphasized how AI and digitalization offer enormous opportunities to increase productivity and sustainability in food production while enhancing climate resilience. He highlighted Dutch innovations like precision farming that can achieve up to 90% water savings through smart irrigation and predictive models for disease control. The OECD’s Sara Rendtorff Smith presented promising evidence from real-world AI deployments, including AI-enabled precision spraying that reduces pesticide use by up to 30% without compromising yields, and computer vision systems that cut herbicide use in half by targeting only weeds.


However, speakers acknowledged significant challenges in AI adoption across different regions. While 96% of Australian farmers use digital tools, only 12% of Chilean farmers do, highlighting a concerning digital divide. Professor Arwin Sumari from Indonesia outlined his country’s unique challenges, including 17,000 islands separated by oceans and unequal distribution of AI talent, while describing Indonesia’s seven-pillar AI roadmap focusing on regulation, ethics, investment, data, innovation, talent development, and use cases.


Debjani Ghosh from India’s NITI Aayog emphasized the need for problem-driven AI solutions rather than applying AI broadly to every challenge. She identified food wastage as a critical area where AI could make significant impact, noting the paradox that while the world produces enough food for 8 billion people, millions remain hungry due to distribution and access issues. Dr. Arun Pratihast from Wageningen University stressed the importance of making AI solutions work in low-tech farming environments, citing three main challenges: data scarcity, lack of trust from farmers, and scalability issues.


The discussion concluded that while AI offers vast potential for transforming food systems, successful implementation requires problem-driven approaches, local context consideration, farmer engagement, and building trust through transparency and responsible data collection practices.


Keypoints

Major Discussion Points:

AI’s potential to enhance food system resilience and anticipatory action: Speakers emphasized how AI can help predict and respond to agricultural shocks, climate challenges, and supply chain disruptions before they escalate, with examples including early warning systems for pests, diseases, and weather events.


The digital divide and inclusivity challenges in AI adoption: A central concern was ensuring AI benefits reach smallholder farmers and developing countries, addressing gaps in digital infrastructure, connectivity, and access to technology that could deepen existing inequalities.


Data governance, trust, and transparency issues: Multiple speakers highlighted the need for responsible data collection and sharing, farmer trust in AI systems, and the importance of explainable AI that farmers can understand and rely upon for decision-making.


Problem-driven vs. technology-driven approaches: Panelists stressed the importance of identifying specific agricultural problems first (like food waste reduction) rather than applying AI broadly, and ensuring solutions work in low-tech farming environments with local context.


International cooperation and ecosystem building: Discussion of collaborative frameworks between governments, industry, academia, and international organizations to scale AI solutions responsibly, with examples from Netherlands-Indonesia partnerships and OECD initiatives.


Overall Purpose:

The discussion aimed to explore how artificial intelligence can support the transition toward more transparent, responsible, and inclusive food systems, bringing together leaders from government, industry, academia, and international organizations to examine both opportunities and practical challenges in AI deployment for agriculture.


Overall Tone:

The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlook about AI’s transformative potential while acknowledging significant challenges. The discussion was collaborative and solution-oriented, with participants building on each other’s points and sharing concrete examples. There was a notable emphasis on urgency given global food security challenges, but the tone remained constructive and focused on actionable partnerships and policy frameworks.


Speakers

Speakers from the provided list:


Sara Rendtorff Smith: Session moderator, representing the OECD


Harry Verweij: Ambassador-at-Large and Special Envoy for AI of the Kingdom of the Netherlands, co-chair of the sixth working group on economic growth and social good


Dejan Jakovljevic: Chief Information Officer and Director of Digital FAO and Agroinformatics Division at FAO of the United Nations, based in Rome


Arwin Datumaya Wahyudi Sumari: Indonesian Air Force officer and professor at the State Polytechnic of Malang, co-inventor of the Knowledge Growing System (cognitive artificial intelligence framework)


Debjani Ghosh: Distinguished Fellow and Chief Architect of NITI Frontier Tech Hub


Arun Pratihast: Senior Researcher at Wageningen University Environmental Research


Speaker 5: Role/title not mentioned


Additional speakers:


Ambassador Fawai: Ambassador-at-Large and Special Envoy for AI of the Kingdom of the Netherlands (Note: This appears to be the same person as Harry Verweij, as Sara introduces “Ambassador Fawai” but Harry Verweij responds)


Full session report

This comprehensive discussion on artificial intelligence’s role in transforming global food systems brought together diverse perspectives from government, industry, academia, and international organisations to examine opportunities and practical challenges in AI deployment. Co-hosted by the Netherlands and the OECD, the session featured participants from Indonesia, India, FAO, and Wageningen University, each contributing unique insights into both the transformative potential and implementation challenges of AI in agriculture.


Setting the Strategic Context

Ambassador Harry Verweij of the Netherlands opened the discussion by positioning AI and digitalisation as powerful tools for addressing interconnected challenges in global food systems. He acknowledged working with colleagues from Indonesia as co-chairs of the sixth working group and expressed the Netherlands’ support for Indonesia’s ambition to join the OECD. The Ambassador emphasised that strengthening global food security represents a strategic priority for the Netherlands, particularly given the country’s role as both a major agricultural trader and innovation hub despite its relatively small size.


The Ambassador highlighted concrete achievements in Dutch precision farming, including AI-enabled smart irrigation systems achieving up to 90% water savings and predictive models for disease control that optimise crop yields whilst minimising inputs. The Netherlands’ approach demonstrates how advanced AI ecosystems emerge from the intersection of strong technical universities, innovative companies like ASML, NXP, and Philips, and collaborative partnerships between science, government, and industry. Crucially, he stressed that every country faces unique local challenges requiring tailored solutions, emphasising the importance of international cooperation and knowledge sharing.


Evidence-Based Potential and Current Applications

Sara Rendtorff Smith from the OECD provided compelling evidence of AI’s real-world impact in agriculture, drawing from studies across the EU and Southeast Asia. The data reveals significant environmental benefits without compromising productivity: AI-enabled precision spraying has reduced pesticide use by up to 30% whilst maintaining yields, and computer vision systems that distinguish between crops and weeds can cut herbicide use by half.


Beyond immediate farm applications, AI is revolutionising agricultural innovation itself. Researchers have identified drought-tolerant traits in crops, whilst AI platforms in Asia are shortening breeding cycles by predicting optimal combinations for enhanced resilience in vital staple crops. The OECD’s work also highlights AI’s role in strengthening entire food supply chains through enhanced traceability, market transparency, and smart logistics systems that can reduce losses, improve compliance, and strengthen food safety.


Indonesia’s Comprehensive National Approach

Professor Arwin Datumaya Wahyudi Sumari provided detailed insights into Indonesia’s unique challenges and comprehensive response strategy. As he explained, Indonesia faces extraordinary logistical challenges: “17,000 islands separated by ocean. We only have 36% of land, 64% of water, and 100% of air.” The country operates across three time zones, has unequal distribution of AI talent, and faces significant price variations where rice costs can multiply several times in remote eastern regions compared to western areas.


Indonesia’s response involves a sophisticated seven-pillar national AI roadmap encompassing regulation, ethics, investment, data governance, innovation, talent development, and specific use cases. This framework explicitly embraces multi-stakeholder collaboration through what Sumari described as a “helix” model involving government, industry, academia, media, and communities. The approach includes AI systems for predicting soil conditions before opening new agricultural land—part of the president’s program to develop almost 1 million hectares of new rice fields—and optimising fertiliser content for different crop types.


Significantly, Professor Sumari distinguished between “smart farming” and “intelligent farming,” noting: “We don’t say smart farming. Smart is not really intelligent. Intelligent is different.” This distinction represents a more sophisticated understanding of AI’s potential to create adaptive, learning systems through what he calls the “Knowledge Growing System”—his co-invention that evolves with local conditions and farmer needs rather than simply automating existing processes.


Reframing the Food Security Challenge

A critical insight emerged regarding the fundamental nature of global food security challenges. Both Debjani Ghosh from India’s NITI Aayog and Dejan Jakovljevic from FAO highlighted a crucial paradox: whilst the world produces sufficient food to feed 8 billion people, approximately 700 million people remain hungry. This suggests that the primary challenge is not production capacity but rather distribution, access, and waste reduction.


Ghosh argued that the biggest problem to address through AI is food wastage throughout supply chains, requiring focus on logistics, cold chain infrastructure, and transportation optimisation rather than simply increasing agricultural output. This perspective represents a fundamental shift from traditional agricultural AI applications that focus primarily on farm-level productivity improvements to addressing systemic inefficiencies in food distribution networks.


The Digital Divide and Trust Challenges

However, the discussion revealed a stark digital divide that threatens to deepen existing inequalities. Whilst 96% of farmers are using digital tools in Australia, only 12% do so in Chile, illustrating how technological advancement can exacerbate global disparities. As Jakovljevic observed, this divide has become an existential issue where exclusion from digital ecosystems increasingly means exclusion from economic and social systems entirely.


The challenge is particularly acute because, unlike previous technological transitions, it is no longer possible to operate effectively outside digital ecosystems. This reality means that farmers and communities without access to AI technologies face not just reduced opportunities but potential complete marginalisation from modern agricultural systems.


Trust represents perhaps the most critical factor in successful AI adoption. Multiple speakers emphasised that farmers must have control over how their data is collected, shared, and used, and that AI systems must be explainable and transparent rather than operating as “black boxes.” This requirement extends beyond technical explainability to encompass genuine farmer participation in AI system development.


The Problem-Driven Approach Imperative

A critical theme throughout the discussion was the need for problem-driven rather than technology-driven approaches. Ghosh articulated this challenge directly: “the biggest problem with AI today is that we throw AI at every problem that exists and expect that something will happen out of it.”


Dr. Arun Pratihast from Wageningen University reinforced this perspective through concrete examples from field research across Asia, Africa, and Latin America, including work on the World Cereal Project with the European Space Agency. He identified three fundamental challenges preventing effective AI implementation: data scarcity at local levels, lack of trust between farmers and AI systems, and scalability issues that prevent successful pilots from expanding to broader applications.


Pratihast’s research demonstrates that whilst AI models may work effectively at global scales, they often fail when applied to local contexts where farmers operate. This failure occurs because farmers’ expectations and needs differ significantly from what AI developers anticipate, leading to advisory systems that farmers don’t trust or follow. The solution requires engaging farmers in the development process and ensuring that AI systems work effectively in low-tech environments with limited connectivity.


International Cooperation and Practical Solutions

The discussion highlighted the essential role of international cooperation in scaling AI benefits globally. The OECD’s work on developing AI policy toolkits and maintaining policy navigators covering over 2,000 policies across 80 jurisdictions demonstrates the complexity of creating interoperable systems that can support cross-border applications.


Promising solutions are emerging to address implementation challenges. The development of multilingual AI advisory services accessible through basic phone calls rather than smartphones demonstrates how AI can be made more inclusive. Similarly, the focus on developing AI solutions that work in low-connectivity environments whilst gradually building digital infrastructure represents a pragmatic approach to bridging the digital divide.


The emphasis on creating centres of excellence focused on specific problems rather than generic AI applications offers another pathway forward. Rather than establishing broad AI centres, the focus should be on problem-specific centres addressing challenges such as cold chain optimisation, climate-resilient crop development, or supply chain waste reduction.


Unresolved Challenges and Future Directions

Despite promising developments, several critical challenges remain. The lack of adequate data sharing mechanisms, particularly in developing countries, continues to limit AI effectiveness. Scaling successful AI pilot projects beyond initial implementations remains a persistent challenge, suggesting that current approaches to technology transfer and capacity building require fundamental revision.


The balance between horizontal AI governance and sector-specific agricultural regulations across different jurisdictions also requires further development to ensure coherent and effective policy frameworks. Fragmented data governance frameworks present particular challenges for AI tools that support trade, traceability, and resilient food supply chains across borders.


Conclusion: Towards Inclusive and Resilient Food Systems

The discussion concluded with recognition that whilst AI offers vast potential for transforming food systems, realising this potential requires fundamental shifts in approach. Success depends on moving from technology-driven to problem-driven approaches, ensuring that solutions are developed with rather than for farmers, and creating governance frameworks that promote both innovation and inclusion.


The path forward requires sustained international cooperation, significant investment in digital infrastructure and capacity building, and continued focus on ensuring that AI benefits are broadly shared rather than concentrated among those who already have access to advanced technologies. Most importantly, it requires recognition that AI is not a panacea but rather a powerful tool that must be deployed thoughtfully and responsibly to address specific, well-defined challenges in global food systems.


The convergence of perspectives from government, industry, academia, and international organisations around these principles suggests growing maturity in understanding AI’s role in agriculture, providing a foundation for coordinated action to ensure that AI contributes to building food systems that are more productive, sustainable, transparent, responsible, and inclusive for all stakeholders.


Session transcript
Sara Rendtorff Smith

Session started. Thank you. the Netherlands, and Indonesia, as you’ll see reflected on the panel. And together with our distinguished panelists, we’ll explore how artificial intelligence can support the transition towards food systems that are more transparent, responsible, and inclusive. So this session is bringing together leaders from government, industry, academia, and international organizations to examine both opportunities and the practical challenges ahead, from data sharing and infrastructure to governance frameworks and the partnerships needed to ensure that AI benefits are broadly shared. And before we begin the panel discussion, it’s my honor to invite His Excellency, Ambassador Fawai, Ambassador-at-Large and Special Envoy for AI of the Kingdom of the Netherlands, who will deliver welcome remarks. Welcome, Ambassador.

Harry Verweij

Thank you, Sarah. Is this working? Yeah. Thank you all for sharing this wonderful moment for me, because we’re here with Madam Ghosh and Professor Sumari from Indonesia. Together we formed the chair and co-chair of the sixth working group on economic growth and social good in preparation for the summit. And I just wanted to say how much I was impressed with you, Madam Ghosh, how you managed the working group and how the outcomes were drafted and delivered, especially also delivered in the plenary. It’s not up to me, but I say well done. Really great. But thank you very much. It was really a wonderful journey with you. So, ladies and gentlemen, the use of digitalization and artificial intelligence in agriculture is developing rapidly.

It offers enormous opportunities to increase the productivity and sustainability of local food production. It offers opportunities to improve nature conservation and to foster climate resilience in an inclusive and sustainable way. It also contributes to the autonomy and stability of countries. For the Netherlands, strengthening global food security is a strategic priority. Reliable, sustainable, and affordable food systems are essential for societal stability and economic development, particularly in vulnerable regions. The ambitions in our digitalization agenda for agriculture, nature conservation, and food are to connect digitalization to the transition of agriculture needed for more food security, reduction of environmental impact, and climate resilience via public and private investments. Our primary focus is on increasing productivity with lower environmental impact and improving climate adaptation, strengthening the resilience of food systems through responsible use of AI and digital technologies.

Concerning today’s topic, the Dutch ambition is to enhance food security by making food systems more resilient and sustainable for all stakeholders. In my vision, digitalization and AI are powerful tools for that. They have already proven that they can significantly increase food productivity and reduce food losses. In addition, AI solutions can enhance the efficiency and resilience of food systems by supporting farmers to respond to sustainability requirements, make risk assessments, implement sustainable farming practices, and enable them to provide trustworthy and quality data sets about those efforts to be shared throughout the supply chain. The Netherlands has a strong AI ecosystem. Thanks to our technical universities and partners, we have a strong ecosystem of AI and companies like ASML, NXP, and Philips.

Despite its relatively small size, the Netherlands is not only a huge trader in agricultural produce, but also a global key player in agro-innovation and technology development, due to the interaction between plant and animal science and technological knowledge systems in the Netherlands. Companies, science and government invest mutually in solutions for societal challenges. Examples include precision farming with AI, such as water savings of up to 90% through smart irrigation, optimal crop yields with minimal input, and predictive models for disease control. To support digitalization in the agricultural sector in low- and middle-income countries, the Netherlands facilitates Dutch ICT agribusinesses to collaborate with businesses and startups there. And as we are aware in the Netherlands, strong ICT ecosystems and highly innovative agricultural ecosystems come together.

ICT agricultural solutions combine the in-depth agricultural knowledge and advanced technology development in my country. Examples are applications for early warning of pests and diseases, optimization of water use, and optimized plant breeding processes. Dutch companies and knowledge institutions are open to co-work on tailor-made solutions. Every country has its own typical local challenges and requires tailor-made solutions. Today special attention will be drawn to AI-powered solutions for small farmers and SMEs in producing countries in order to enhance their access to global agricultural supply chains while protecting their data. Our goal is to improve the ICT ecosystem in our country. We are committed to work together on this through knowledge sharing, cooperation, co-creation and capacity building so that AI solutions are locally relevant, inclusive and accessible to farmers.

The need for an inclusive AI has also been central to our discussions in the working group on Economic Growth and Social Good leading up to the summit. It fits well the summit motto: people, planet and progress. So I would like to thank India for its leadership in focusing on an inclusive AI future and underline that the Netherlands stands ready to contribute by forging concrete partnerships, sharing knowledge and technology while striving for measurable results, in order to ensure that AI serves all of humanity. And I recall the Honourable Prime Minister’s speech in the plenary, to which he alluded as well.

Ladies and gentlemen, we are honored to organize this important event together with the OECD, the go-to organization when it comes to AI governance, and to discuss the opportunities for international knowledge sharing and cooperation with FAO, the Wageningen University in the Netherlands, and the distinguished co-chairs of the Working Group on Economic Growth and Social Good, India and Indonesia. We warmly thank India for hosting this summit and look forward to continuing and strengthening our cooperation in the field of AI and agriculture, both bilaterally and within the Global Partnership on AI. We also thank our co-chair Indonesia for continuing cooperation, and we would like to highlight our appreciation and firm support of Indonesia’s ambition to join the OECD and its commitment to global standards and evidence-based policymaking.

International knowledge sharing and cooperation is needed to accelerate the development and application of new technologies with the help of trustworthy AI. Having AI and agricultural ecosystems on the agenda in this important AI summit is extremely valuable and a step forward in order to make a positive impact for all stakeholders. I wish you a fruitful meeting and look forward to our conclusions, and thank you for this opportunity to listen. So the floor is now yours, Sarah.

Sara Rendtorff Smith

Thank you, Ambassador. And on behalf of the OECD, I just want to thank once again the Netherlands for the leadership in convening this timely discussion. And as was just reflected in the Ambassador’s remarks, the Netherlands is obviously a pioneer in advancing food and agriculture innovation, and we are so delighted to have them as co-chairs as well of the OECD-FAO Advisory Group on Responsible Agricultural Supply Chains. From the OECD’s perspective, we clearly see this dynamic of agriculture and food systems today operating in an increasingly volatile environment, and farmers face a wide variety of shocks, from droughts, floods, and pests to conflicts and economic crises, with growing frequency and severity. And so therefore strengthening resilience while also ensuring inclusion, as was also stressed by the Ambassador, is really an urgent global priority that I hope we can talk about today.

AI in this regard offers significant potential. We’re seeing AI systems and tools being applied to optimize the use of critical resources, as was already mentioned, such as water, fertilizer, and pesticides, and also to reduce environmental pressure while enhancing productivity. The OECD and GPAI, which also met today in a ministerial session, have been examining AI use cases in agriculture with a focus on the EU and on Southeast Asia, and we continue these dialogues. And what we’re seeing there is that the evidence from real-world deployment is really, really promising. So, for example, AI-enabled precision spraying has reduced pesticide use by up to 30 percent, and this is actually without compromising yield, while computer vision green-on-brown systems can cut herbicide use by up to half by targeting only the weeds that require the treatment and thus not the crops.

And in addition, we’re seeing how forecasting, monitoring, and early detection of climatic and biological threats means that AI systems can strengthen our capacity to respond to crises before they even escalate, so some degree of preemption. AI is also revolutionizing agricultural innovation itself and supporting more efficient plant breeding that can develop climate-adaptive varieties in a fraction of the traditional time. And here we also have some interesting data, seeing in Central Europe that researchers have identified drought-tolerant traits in crops such as sorghum and chickpea that boost yields by up to 25% during end-season drought. And in Asia, meanwhile, we’re also seeing a global AI hybrid rice platform demonstrating how AI can shorten breeding cycles by predicting optimal parent combinations and enhancing resilience in one of the world’s most vital staple crops.

Beyond the farm gate, AI is also reinforcing the resilience of our entire food supply chains. And AI-enabled traceability, market transparency, and smart logistics can reduce losses, improve compliance, and strengthen food safety systems. Evidence from these digital traceability initiatives across the OECD members demonstrates a growing maturity of exactly these systems, so something really to look out for. But technology alone, as we know, does not ensure impact, and so adoption is where we’re really looking now, and that remains quite uneven still. And this is obviously why we’re all here in Delhi. So while we’re seeing in Australia that 96% of farmers are using digital tools, the same number for Chile is just 12%. And this is highlighting a digital divide that could deepen existing inequalities if we don’t look to address it.

There are also important challenges in the use of AI, and this goes back to sort of the core work of the OECD, looking not just at the benefits but also the challenges associated with AI. Farmers and regulators need transparency in how AI systems make their decisions, but at the same time fragmented data governance frameworks introduce complexity to the use of AI tools that support trade, traceability, and resilient food supply chains across borders. And this highlights the need for greater interoperability, which is also a theme at this summit. So structural barriers, including high cost, limited digital skills, and lack of trust, are some of the things that continue to slow the uptake of AI.

So bridging these gaps, which should be a priority for all of us, requires investment in connectivity and other digital infrastructure, in skills, and in affordable solutions, so that smallholders, women, and farmers in remote areas, who play a critical role in enhancing global food security, are able to also benefit from AI’s potential. And farmers must be able to trust that their data is collected, shared, and used responsibly. So in this area, the OECD is working to help countries put in place policies that promote these objectives through an AI policy toolkit. And this toolkit will provide practical, context-specific guidance to countries. The toolkit builds on our policy navigator. If you haven’t already visited it, it’s on oecd.ai. And it so far covers more than 2,000 policies across 80 jurisdictions.

So this is where you can find examples of national AI strategies, but also in specific sectors. And we continue to update this, and for anyone in this room representing a country not represented, we encourage you to visit and to also contribute your policies. We’re also advancing work on digital governance in agriculture. This is within GPAI, which I mentioned earlier, a priority there, where we examine governance models across countries and their applications for responsible digital transformation more broadly. We also see strong complementarities with the Global AI Impact Commons, which is a key deliverable of this summit, and which shares concrete use cases of AI with known impact and scaling potential. So for the OECD, advancing trustworthy AI consistent with our OECD AI principles requires a strong enabling ecosystem alongside technological progress.

And what we’re seeing is that if we succeed, we’re really in a position to raise productivity sustainably and also strengthen resilience in agricultural supply chains, including by ensuring that the benefits of innovation are widely shared and existing divides are not deepened in the process. So I really look forward to this panel’s insights to help us take this conversation forward, looking at practical pathways to achieve this vision. And with this, it’s my pleasure to introduce our esteemed panel. Many have traveled far to be here. So first, I would like to introduce Professor Arwin Sumari, who is an Indonesian Air Force officer and professor at the State Polytechnic of Malang. Welcome. And also we have with us, next to Professor Sumari, we have Mr.

Dejan Jakovljevic. He’s Chief Information Officer and Director of Digital FAO and Agroinformatics Division at FAO of the United Nations, based in Rome. We also have with us today Ms. Debjani Ghosh, Distinguished Fellow and Chief Architect of the NITI Frontier Tech Hub. And finally, it’s my pleasure to introduce Dr. Arun Pratihast, Senior Researcher at Wageningen University Environmental Research. So welcome to this session. And what we will see today is each of our speakers bringing a unique perspective on how AI can help build food systems that are resilient and inclusive, which is the topic of the session. And after the panel discussion, I will also be giving the floor to anyone in the room who might have questions.

So now let’s begin. I’ll hand the floor over to Dan, who will set the scene for the conversation. Dan, you have the floor.

Dejan Jakovljevic

Thank you very much. And I would like to welcome everyone on behalf of the Food and Agriculture Organization. Thank you to our hosts here at the summit from India, but also the OECD and the government of the Netherlands. Ambassador, thank you. When we look at agri-food, I heard in the interventions before about agriculture and food. We look at agri-food systems from the FAO perspective. Why? Because if we look at agriculture, food is one product, but not the only one. There is a whole ecosystem behind agriculture of products that are not necessarily food, and they are equally important when we make considerations, when we look for example at water use, transport and many others. So from the agri-food systems perspective, AI brings us fantastic opportunities. And if we look at our topic today in terms of inclusiveness and resilience, inclusiveness is still a big issue. If we just think back maybe two, three years, before the, let’s say, ChatGPT came out, the inclusiveness and the digital divide was still strong and present.

And the key issue is that it used to be possible to exist outside of the digital ecosystem. We all know we could maybe go to the bank, but nowadays it’s not. So if a farmer or communities are outside of the digital ecosystem, they suddenly are outside of any ecosystem, almost. And now with the AI, it makes it even worse. So this is something we need to continue to press on, jointly, in making sure that everybody has equal opportunity within the digital ecosystems. And on the positive, let’s say, note, on the AI when it comes to inclusiveness, we see very encouraging opportunities with AI. What I mean by that is we can, in fact, lower the entry barrier to knowledge.

Just two days ago, I’ve seen here actually this opportunity at the event, great advancements: the new tool that was produced by the government of India where farmers can, with a phone call, as not everybody has a smartphone, get advisory in the area of agriculture, from shrimp cultivation to pest diseases and similar. So this is great. The service can be in many languages. So this is a fantastic example where AI can help us actually lower the entry point to AI. At the same time, for governments, it’s even more difficult to have the capacity to build the AI infrastructure to provide such services. So this is, again, I think one area, and forums like this help us consider what it takes to build it.

When we look at resilience specifically, I was very happy to hear resilience mentioned in the previous openings, and from the Ambassador we heard on anticipation. So I would say this is the key word. The key word is anticipation: anticipate the shocks to the agri-food systems that impact food security. We know we have natural disasters. We know we have also conflicts. We have many different factors that impact agri-food systems. So we need to build systems that are capable of absorbing the shocks of these situations, and anticipatory actions so that when the shocks happen, something can be done to go over these shocks. So this is where AI can be a great enabler, where we can then, with new capabilities, anticipate these shocks, and with the help of data and our joint work, really put together decision-making tools, anticipatory tools, situation rooms, to be able to quickly not only anticipate, but when something happens, we don’t really improvise, but we have tools in hand to address these situations.

We still have about 700 million people without food on the table today. So from this perspective at FAO, and I am sure we share the same sense of urgency, we want to actually do something. I wanted to say that we are very grateful to be part of this conversation, and thank you for your time. We can work together in finding new solutions. I thank you for that, and I am looking forward to our panel. Thank you.

Sara Rendtorff Smith

intelligence research group and are the co-inventor of the Knowledge Growing System, a cognitive artificial intelligence framework designed to enable adaptive and evolving decision making. So from Indonesia's vantage point, we would be interested to hear where you see the most significant AI capability gaps across the agricultural system, and, at the same time, where you see the greatest opportunities for AI to make food supply chains more efficient and resilient, something we also heard as a priority. We also know that Indonesia is one of the countries advancing an ambitious AI agenda. So could you briefly outline the key pillars of Indonesia's AI roadmap, and explain how you are balancing horizontal AI governance with more sector-specific regulation in agriculture?

Over to you. Thank you.

Arwin Datumaya Wahyudi Sumari

Thank you, Sarah. First, I would like to deliver my appreciation and congratulations to the host, India; to my chair, Ms. Ghosh; to my dear colleague from the Netherlands, Ambassador Harry Verweij, for co-chairing our working group together; and also to the other speakers, to Sarah, and to our audience. Regarding your question about artificial intelligence for Indonesia: as we already know, Indonesia is not only an agricultural but also a maritime nation. We were self-sufficient in rice about 20 to 30 years ago, and back then it was not AI that made our country self-sufficient in rice. But now AI is something that can help our program to become a self-sufficient country in rice again.

We are much aware that AI technology is developing very fast, not only in America or Europe, but also in Asia, and especially in Indonesia. This rapid development and its application across all potential agricultural areas present significant challenges, especially given that the potential locations are separated by ocean. As you may know, Indonesia has 17,000 islands separated by ocean. We have only 36% land, 64% water, and 100% air. If you don't believe me, you can count the number of our islands. This is a challenge for us. And we have another challenge: we are living above the ring of fire. There are other challenges for the people of Indonesia as well. As I mentioned previously, this gap is further widened by the lack of evenly distributed AI-supporting infrastructure, such as telecommunications.

We have three time zone regions: the western, central, and eastern regions, each differing by one hour from the next. There is also the problem of unequal distribution of AI talent; I think this problem exists not only in Indonesia but all over the world. In terms of the biggest opportunities for utilizing AI in the food supply chain, especially in an agricultural country like Indonesia, there are several efforts to pursue. For example, we can use AI for prediction of soil condition and nutrition before opening new land for agriculture. Our president has a program to open almost 1 million hectares of new rice fields on some of the big islands of Indonesia in order to achieve self-sufficiency in the next five years.

We can also use AI for prediction of the most appropriate food crops given the soil condition and nutrition of existing agricultural land. We have seventeen thousand islands, and each island has different soil conditions and different soil nutrition. We can use AI to predict what kind of nutrients and soil conditions belong to that soil, so that we can predict the proper crops, the proper plants, to be planted in that area. The second opportunity is optimizing the fertilizer content to produce the best harvest, as well as optimizing the volume of water required according to the type of fertilizer given. Some of my students did experiments on how to predict the percentages of fertilizers combined together to get the most optimal production for any kind of crop,

whether it is corn, rice, or sweet potato. We can also use AI for intelligent farming. We do not say smart farming; smart is not really intelligent. Intelligent is different: there is knowledge that has to be grown in the system, just like a human grows knowledge within the brain. Intelligent farming spans from optimizing the seed planting in the land, so that plants can grow and develop healthily to produce the best products, to optimization of the harvest process, all the way until delivery to logistics warehouses. It is an end-to-end mechanism. We can also predict weather dynamics, such as the onset of floods, to obtain the right conditions.

That gives the right conditions for planting seeds and reduces the level of crop failures. Crop failures often happen when farmers fail to predict what kind of pest will come, what type of soil they have, and so on. The last opportunity is optimizing logistics and transportation routes to reduce operational and other unnecessary costs. Consider the operational cost of delivering crop production from one island to another in Indonesia: the price in the eastern area can be double or triple. Rice that costs only $1 in the west can be $3, $5, or $6 in the eastern area. That is why we need AI to optimize transportation and logistics routes,

whether by water, over the ocean or sea, or by air. Regarding policy and regulation, you asked about the AI roadmap, and about how to balance horizontal AI governance with sector-specific regulation in agriculture. We are proud: Indonesia is proud to be a leader in our region, exploring how AI policy and regulation can be powerful tools for promoting trustworthy AI, especially in critical verticals like the agricultural sector. Agriculture is very important to Indonesia because most of the people in Indonesia are farmers, not only on Java Island but also on the other big islands. There are five big islands in Indonesia: in the western area, Sumatra, then Java; in the central area we have Borneo and also Sulawesi, or Celebes; and the biggest one, in the eastern area, is Papua, which still has much area that can be explored to become rice fields. Our national AI roadmap is not merely a technological blueprint; it is a strategic framework designed to create an ecosystem that harnesses AI for inclusive and resilient systems, including the food system. There are two keywords here: inclusive and resilient. Inclusive means AI must be transparent; AI must be explainable.

We have been having problems with neural network-based systems whose black-box behavior cannot be explained in plain terms. The second keyword is resilient. This is very important for an agriculture-based nation. The implementation of AI needs a strong and sustainable national ecosystem; as my dear colleague the Ambassador mentioned, AI cannot be implemented or applied without a strong and sustainable ecosystem that brings together all stakeholders: not only government, but also business, industry, communities, media, and academia. So we have a helix concept; maybe you have heard about the quad helix, five helix, or six helix. That is very important. When we were developing the AI roadmap, the government, in this case the Ministry of Communication and Digital Affairs, opened a voluntary contribution from all stakeholders: not only the government but also industry, academia, media, and communities. Our roadmap has seven pillars. The first is AI regulation. The second is AI ethics; that is important. The third is investment, as was mentioned before about financing. When I attended a forum of AI experts, they mentioned financing: financing is very important, and without it there is no AI ecosystem. So, financing and investment. The fourth is AI data, the fifth AI innovation, the sixth AI talent development, and the last is AI use cases. Because we embrace all stakeholders, we ensure there is no one left behind.

Thank you.

Sara Rendtorff Smith

Thank you very much, Professor; we can come back to those in more detail later, perhaps in the Q&A, but I really want to thank you for sharing the promising use cases from Indonesia, very instructive I think for this discussion. You talked about the helix and how we work together, so I would now like to turn to the industry perspective from Ms. Ghosh. India, as we mentioned, also co-chairs the summit working group. We are seeing AI quickly becoming foundational to agricultural productivity and food security, but the big question now is whether, as we mentioned, it will deepen inequalities or indeed democratize the opportunity. So from your vantage point, Ms.

Ghosh, what practical steps are needed to broaden access to AI capabilities so that emerging economies and smallholder farmers can also benefit and fully participate? And as adoption accelerates, hopefully broadly, how should public-private partnerships evolve to scale responsible AI deployment and prevent an AI divide? Thank you.

Debjani Ghosh

That is a question with a very long answer; I will try to keep mine very short. But before I do, I have to acknowledge the presence of, I think, one of the biggest experts in this field of agriculture, here in this room: Professor Ramesh Chand, who is also a very esteemed member of NITI Aayog. And I requested him not to come for this session; I am going to be too nervous with you sitting right in front of me. But yes, let's see if we live up to his expectations or not. You know, the biggest problem with AI today is that we throw AI at every problem that exists.

And we expect that something will happen out of it. As a result, we generalize the technology a bit too much. The thing with AI is that if you really want to unlock the technology, you have to know what exactly you are solving for. What problems? And then you have to go deep, because there is so much that has to come together for AI to work. For example: is the data in place? How good is its quality? Is the ecosystem in place? Are capabilities in place? AI requires investments, and it is a pretty deep investment overall. So it is very important to understand what problems you want to solve with AI. And I think that is one of the biggest issues today, because we are not taking the time to think it through.

We keep saying AI is the magic word for everything. So now let's look at the food system, and I hope I am correct, Professor Chand; I have learned this partly from you. I think the biggest issue today is that while the world is producing enough food to feed, I think, 8 billion people, there are still millions and millions who are hungry. So there is a paradox. And when you start breaking it down further to understand the exact problems, to understand why this exists, you arrive at distribution: the entire question of access to food. There is surplus and there is deficiency, and there is no bridge to ensure that distribution happens in real time, where it is needed.

And what this results in is a tremendous amount of food shortage and food wastage. Some of the culprits, when we think about it: geopolitical wars are a big culprit, conflicts are a big culprit, but climate is another big one. So this is how I look at it, because I look at everything from a tech lens; I am by no means an expert in the domain. When I look at it from a technology lens and ask how to best apply the technology to this problem, this is the domain we have to play with. And when I ask where I want to go deep, the biggest problem to solve for in the food supply chain, according to me, purely looking at it from a tech lens, is wastage.

How do I bring down food wastage? What role can AI play in bringing down food wastage? So then you start looking at logistics; you start looking at supply and at the cold chains that exist globally, or do not. You start looking at trade, and at geopolitical agreements, because all of that will come into play. Now, in terms of industry coming together to solve problems with AI: again, if you want the best out of industry, you have to ensure that there is alignment on the problem statement you want to solve. Otherwise everyone will come and do the same pilot everywhere. That is what is happening today. When you look at AI executions around India and around the world, and because of the AI commons that we have built, every country is trying out the same thing: farmer advisory, right?

Every country is trying it out, but why is it not scaling? Why are we not solving other problems? So again, it is very important to identify the problem statement. How do you ensure that when industry gets involved there is a route to market, and a route to commercialization? That becomes very important for industry. And one of the things we advocate is setting up a center of excellence, a center of innovation, that is chartered to solve specific problems. One of the problems we have with COEs today is that you have AI COEs and blockchain COEs; I really don't understand what that means. But what if we had a COE whose mandate was to ensure that the cold chain problem is solved across the country?

How do we have a COE that ensures that climate-resilient crops can be grown in XYZ areas? And then bringing the industry together to ask how we collaborate to create that, I think, gives you the right kind of outcomes. Thank you.

Sara Rendtorff Smith

Thank you very much, Ms. Ghosh. And this is a perfect segue, I think, to our next speaker, turning to the research community and how to bridge research on advanced AI into more practical tools. So, Dr. Pratihast, I would like to turn to you now for some examples of how these advanced AI tools can really be put to good use in more low-tech farming environments. Maybe you can give us some concrete examples of what distinguishes those who succeed from those who don't, speaking also to some of the points that Ms. Ghosh raised. Thank you.

Arun Pratihast

Thank you. And thank you for the invitation; it is a very timely discussion. Of course, when we talk about AI, we often talk about the technology: how fast the models are, how big the datasets they can handle, what the parameters are. That is what we always talk about. But think about the food system: as Terry mentioned, the food system has different layers, and at the bottom of those layers are smallholder farmers. Those farmers operate in a different environment. Look at last year: billions of euros of investment went into the tech industry to build more models. Did the same happen for smallholder farmers? No. So there is often a problem that what we solve in the server room, on the computer, doesn't work in the field.

So we really need to think about how the AI and the models we are developing can be applicable at the grassroots level. Within Wageningen, and personally, I have been working in Asia, Africa, and Latin America, and there are basically three problems we are facing in this whole AI domain nowadays. First is data scarcity. There is still not enough data; the data is not shared. As you mentioned, there is no ecosystem, no fair infrastructure where data can be shared, and that hinders the models. A model may work on a global scale, but when you want to work at the local scale it doesn't work; it doesn't provide the input that smallholder farmers expect.

Second is trust. Often the farmers don't feel ownership, and the model's output and the farmers' expectations differ, so there is often not much trust in applying it at the local level. That is why most of these advisories fail: farmers don't follow the advisory because it doesn't make sense to them. And the third thing is scalability. Scale is not only technical scale; processing something fast doesn't mean it can be applied the same way everywhere. So we need to think differently. Let me give three concrete examples. One example is about food security: we need to understand the map.

Where are the crops? There is no global map that is accurate enough. So, with the help of the European Space Agency, four years ago we started the WorldCereal project, where we try to map global cropland. The maps are still not perfect, because India, China, and many other countries do not share their data, and if there is no data, even a fantastic model falls short: we have built very nice geo-embeddings with NASA Harvest, but the applicability of the model is still very low in those countries. The second example is about high-tech solutions in low-tech environments, for example the chocolate industry: cocoa agroforestry is really suffering from climate change. We have established advisory services there, built not from the researcher or tech perspective but by engaging the farmer perspective, and that works. We basically built a chatbot in their language that really understands what they need and how we can translate their problems; they know which diseases are coming, so we are using computer vision from their lens and then training on it, and that works. So there are a couple of things we really see: if you want to make these things work, you need to make sure the solutions work in a low-tech environment.

For the most part, connectivity has gone up; people are on social media. But the data is still not there; the data infrastructure is not there. And the tech industry, and we as modelers, always see data as just input and output. Data should be treated as infrastructure, and we should engage farmers in that infrastructure. Only then can we achieve the impact we are aiming for.

Sara Rendtorff Smith

Thank you very much. And with this, unfortunately, we are coming to a close on time. Maybe the speakers can be kind enough to stay a little after, as we won't have much time for Q&A. Let me thank you all for providing a diverse set of perspectives for this timely discussion, and the Ambassador of the Netherlands for framing it. Some key takeaways, perhaps: there is vast potential, and we saw the Indonesian perspective with all these very concrete examples; Dejan spoke about the potential for anticipatory action; we heard about the global, and even domestic, paradox of food insecurity, where there really is enough food but it may not be distributed properly; and, importantly, to have impact with AI we need to make sure that it is problem-driven, and driven by the local context and the farmers who need to use it. Lastly, a very important point, which is exactly core to the work we do at the OECD: to drive this adoption we also need to ensure that there is trust in what is produced.

And this requires, obviously, a number of factors, such as explainability, transparency, and responsible data collection. With that, let me thank the panelists for their rich inputs. Please do stick around a little for some questions in the margins. And thanks again to the Kingdom of the Netherlands for co-hosting this event with the OECD. Thank you.

Speaker 5

Thank you. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Harry Verweij
2 arguments, 143 words per minute, 989 words, 414 seconds
Argument 1
AI can increase productivity and sustainability while reducing environmental impact through precision farming, smart irrigation (90% water savings), and predictive disease control models
EXPLANATION
Ambassador Verweij argues that AI and digitalization offer enormous opportunities to enhance agricultural productivity while simultaneously reducing environmental impact. He emphasizes that these technologies can contribute to climate resilience and food security in an inclusive and sustainable manner.
EVIDENCE
Examples include precision farming with AI, such as water savings of up to 90% through smart irrigation, optimal crop yields with minimal input, and predictive models for disease control
MAJOR DISCUSSION POINT
AI applications and opportunities in agriculture
AGREED WITH
Dejan Jakovljevic, Sara Rendtorff Smith
DISAGREED WITH
Debjani Ghosh, Arwin Datumaya Wahyudi Sumari
Argument 2
International cooperation and knowledge sharing are essential to accelerate AI development and application in agriculture
EXPLANATION
The Ambassador emphasizes that accelerating the development and application of new technologies in agriculture requires collaborative efforts between countries. He highlights the importance of trustworthy AI and the value of having agricultural ecosystems on the agenda of important AI summits.
EVIDENCE
Netherlands’ collaboration with India, Indonesia, OECD, and FAO; bilateral and multilateral partnerships within the global partnership on AI; support for Indonesia’s OECD membership ambitions
MAJOR DISCUSSION POINT
Policy and governance frameworks
AGREED WITH
Arwin Datumaya Wahyudi Sumari, Sara Rendtorff Smith
Dejan Jakovljevic
2 arguments, 140 words per minute, 738 words, 316 seconds
Argument 1
AI enables anticipatory actions for food system shocks and can lower entry barriers to agricultural knowledge through multilingual advisory services accessible via phone calls
EXPLANATION
Jakovljevic argues that AI can help build resilience by enabling anticipatory actions to address shocks to agri-food systems. He also highlights how AI can make agricultural knowledge more inclusive by lowering entry barriers through accessible services in multiple languages.
EVIDENCE
Example of a new tool by the government of India where farmers can get agricultural advisory through phone calls (not requiring smartphones) in many languages, covering topics from shrimp cultivation to pest diseases
MAJOR DISCUSSION POINT
AI applications and opportunities in agriculture
AGREED WITH
Sara Rendtorff Smith, Arun Pratihast
Argument 2
Global paradox exists where enough food is produced for 8 billion people yet 700 million remain hungry due to distribution failures
EXPLANATION
Jakovljevic points out the critical issue that despite sufficient global food production, hundreds of millions of people still lack access to food. This highlights systemic failures in food distribution and access rather than production capacity.
EVIDENCE
700 million people without food on the table today despite sufficient global food production
MAJOR DISCUSSION POINT
Food security and supply chain resilience
Arwin Datumaya Wahyudi Sumari
4 arguments, 109 words per minute, 1277 words, 698 seconds
Argument 1
AI can predict optimal soil conditions, fertilizer content, and crop selection for Indonesia’s diverse 17,000 islands, plus optimize logistics to reduce transportation costs that can triple prices in remote areas
EXPLANATION
Professor Sumari explains how AI can address Indonesia’s unique geographical challenges by optimizing agricultural practices across its many islands with different soil conditions. He also emphasizes how AI can solve the significant cost disparities caused by complex logistics between islands.
EVIDENCE
Indonesia has 17,000 islands with different soil conditions and nutrition; rice prices can be $1 in one area but $3-6 in eastern areas due to transportation costs; president’s program to open 1 million hectares of new rice fields
MAJOR DISCUSSION POINT
AI applications and opportunities in agriculture
AGREED WITH
Harry Verweij, Sara Rendtorff Smith
DISAGREED WITH
Debjani Ghosh, Harry Verweij
Argument 2
Indonesia faces infrastructure challenges across separated islands with unequal AI talent distribution and varying time zones
EXPLANATION
Professor Sumari outlines the structural challenges Indonesia faces in implementing AI solutions, including geographical separation, infrastructure gaps, and human resource distribution issues. These challenges are compounded by the country’s position on the ring of fire and diverse time zones.
EVIDENCE
Indonesia has 17,000 islands, 36% land and 64% water, three different time zone regions, a location above the ring of fire, and a lack of evenly distributed AI-supporting infrastructure such as telecommunications
MAJOR DISCUSSION POINT
Challenges and barriers to AI adoption
Argument 3
Indonesia’s national AI roadmap includes seven pillars: regulation, ethics, investment, data, innovation, talent development, and use cases
EXPLANATION
Professor Sumari describes Indonesia’s comprehensive approach to AI development through a strategic framework that encompasses multiple dimensions. The roadmap is designed to create an inclusive and resilient ecosystem that involves all stakeholders in society.
EVIDENCE
Seven pillars: AI regulation, AI ethics, investment, AI data, AI innovation, AI talent development, and AI use cases; involvement of all stakeholders including government, business, industries, communities, media, and academia through helix collaboration model
MAJOR DISCUSSION POINT
Policy and governance frameworks
Argument 4
Need for transparent and explainable AI systems that involve all stakeholders through multi-helix collaboration between government, industry, academia, media, and communities
EXPLANATION
Professor Sumari emphasizes that AI implementation requires transparency and explainability, particularly addressing the black box problem of neural network-based systems. He advocates for a collaborative approach involving all sectors of society to ensure no one is left behind.
EVIDENCE
Problems with neural network-based systems that cannot be explained; concept of quad helix, five helix, six helix collaboration; government opening voluntary contribution from all stakeholders in developing the AI roadmap
MAJOR DISCUSSION POINT
Inclusive and responsible AI development
AGREED WITH
Harry Verweij, Sara Rendtorff Smith
Sara Rendtorff Smith
6 arguments, 94 words per minute, 2039 words, 1289 seconds
Argument 1
AI-enabled precision spraying reduces pesticide use by 30% without compromising yield, and computer vision systems cut herbicide use by half
EXPLANATION
Smith presents evidence from OECD research showing that AI applications in agriculture can significantly reduce chemical inputs while maintaining productivity. These technologies demonstrate how AI can contribute to more sustainable farming practices.
EVIDENCE
AI-enabled precision spraying reduces pesticide use by up to 30% without compromising yield; computer vision green on brown systems can cut herbicide use by up to half by targeting only weeds
MAJOR DISCUSSION POINT
AI applications and opportunities in agriculture
AGREED WITH
Harry Verweij, Arwin Datumaya Wahyudi Sumari
Argument 2
Digital divide creates exclusion from entire ecosystems, with adoption rates varying dramatically from 96% in Australia to 12% in Chile
EXPLANATION
Smith highlights the stark disparities in digital technology adoption across countries, which could deepen existing inequalities if not addressed. This digital divide represents a significant barrier to equitable access to AI benefits in agriculture.
EVIDENCE
96% of farmers using digital tools in Australia compared to just 12% in Chile; evidence from digital traceability initiatives across OECD members showing growing maturity but uneven adoption
MAJOR DISCUSSION POINT
Challenges and barriers to AI adoption
AGREED WITH
Dejan Jakovljevic, Arun Pratihast
Argument 3
AI can strengthen anticipatory capacity to respond to climate, conflict, and economic crises before they escalate
EXPLANATION
Smith argues that AI systems can enhance resilience by enabling proactive responses to various threats facing food systems. This anticipatory capability represents a significant advancement over reactive approaches to crisis management.
EVIDENCE
Forecasting, monitoring, and early detection of climatic and biological threats; AI systems strengthening capacity to respond to crises before they escalate; examples from Central Europe and Asia showing climate-adaptive varieties
MAJOR DISCUSSION POINT
Food security and supply chain resilience
AGREED WITH
Harry Verweij, Dejan Jakovljevic
Argument 4
Need for interoperable governance frameworks to support cross-border trade, traceability, and resilient food supply chains
EXPLANATION
Smith emphasizes that fragmented data governance frameworks create complexity for AI tools that support international trade and supply chain management. She advocates for greater interoperability to enable effective cross-border AI applications.
EVIDENCE
Fragmented data governance frameworks introduce complexity; AI-enabled traceability, market transparency, and smart logistics can reduce losses and improve compliance
MAJOR DISCUSSION POINT
Policy and governance frameworks
Argument 5
OECD is developing AI policy toolkit with practical guidance and maintains policy navigator covering 2,000 policies across 80 jurisdictions
EXPLANATION
Smith describes OECD’s efforts to provide practical, context-specific guidance to countries for AI policy development. The policy navigator serves as a comprehensive resource for countries to learn from existing AI policies and strategies.
EVIDENCE
AI policy toolkit providing practical, context-specific guidance; policy navigator on oecd.ai covering more than 2,000 policies across 80 jurisdictions with examples of national AI strategies and sector-specific policies
MAJOR DISCUSSION POINT
Policy and governance frameworks
AGREED WITH
Harry Verweij, Arwin Datumaya Wahyudi Sumari
Argument 6
Farmers must have control over how their data is collected, shared, and used responsibly
EXPLANATION
Smith emphasizes the importance of responsible data governance in agricultural AI applications. She argues that farmers need to maintain agency over their data to build trust and ensure ethical AI deployment.
MAJOR DISCUSSION POINT
Inclusive and responsible AI development
Debjani Ghosh
2 arguments, 156 words per minute, 887 words, 339 seconds
Argument 1
AI is often applied generically without identifying specific problems to solve, leading to failed pilots and lack of scaling
EXPLANATION
Ghosh argues that the biggest problem with AI today is the tendency to apply it broadly without clearly defining the specific problems it should address. This approach leads to ineffective implementations and prevents successful scaling of AI solutions.
EVIDENCE
Every country trying out the same farmer advisory pilots but not scaling; lack of alignment on problem statements leading to everyone doing the same pilot everywhere
MAJOR DISCUSSION POINT
Challenges and barriers to AI adoption
DISAGREED WITH
Arun Pratihast
Argument 2
Focus should be on solving food wastage in supply chains rather than production, requiring alignment on specific problem statements for effective industry collaboration
EXPLANATION
Ghosh identifies food wastage as the primary problem to address in food systems, rather than production capacity. She argues that solving distribution, logistics, and supply chain issues should be the priority for AI applications in agriculture.
EVIDENCE
World produces enough food for 8 billion people but millions remain hungry; focus on logistics, cold chains, trade, and geopolitical agreements; need for centers of excellence focused on specific problems like cold chain solutions
MAJOR DISCUSSION POINT
Food security and supply chain resilience
DISAGREED WITH
Harry Verweij, Arwin Datumaya Wahyudi Sumari
Arun Pratihast
2 arguments, 152 words per minute, 690 words, 271 seconds
Argument 1
Data scarcity, lack of farmer trust in AI recommendations, and scalability issues prevent effective implementation at grassroots level
EXPLANATION
Dr. Pratihast identifies three critical barriers to AI adoption in agriculture: insufficient and unshared data, farmers’ distrust of AI recommendations that don’t align with their expectations, and scalability challenges that go beyond technical processing speed.
EVIDENCE
Billions of euros invested in the tech industry but not reaching smallholder farmers; models work globally but fail locally; farmers don’t follow advisory because it doesn’t make sense; World Cereal Project mapping challenges due to countries not sharing data
MAJOR DISCUSSION POINT
Challenges and barriers to AI adoption
AGREED WITH
Dejan Jakovljevic, Sara Rendtorff Smith
Argument 2
Solutions must work in low-tech environments and engage farmers in data infrastructure rather than treating them as passive recipients
EXPLANATION
Dr. Pratihast emphasizes that effective AI solutions for agriculture must be designed to function in environments with limited technological infrastructure. He advocates for treating farmers as active participants in data infrastructure rather than merely as end users.
EVIDENCE
Examples from cocoa agroforestry with chatbots in local languages and computer vision from farmers’ perspectives; World Cereal Project with European Space Agency; work across Asia, Africa, and Latin America
MAJOR DISCUSSION POINT
Inclusive and responsible AI development
DISAGREED WITH
Debjani Ghosh
Speaker 5
1 argument, 9 words per minute, 4 words, 26 seconds
Argument 1
Acknowledgment and gratitude for the session and discussion
EXPLANATION
Speaker 5 provides a brief acknowledgment at the end of the session, expressing thanks for the discussion and presentations. This represents a standard closing courtesy in formal panel discussions.
MAJOR DISCUSSION POINT
Session conclusion
Agreements
Agreement Points
AI can significantly enhance agricultural productivity and sustainability
Speakers: Harry Verweij, Sara Rendtorff Smith, Arwin Datumaya Wahyudi Sumari
AI can increase productivity and sustainability while reducing environmental impact through precision farming, smart irrigation (90% water savings), and predictive disease control models
AI-enabled precision spraying reduces pesticide use by 30% without compromising yield, and computer vision systems cut herbicide use by half
AI can predict optimal soil conditions, fertilizer content, and crop selection for Indonesia’s diverse 17,000 islands, plus optimize logistics to reduce transportation costs that can triple prices in remote areas
All three speakers agree that AI offers concrete, measurable benefits for agricultural productivity while simultaneously reducing environmental impact through precision applications and optimized resource use
International cooperation and multi-stakeholder collaboration are essential for AI success in agriculture
Speakers: Harry Verweij, Arwin Datumaya Wahyudi Sumari, Sara Rendtorff Smith
International cooperation and knowledge sharing are essential to accelerate AI development and application in agriculture
Need for transparent and explainable AI systems that involve all stakeholders through multi-helix collaboration between government, industry, academia, media, and communities
OECD is developing AI policy toolkit with practical guidance and maintains policy navigator covering 2,000 policies across 80 jurisdictions
Speakers consistently emphasize that effective AI implementation requires collaborative frameworks involving multiple stakeholders and international knowledge sharing mechanisms
AI can enable anticipatory and proactive responses to agricultural challenges
Speakers: Harry Verweij, Dejan Jakovljevic, Sara Rendtorff Smith
AI can increase productivity and sustainability while reducing environmental impact through precision farming, smart irrigation (90% water savings), and predictive disease control models
AI enables anticipatory actions for food system shocks and can lower entry barriers to agricultural knowledge through multilingual advisory services accessible via phone calls
AI can strengthen anticipatory capacity to respond to climate, conflict, and economic crises before they escalate
All speakers agree that AI’s predictive capabilities enable proactive rather than reactive approaches to agricultural challenges, from disease control to crisis management
Digital divides and access barriers must be addressed for inclusive AI adoption
Speakers: Dejan Jakovljevic, Sara Rendtorff Smith, Arun Pratihast
AI enables anticipatory actions for food system shocks and can lower entry barriers to agricultural knowledge through multilingual advisory services accessible via phone calls
Digital divide creates exclusion from entire ecosystems, with adoption rates varying dramatically from 96% in Australia to 12% in Chile
Data scarcity, lack of farmer trust in AI recommendations, and scalability issues prevent effective implementation at grassroots level
Speakers acknowledge that significant barriers exist to AI adoption, particularly for smallholder farmers and developing countries, requiring targeted interventions to ensure inclusive access
Similar Viewpoints
Both speakers criticize the current approach to AI implementation in agriculture, emphasizing that generic applications without clear problem definition and farmer engagement lead to failed outcomes
Speakers: Debjani Ghosh, Arun Pratihast
AI is often applied generically without identifying specific problems to solve, leading to failed pilots and lack of scaling
Data scarcity, lack of farmer trust in AI recommendations, and scalability issues prevent effective implementation at grassroots level
Both speakers identify the core problem as distribution and access rather than production capacity, highlighting the need to focus AI solutions on supply chain and logistics challenges
Speakers: Debjani Ghosh, Dejan Jakovljevic
Focus should be on solving food wastage in supply chains rather than production, requiring alignment on specific problem statements for effective industry collaboration
Global paradox exists where enough food is produced for 8 billion people yet 700 million remain hungry due to distribution failures
Both speakers emphasize the importance of farmer agency and participation in AI systems, advocating for responsible data governance and farmer-centric design approaches
Speakers: Sara Rendtorff Smith, Arun Pratihast
Farmers must have control over how their data is collected, shared, and used responsibly
Solutions must work in low-tech environments and engage farmers in data infrastructure rather than treating them as passive recipients
Unexpected Consensus
Problem-focused rather than technology-focused approach to AI
Speakers: Debjani Ghosh, Arun Pratihast, Sara Rendtorff Smith
AI is often applied generically without identifying specific problems to solve, leading to failed pilots and lack of scaling
Solutions must work in low-tech environments and engage farmers in data infrastructure rather than treating them as passive recipients
Need for interoperable governance frameworks to support cross-border trade, traceability, and resilient food supply chains
Unexpectedly, speakers from industry, academia, and international organizations all converged on criticizing technology-first approaches, instead advocating for problem-driven AI development that prioritizes user needs and practical implementation challenges
Food distribution rather than production as the primary challenge
Speakers: Debjani Ghosh, Dejan Jakovljevic
Focus should be on solving food wastage in supply chains rather than production, requiring alignment on specific problem statements for effective industry collaboration
Global paradox exists where enough food is produced for 8 billion people yet 700 million remain hungry due to distribution failures
Both industry and UN perspectives unexpectedly aligned on identifying distribution and logistics as the core challenge rather than agricultural production capacity, suggesting a shift in how food security problems are conceptualized
Overall Assessment

Strong consensus emerged around AI’s technical potential for agriculture, the need for inclusive and collaborative approaches, and the importance of addressing distribution rather than production challenges. Speakers consistently emphasized problem-driven rather than technology-driven solutions.

High level of consensus with significant alignment across different stakeholder perspectives (government, industry, academia, international organizations). This suggests a maturing understanding of AI’s role in agriculture that prioritizes practical implementation and inclusive access over technological capabilities alone. The implications are positive for coordinated policy development and implementation strategies.

Differences
Different Viewpoints
Primary focus for AI applications in food systems
Speakers: Debjani Ghosh, Harry Verweij, Arwin Datumaya Wahyudi Sumari
Focus should be on solving food wastage in supply chains rather than production, requiring alignment on specific problem statements for effective industry collaboration
AI can increase productivity and sustainability while reducing environmental impact through precision farming, smart irrigation (90% water savings), and predictive disease control models
AI can predict optimal soil conditions, fertilizer content, and crop selection for Indonesia’s diverse 17,000 islands, plus optimize logistics to reduce transportation costs that can triple prices in remote areas
Ghosh argues for prioritizing food wastage and supply chain issues over production, while Verweij emphasizes productivity and sustainability improvements, and Sumari focuses on optimizing production through soil prediction and logistics
Approach to AI implementation and problem definition
Speakers: Debjani Ghosh, Arun Pratihast
AI is often applied generically without identifying specific problems to solve, leading to failed pilots and lack of scaling
Solutions must work in low-tech environments and engage farmers in data infrastructure rather than treating them as passive recipients
Ghosh emphasizes the need for specific problem definition before AI application, while Pratihast focuses on adapting AI solutions to work in low-tech environments and engaging farmers as active participants
Unexpected Differences
Scale and scope of AI intervention priorities
Speakers: Debjani Ghosh, Multiple other speakers
AI is often applied generically without identifying specific problems to solve, leading to failed pilots and lack of scaling
Various speakers promoting broad AI applications across multiple agricultural domains
Unexpectedly, Ghosh took a contrarian stance against the general enthusiasm for broad AI applications, arguing for more focused problem-solving approaches while other speakers promoted comprehensive AI adoption across various agricultural sectors
Overall Assessment

The main areas of disagreement centered on prioritization of AI applications (production vs. supply chain focus), implementation approaches (problem-specific vs. comprehensive), and the balance between technological advancement and practical farmer needs

Moderate disagreement level with constructive differences in emphasis rather than fundamental opposition. The disagreements reflect different perspectives on implementation strategies rather than conflicting goals, which could lead to complementary approaches if properly coordinated

Partial Agreements
Both speakers acknowledge significant digital divides and infrastructure challenges, but Smith focuses on global adoption disparities while Sumari emphasizes Indonesia’s specific geographical and infrastructure constraints
Speakers: Sara Rendtorff Smith, Arwin Datumaya Wahyudi Sumari
Digital divide creates exclusion from entire ecosystems, with adoption rates varying dramatically from 96% in Australia to 12% in Chile
Indonesia faces infrastructure challenges across separated islands with unequal AI talent distribution and varying time zones
Both agree on the importance of farmer agency and trust in data use, but Smith emphasizes responsible data governance frameworks while Pratihast focuses on building trust through farmer engagement and locally relevant solutions
Speakers: Sara Rendtorff Smith, Arun Pratihast
Farmers must have control over how their data is collected, shared, and used responsibly
Data scarcity, lack of farmer trust in AI recommendations, and scalability issues prevent effective implementation at grassroots level
Both emphasize the importance of collaboration and stakeholder involvement, but Verweij focuses on international cooperation while Sumari emphasizes domestic multi-stakeholder collaboration and AI transparency
Speakers: Harry Verweij, Arwin Datumaya Wahyudi Sumari
International cooperation and knowledge sharing are essential to accelerate AI development and application in agriculture
Need for transparent and explainable AI systems that involve all stakeholders through multi-helix collaboration between government, industry, academia, media, and communities
Takeaways
Key takeaways
AI offers significant potential to transform agriculture through precision farming, predictive analytics, and supply chain optimization, with proven results like 30% pesticide reduction and 90% water savings
The main challenge is not food production but distribution – enough food exists to feed 8 billion people yet 700 million remain hungry due to supply chain inefficiencies and wastage
A major digital divide exists in AI adoption, ranging from 96% farmer adoption in Australia to only 12% in Chile, which could deepen existing inequalities if not addressed
Successful AI implementation requires problem-driven approaches rather than generic technology deployment, with focus on specific issues like food wastage reduction rather than broad applications
Trust and transparency are critical for farmer adoption – AI systems must be explainable and developed with farmer input rather than imposed from external tech perspectives
International cooperation and knowledge sharing are essential, requiring interoperable governance frameworks and multi-stakeholder collaboration including government, industry, academia, and communities
AI solutions must be designed to work in low-tech environments and engage farmers as active participants in data infrastructure rather than passive recipients
Resolutions and action items
OECD to continue developing AI policy toolkit with practical, context-specific guidance for countries
OECD to maintain and expand policy navigator covering AI policies across jurisdictions, encouraging more countries to contribute
Netherlands committed to forging concrete partnerships and sharing knowledge/technology for inclusive AI solutions
Indonesia to implement national AI roadmap with seven pillars: regulation, ethics, investment, data, innovation, talent development, and use cases
FAO to continue building anticipatory tools and situation rooms for food system shock response
Focus on establishing centers of excellence around specific problem statements (e.g., cold chain solutions, climate-resilient crops) rather than generic AI centers
Unresolved issues
How to effectively bridge the digital divide and ensure equitable access to AI technologies across different economic contexts
Lack of adequate data sharing mechanisms and infrastructure, particularly in developing countries where farmers don’t share agricultural data
No global accurate crop mapping system exists due to data sharing restrictions by major countries like India and China
Fragmented data governance frameworks that complicate cross-border AI applications in food supply chains
High costs, limited digital skills, and infrastructure gaps that continue to slow AI uptake among smallholder farmers
How to scale successful AI pilot projects beyond initial implementations
Balancing horizontal AI governance with sector-specific agricultural regulations across different jurisdictions
Suggested compromises
Develop AI solutions that work in low-connectivity environments while gradually building digital infrastructure
Create multilingual, accessible AI advisory services (like phone-based systems) that don’t require smartphones or high-tech devices
Focus on specific, measurable problem-solving (like food wastage reduction) rather than attempting to solve all agricultural challenges simultaneously
Establish public-private partnerships with clear routes to market and commercialization to ensure industry engagement while serving public good
Build farmer trust through transparent, explainable AI systems that incorporate local knowledge and farmer perspectives in development
Develop context-specific solutions for different regions while maintaining interoperability for global food supply chains
Thought Provoking Comments
The key issue is that it used to be possible to exist outside of the digital ecosystem. We all know we could maybe go to the bank, but nowadays it’s not. So if a farmer or communities are outside of the digital ecosystem, they suddenly are outside of any ecosystem almost. And now with the AI, it makes it even worse.
This comment reframes the digital divide as an existential issue rather than just a technological gap. It highlights how AI isn’t just creating new opportunities but potentially making exclusion more severe and comprehensive than ever before.
This observation shifted the discussion from focusing purely on AI’s benefits to acknowledging its potential to deepen inequalities. It set up the framework for subsequent speakers to address inclusion as a critical challenge, influencing how other panelists framed their responses about accessibility and democratization.
Speaker: Dejan Jakovljevic
The biggest problem with AI today is that we throw AI at every problem that exists. And we expect that something will happen out of it… it’s very important to understand what problems do you want to solve with AI.
This comment challenges the prevailing AI hype and calls for a more strategic, problem-driven approach. It cuts through the technological enthusiasm to demand clarity about actual use cases and outcomes.
This critique fundamentally redirected the conversation from discussing AI capabilities to focusing on problem identification and solution design. It influenced the subsequent discussion by emphasizing the need for specific problem statements and measurable outcomes, moving away from generic AI applications.
Speaker: Debjani Ghosh
While the world is producing enough food to feed 8 billion people, but there are still millions and millions who are hungry. So there’s a paradox… the biggest problem to solve for in the food supply chain according to me is the wastage.
This insight reframes the global food security challenge from production to distribution and waste, identifying a specific, actionable target for AI intervention rather than broad agricultural improvement.
This comment shifted the focus from increasing food production (the traditional agricultural AI narrative) to addressing systemic inefficiencies. It provided a concrete problem statement that other speakers could build upon and demonstrated the kind of specific problem identification she had just advocated for.
Speaker: Debjani Ghosh
Often, the farmers don’t own… the model and the farmer’s expectation is different and then there’s often not much trust how to apply this in the local level. That’s why most of this advisory is failed. Farmer doesn’t follow the advisory because it doesn’t make sense.
This comment exposes a critical gap between AI development and real-world implementation, highlighting the disconnect between technological capabilities and user needs. It explains why many AI initiatives fail despite technical success.
This observation brought the discussion full circle to the practical realities of AI deployment. It reinforced earlier points about inclusion and problem-driven approaches while adding the crucial dimension of user trust and local relevance. It grounded the entire discussion in the reality of implementation challenges.
Speaker: Arun Pratihast
We don’t say smart farming. Smart is not really intelligent. Intelligent is different. There is knowledge that has to be grown in the system. So intelligent farming is just like a human. They grow their knowledge within their brain.
This distinction between ‘smart’ and ‘intelligent’ systems introduces a more sophisticated understanding of AI capabilities, suggesting systems that learn and adapt rather than just automate existing processes.
This conceptual distinction elevated the technical discussion and introduced the idea of adaptive, learning systems. It influenced how other speakers thought about AI not just as a tool but as an evolving capability that could grow with farmer needs and local conditions.
Speaker: Arwin Datumaya Wahyudi Sumari
Overall Assessment

These key comments fundamentally shaped the discussion by moving it from a technology-centric to a human-centric perspective. Jakovljevic’s observation about digital exclusion set the stage for a more critical examination of AI’s potential negative impacts. Ghosh’s critique of unfocused AI application and her identification of food waste as a specific target problem provided a methodological framework that influenced how other speakers approached the topic. Pratihast’s insights about farmer trust and local relevance brought practical implementation challenges to the forefront, while Sumari’s distinction between smart and intelligent systems added conceptual depth. Together, these comments created a progression from identifying systemic challenges to proposing focused solutions to addressing implementation realities, resulting in a more nuanced and actionable discussion about AI in agriculture.

Follow-up Questions
How can we bridge the digital divide to ensure AI benefits reach smallholder farmers in remote areas?
This addresses the critical gap where only 12% of farmers in some countries like Chile use digital tools compared to 96% in Australia, highlighting inequality that could deepen without intervention
Speaker: Sara Rendtorff Smith
What governance models and interoperability frameworks are needed for cross-border AI applications in food supply chains?
Fragmented data governance frameworks create complexity for AI tools supporting trade and traceability across borders, requiring coordinated international approaches
Speaker: Sara Rendtorff Smith
How can we build anticipatory systems that can predict and respond to shocks in agri-food systems before they escalate?
This is critical for building resilience against natural disasters, conflicts, and other factors that impact food security, moving from reactive to proactive responses
Speaker: Dejan Jakovljevic
What specific financing mechanisms and investment models are needed to support AI infrastructure development in low and middle-income countries?
Financing was identified as crucial for AI ecosystem development, but specific models for agricultural AI in developing countries need further exploration
Speaker: Arwin Datumaya Wahyudi Sumari
How can we create accurate global crop mapping when countries like India and China don’t share their agricultural data?
Data sharing barriers prevent the creation of comprehensive global food security maps, limiting the effectiveness of AI models for crop monitoring and prediction
Speaker: Arun Pratihast
What are the most effective models for Centers of Excellence focused on specific agricultural problems rather than general AI applications?
Current AI Centers of Excellence are too broad; problem-specific centers (e.g., for cold chain solutions or climate-resilient crops) may be more effective
Speaker: Debjani Ghosh
How can AI advisory services be designed to build farmer trust and ensure local relevance?
Many AI advisory services fail because farmers don’t trust or follow the advice, indicating a need for better understanding of farmer perspectives and local contexts
Speaker: Arun Pratihast
What are the best practices for making high-tech AI solutions work effectively in low-tech farming environments?
There’s a disconnect between advanced AI developed in server rooms and practical applications that work for smallholder farmers with limited technology access
Speaker: Arun Pratihast
How can we optimize logistics and transportation routes using AI to reduce food price disparities between regions?
In Indonesia, rice prices can be 3-6 times higher in eastern regions due to transportation costs, suggesting AI optimization could address food affordability
Speaker: Arwin Datumaya Wahyudi Sumari
What data infrastructure and sharing mechanisms need to be established to support farmer participation in AI ecosystems?
Data should be treated as infrastructure rather than just input/output, requiring new frameworks for farmer engagement and data sharing
Speaker: Arun Pratihast

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

The Global Power Shift: India’s Rise in AI & Semiconductors


Session at a glance: summary, keypoints, and speakers overview

Summary

This panel discussion focused on India’s strategic positioning in artificial intelligence and semiconductor technologies, examining the country’s potential to achieve leadership in these critical sectors. The session was moderated by Jaya Jagadish, a semiconductor industry veteran, and featured Dr. Thomas Zacharia from AMD, Professor Vivek Kumar Singh from NITI Aayog, and Rahul Garg, CEO of Moglix.


The panelists emphasized that true AI leadership requires alignment across four key pillars: silicon, software, systems, and policy. India was positioned as well-suited for this technological shift due to its engineering talent, growing silicon design capabilities, and expanding manufacturing ecosystem. However, the discussion highlighted that momentum alone is insufficient—success requires strategic sequencing, capital discipline, and institutional alignment.


Dr. Zacharia stressed the importance of moving “from compute to capability,” suggesting India focus on sovereignty in data and applications while building supply chain resilience in specialized areas like co-packaged optics rather than competing directly in leading-edge chip manufacturing.


Rahul Garg discussed the private sector’s readiness, noting increased capital flow and manufacturing appetite post-COVID, though he acknowledged the need for public-private partnerships to compete at global scale. The panel addressed talent development challenges, with Singh highlighting how AI tools are transforming education from memory-based to creative, solution-oriented learning.


The discussion concluded that India’s opportunity in AI and semiconductors is real but time-bound, requiring decisive execution and bold strategic decisions to avoid future regrets. Sustainability was identified as a core design principle rather than a trade-off, essential for responsible technological advancement.


Keypoints

Major Discussion Points:

Building India’s AI and Semiconductor Ecosystem: The panel discussed India’s positioning in the global AI and semiconductor landscape, emphasizing the need for alignment across silicon, software, systems, and policy to achieve true AI leadership. India’s strengths include engineering talent, silicon design capabilities, and a growing manufacturing ecosystem.


Strategic Focus Areas for Near-term Value Creation: Rather than competing directly in advanced fabs (2nm technology), India should focus on contributing to the AI infrastructure supply chain through areas like co-packaged optics, interconnect technologies, and other critical components where supply chains are not yet established globally.


Public-Private Partnership Models and Capital Requirements: The discussion highlighted the need for substantial capital investment (hundreds of billions of dollars) and strategic public-private partnerships to compete globally. Examples from the US Genesis Project were cited as models for de-risking innovation through government investment in shared infrastructure and research.


Talent Development and Educational Transformation: The panel addressed how to prepare the next generation for an AI-driven future, emphasizing the shift from memory-based learning to creative problem-solving, the abundance of learning resources available today, and the need for continuous reskilling even for experienced professionals.


Balancing National Security with Global Collaboration: The conversation explored India’s challenge of maintaining its traditional culture of open knowledge sharing while developing strategic autonomy in critical technologies, requiring clear rules about what to indigenize versus what to keep open for international collaboration.


Overall Purpose:

The discussion aimed to examine India’s strategic opportunities and challenges in AI and semiconductors, focusing on how to build credible sovereign capabilities while leveraging global partnerships. The panel sought to provide actionable insights on policy, investment, talent development, and strategic positioning for India’s technology leadership ambitions.


Overall Tone:

The discussion maintained an optimistic and forward-looking tone throughout, with speakers expressing confidence in India’s potential while acknowledging realistic challenges. The tone was collaborative and constructive, with panelists building on each other’s insights. There was a sense of urgency about seizing the current moment, balanced with pragmatic advice about focusing on achievable goals. The conversation remained professional and encouraging, particularly when addressing students in the audience about the opportunities available to them in this technological transformation.


Speakers

Speakers from the provided list:


Moderator: Role not specified in detail, appears to be the session moderator who introduced the panelists and managed the discussion format


Jaya Jagadish: Session moderator with three decades of experience in semiconductor design engineering; expertise in the evolution of compute from single-threaded processors to massively parallel AI systems


Thomas Zacharia: Senior Vice President for Strategic Technical Partnerships and Public Policy at AMD, Inc.; previously led Oak Ridge National Laboratory, where he oversaw deployment of multiple world-leading supercomputing systems including Frontier (the first exascale supercomputer); expertise in scientific discovery, national compute infrastructure, public policy, and global partnerships


Vivek Kumar Singh (Professor): Senior advisor on science and technology at NITI Aayog; plays a central role in shaping India’s science, technology and innovation architecture; background in computer science, data analytics and academic leadership; expertise in R&D governance, university-industry collaboration, and state-level innovation ecosystems


Rahul Garg: Founder and CEO of Moglix; built one of India’s leading industrial supply chain platforms, expanded into manufacturing and industrial finance; expertise in scale, capital and execution in India’s industrial ecosystem


Additional speakers:


None identified beyond the provided speaker names list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive panel discussion examined India’s strategic positioning in artificial intelligence and semiconductor technologies, exploring the country’s potential to achieve global leadership in these transformational sectors. The session brought together diverse expertise from policy, industry, and technology leadership to address critical questions about India’s technological sovereignty and competitive positioning.


Panel Overview and Strategic Framework

The discussion was framed around India’s opportunity in AI and semiconductors, with the moderator establishing that AI represents perhaps the most transformational technology of our lifetimes. The session featured Jaya Jagadish, a veteran semiconductor executive with three decades of design engineering experience; Dr. Thomas Zacharia from AMD; Professor Vivek Kumar Singh from NITI Aayog; and Rahul Garg, founder and CEO of Moglix.


Jaya Jagadish emphasised that true AI leadership requires systematic alignment across four fundamental pillars: silicon, software, systems, and policy. She noted that this transformational power has created a global contest for AI leadership, with every nation seeking self-reliance and competitive advantage.


Distinguishing Compute from Capability

Dr. Thomas Zacharia provided crucial strategic clarity by distinguishing between “compute” and “capability,” arguing that India’s opportunity lies in moving beyond mere computational infrastructure to building strategic capabilities that create lasting value. He further refined the discussion by separating sovereignty from resilience, two concepts that are often conflated but require different approaches.


Sovereignty involves ensuring that data and applications remain resident within the country and relevant to national contexts, while resilience focuses on supply chain independence and strategic positioning in global technology networks. Zacharia noted that AMD employs 10,000 people in India and expressed interest in identifying the top 50 startups among India’s 50,000 for potential partnerships.


India’s Policy Framework and Strategic Initiatives

Professor Vivek Kumar Singh outlined India’s systematic approach through initiatives like the India AI Mission, which allocates more than ₹10,000 crore over five years across seven comprehensive pillars addressing all aspects of AI development. He emphasised that India has already demonstrated its capability to create digital public infrastructure at population scale, earning global recognition as an IT superpower.


Singh identified a critical gap: whilst India excels at knowledge creation through its universities and R&D institutions, the country needs to improve its ability to convert this knowledge into marketable products with socioeconomic impact. He highlighted government initiatives including tax holidays for data centres and the AI Coach platform, as well as NASSCOM’s FutureSkills Prime programme that provides aggregated access to online courses.


Manufacturing Transformation and Capital Mobilisation

Rahul Garg provided insights into India’s manufacturing transformation, particularly in the post-COVID environment. He observed that the pandemic fundamentally shifted perspectives on supply chain resilience when India lacked sufficient capacity for critical items like masks and oxygen concentrators during the crisis, highlighting the importance of domestic manufacturing capability.


Garg noted significant capital mobilisation, with private sector commitments of $100 billion within the week for data centres and localisation efforts. However, he raised critical questions about execution capabilities, acknowledging that whilst capital is flowing, the ability to execute at the required speed and scale remains uncertain.


The discussion revealed that Indian companies have naturally evolved toward vertical integration, building complete technology stacks from design through manufacturing to end products, unlike Western markets that typically develop through horizontal specialisation.


Talent Development and Educational Evolution

The panellists addressed fundamental changes in how knowledge is acquired and applied in the AI era. Singh emphasised that students today have unprecedented access to learning resources, making this “the best time to be a student.” He highlighted India’s advantages including the third-largest startup ecosystem globally and extensive support systems for skill development.


However, the panellists noted that massive reskilling efforts are needed, not just for new graduates but for experienced professionals. Singh mentioned that even his batchmates, 25 years into Silicon Valley careers, now feel threatened by technological change, illustrating the widespread nature of this challenge.


Balancing Security and Openness

Vivek Singh acknowledged that balancing national security concerns with global collaboration represents a fundamental shift for India, which has traditionally embraced knowledge as a common good. He introduced the concept of “strategic autonomy”—maintaining independence where critical national interests are at stake whilst remaining open to collaboration in non-sensitive areas.


This balance is particularly challenging in AI and semiconductors, where technologies often have dual-use applications and supply chains are globally integrated. The discussion suggested that India needs sophisticated frameworks for assessing which components require domestic control versus international collaboration.


Sustainability Considerations

An important dimension of the discussion addressed sustainability as a fundamental design principle. Thomas Zacharia emphasised that leading companies have obligations to minimise environmental impact through efficient design, though he acknowledged the complexity of sustainability challenges where solutions to 21st-century problems often address issues created by 20th-century approaches.


An audience member who teaches AI and sustainability at IIM reinforced this theme, advocating for sustainability to be integrated into every design decision rather than treated as a trade-off.


Strategic Recommendations and Opportunities

The panel concluded with specific recommendations for India’s path forward. Thomas Zacharia suggested that India should focus on contributing to supply chains for leading-edge AI infrastructure deployment rather than attempting to compete directly in cutting-edge chip manufacturing. He specifically mentioned opportunities in co-packaged optics and interconnect technologies, where global supply chains are not yet established.


Rahul Garg emphasised the need to scale ambition beyond the domestic market, arguing that whilst India has become excellent at fast-following global trends (citing rapid ChatGPT adoption as an example), true leadership requires competing at global scale. This necessitates unprecedented coordination between public and private capital.


Vivek Singh stressed the importance of converting India’s strong research capabilities into practical products with socioeconomic impact, requiring cultural transformation within academic and research institutions to embrace commercialisation alongside traditional knowledge creation.


Future Challenges and Considerations

The discussion highlighted that India’s opportunity in AI and semiconductors is real but time-bound. Success requires strategic sequencing, capital discipline, institutional alignment, and infrastructure depth working in coordination. The panel’s insights suggest that India possesses fundamental ingredients for leadership—engineering talent, growing design capabilities, expanding manufacturing ecosystem, and policy commitment—but converting these advantages into global competitive position requires unprecedented coordination and sustained investment at massive scale.


The session concluded with Jaya Jagadish noting the engaging discussion and presenting mementos to the panellists, reinforcing that whilst challenges are significant, India’s opportunity to achieve meaningful leadership in AI and semiconductors remains achievable through systematic execution of well-coordinated strategies.


Session transcript: Complete transcript of the session
Moderator

Thank you. Thank you. …across CPUs, GPUs, SoCs, and AI engines that power cutting-edge compute systems worldwide. She brings a rare combination of deep silicon expertise, global product leadership, and national ecosystem engagement. She is deeply committed to talent development in the ecosystem as well. Please join me in welcoming Jaya, who will be moderating our session. Our first panelist is Dr. Thomas Zacharia, Senior Vice President for Strategic Technical Partnerships and Public Policy at AMD, Inc. Dr. Zacharia previously led Oak Ridge National Laboratory, where he oversaw the deployment of multiple world-leading supercomputing systems, including Frontier, the first exascale supercomputer. His career spans scientific discovery, national compute infrastructure, public policy, and global partnerships. Please welcome Dr. Thomas Zacharia.

Joining us is Professor Vivek Kumar Singh, Senior Advisor on science and technology at NITI Aayog. Professor Singh plays a central role in shaping India’s science, technology and innovation architecture, from R&D governance to university-industry collaboration and state-level innovation ecosystems. With a background in computer science and data analytics, and experience in academic leadership at leading institutions, he bridges research depth with national policy execution. Please welcome Professor Vivek Kumar Singh. My apologies. And finally, we have Mr. Rahul Garg, founder and CEO of Moglix. Rahul has built one of India’s leading industrial supply chain platforms and has expanded into manufacturing and industrial finance, navigating the realities of scale, capital and execution in India’s industrial ecosystem. Please welcome Mr.

Rahul Garg. We will now be beginning the discussion. Thank you so much for joining us.

Jaya Jagadish

All right. Good afternoon, everyone. And I would like to extend a very warm welcome to each one of you for this session. And thank you for taking time to be here with us. So we are meeting at a moment when AI is no longer a niche technology. And these conversations have become foundational. And there is a shift in shaping entire economies. And that’s the global impact that this technology can have. And having spent about three decades in the semiconductor industry doing design engineering, I have seen compute evolve from a single-threaded processor to massively parallel AI systems. And that’s stupendous growth that we have seen, and a transformation of technology. And honestly, AI is a technology that is probably the most transformational that we will be able to see in our lifetimes.

And true AI leadership is something there is a global contest for. Every country wants to achieve self-reliance and, you know, leadership in AI. And that’s the importance of this technology that we are talking about. But true AI leadership itself happens when silicon, software, systems, and policy, all of these aspects, come together. No one aspect can really get us there. And that’s what truly excites me for today’s session. We have experts who have the knowledge in each of these, many of these aspects, and we will be, you know, asking questions and they’ll be sharing their perspectives, which I’m sure all of us will enjoy listening to. So coming to India, from what I see, India is truly well poised for this technology shift.

And we bring together engineering talent, silicon design strength, and a growing ecosystem of system and infrastructure partners, including manufacturing. But what truly defines and makes this moment different is the scale and the speed at which we are moving. So we do see a strong commitment, but what is also important is collaboration. No one country or one organization can truly achieve the results or be successful at this, but we all need to collaborate. We all need to become very aware because this is not a simple thing. It has the potential to touch human lives and humanity. So through this panel, today I want to look at three perspectives.

First, how do we continue to build the intellectual foundation? Second, how do we build manufacturing depth and supply chain resilience through a sustained investment model? And third, how do we build a credible, sovereign AI capability? I will get to Vivek. I’ll

Vivek Kumar Singh

Thank you, Jaya. This is a very important thing, a very important question. I think India has already taken a call to go in a big way into the whole deep tech domain. And a lot of changes that we see happening in terms of AI compute, then AI data centers and so on. Recently, we all heard about the tax holidays for data centers that are going to be created in India. Also, platforms like AI Coach, because that’s very, very important. If you want to create AI applications for India, you need AI data, which is centered in India, which is for the context of India. So what I believe, when you talk about credibility and how credible we are in this deep tech domain, comprising AI, semiconductors, biomanufacturing and even other areas, what is very, very important is that credibility doesn’t come only from announcements. So what we really need to do is to go at scale, and fortunately a lot of positive changes are already happening. We have the India AI Mission, we all know about that, 10,000 plus crores for five years, and it’s a very systematic effort where all seven pillars address, you know, all kinds of needs that we have for AI. And similarly, if you look at semiconductors, we all know about what is happening in fabs. Also, we know that India has a very strong ecosystem of, you know, VLSI design, semiconductor design and so on. Unfortunately, most of that IP is not with India, but there is a time coming when India would also be owning a lot of IP. So credibility, I think, for India would be very, very important. And this is coming not only as part of announcements but, you know, as part of commitment for scaled deployments, for scaled growth, accelerated growth.

And what we see now is something, you know, which nobody could have thought of 10 years back or 5 years back. So I believe we are on track and very much into, you know, the whole realm of AI and semiconductors. And a lot of push is there, and the whole ecosystem is evolving, and as we move further, we all are going to work towards, you know, creating a very, very credible ecosystem for the overall growth of the sector.

Jaya Jagadish

Now, great insights. Thank you, Vivek. Now, moving to Rahul. There’s clearly a growing momentum to strengthen manufacturing in India. Given your journey, you have expanded Moglix from digital marketplaces into manufacturing and industrial financing. Do you believe the Indian private sector is truly ready, financially, and has the mindset to take on the long-term investments that are needed?

Rahul Garg

So firstly, thank you for having me. I think the question is very pertinent because, again, pre-COVID there was a very different environment, both from a geopolitics perspective and a supply chain perspective. And I think supply chain as a word started to become popular in COVID times. And I take some pride in the fact that, at least as Moglix, we have been part of seeing the supply chain journey in the country, as well as continuing to now see the manufacturing journey. On the specific point that you raised, from a will perspective, a capital perspective and a demand perspective, if you look at these three aspects of it: I think the demand in the country clearly is growing rapidly.

And one of the changes that has happened, obviously, is India becoming larger in terms of GDP size and consumer demand. People are expecting faster and faster products, people want variety of products, so on and so forth. So I think demand and discretionary spend are increasing. The one significant change that has happened post-COVID, we see, is that while the demand is growing, there is also an increasing appetite for people to start building more and more manufacturing, and to also start to look at many of those being localized rather than just depending on global supply chains. Because obviously we have gone through moments where we may not have had enough mask capacity in the country, we may not have had enough oxygen concentrator capacity in the country, and some of those shocks got both the private and the public sector realizing that there is a bare minimum of manufacturing that needs to happen in the country for it to be truly self-reliant at the population scale that we are at. So I think that will has gotten generated. The capital is starting to flow in. On the question of whether the capital is large enough and long-term enough, I think we are seeing an increasing trend that there is clearly government will, whether it is in terms of the fund that we have seen of 1 lakh crore, now 1.2 billion dollars for specific AI deep tech, things like that.

But also private capital: within this week, the numbers that I’m hearing are more than 100 billion dollars plus in commitments from private capital companies saying that they are going to invest into data centers, localizing, so on, so forth. So I think the capital is happening today. Is it happening broad-based? The answer may be no. But has it started to happen? And has it started to go from maybe a few hundred crores to a few billions of dollars? That is happening. Can we execute at the same speed and scale? Only time will tell.

Jaya Jagadish

Sure. No, there’s definitely an increased momentum. But along with manufacturing, I mean, I’m also biased more towards the design front based on my experience. I definitely do want to see a lot more local startups. And Vivek just mentioned, we don’t have the IPs. I mean, having our own IPs is one of the key steps we need to take. So moving on, a question for Thomas. If advanced fabs remain limited globally, where should India focus in the near future? Where can we realistically create value in the next three to seven years?

Thomas Zacharia

Thank you, Jaya, and I just want to echo the sentiments that my colleagues here on the panel have mentioned, so I’ll build on that. So I think the opportunity for India is to move from compute to capability, right? I mean, that’s really where we need to be. And I’ll pick a couple of areas. So sovereignty and resilience gets intermingled. So I’m going to sort of keep those two things separate. Sovereignty is one where you are really trying to make sure that your data and your application or use cases are resident in country and it’s relevant to country. And that’s an area that is uniquely India to lead because no one else is going to do that.

It has to be done, and you already mentioned the opportunity. We were with the CEO of Medi today talking about 50,000 startups. I don’t know how to get my head wrapped around 50,000 startups, so I asked him, can you tell me who the top 50 are, so that perhaps a company like AMD can partner with them and try to help them mature. So that is on the sovereignty side. On the resiliency side, the reality is that, clearly, any sovereign country expects to have resiliency and create its own IP, and India should have the same aspiration, given the scale of ambition and scale of population. And here I think, while we certainly should have an ambition to go up the development cycle to the leading edge of chip design.

I think there is an opportunity to also look at being part of the supply chain for leading-edge deployment. So you don’t necessarily have to be at the two-nanometer scale for GPUs or CPUs. There are critical technologies in the deployment at scale of AI infrastructure where India can play a role. For instance, we know that the entire ecosystem is going to be driven to optics as interconnect technology, co-packaged optics. And there is a clear supply chain that is not available globally. That is something that is being considered today. And the leading candidates today, I would say, are the U.S., Japan, Malaysia. But those are the kind of niche areas where India can step in.

And that is the journey where you are really contributing to the first-of-a-kind or the nth-of-a-kind leading-edge technology. So that’s the way I would approach it.

Jaya Jagadish

Great insights. Thank you, Thomas. Now, continuing: today AI leadership is ultimately limited not by ambition, but by access to secure, scalable computing resources. So, Thomas, continuing with you, you have led exascale-class systems and now you’re working on sovereign AI partnerships globally. In the U.S., programs such as Genesys and broader national compute initiatives have attempted to systematically align infrastructure, research, and industrial capacity. So what lessons from these models are actually applicable to India?

Thomas Zacharia

So I think this is a great area for public-private partnership, in my view. The public part of it is a uniquely government function. Government brings both policy as well as the demand signal, particularly in the area of science and innovation, critical infrastructure, whether it is the energy sector or national security, as well as uniquely government missions. And the opportunity here is, I mean, India has a supercomputing mission. I think there is an opportunity, and I think India is already thinking about deploying this national supercomputing mission and national scientific infrastructure, which is on a trajectory to be at global leading scale. Today, countries like the U.S. and China have done this. China is a particularly interesting example. China developed the intellectual ecosystem around HPC, which then translated to AI, over a period of 20 to 25 years.

It was intentional. And if you look at where the AI penetration, AI adoption, AI infrastructure resides globally, you can directly trace that to investments in sort of supercomputing mission that built the underlying infrastructure. So I think that is a great opportunity. Already plans are there. But it’s not a static view. So one of the things that I would encourage as we plan for the future is not plan based on where things are, but plan on where things will be by the time we deploy this kind of infrastructure.

Jaya Jagadish

That’s great. A future-looking planning is what we… Thank you, Thomas. Vivek, moving to you: from a policy standpoint, how do we balance national security concerns with openness and global collaboration?

Vivek Kumar Singh

Well, it’s a very tricky question, I would say. You know, for a country like India, we all know, I mean, the kind of culture that we have in India is we have always believed in the fact that knowledge is a common good. And that is how, you know, our whole innovation ecosystem has been operating. Our universities have been creating a lot of knowledge, and researchers, R&D persons, they have been trained with the fact that whatever you create should be for a, you know, for a common good. There were never efforts to productize them, to convert them into socioeconomic goods, to, you know, protect them with exclusive rights and so on.

So that was the common thing that we have been doing earlier. But what is happening now is that the world we are in is a completely different world.

And that is where our academia, our R&D institutions, are also being asked to, you know, change their complete course. So it’s not only that researchers, faculty members in universities, should end up with a research publication, that’s all; it’s very, very important that you productize also. Now what is happening, see, if you talk about the culture of innovation and how you see it in terms of the global world that we are in, particularly for sectors like AI and semiconductors, I think what we need to do is go for strategic decision-making, in the sense of: what is it that we want to do? So, for example, there are certain sectors where, you know, the setup that we are using has certain components which may be used in some critical deployment.

So in those cases, what we need is clarity of rules. What is it that we would like to indigenize? What is it that we would like to have built on our own? And what is it that we can keep open for the rest of the world, for collaboration and so on? So I think two words would be important: strategic autonomy. Autonomy in the sense of autonomy where it is needed, but at all other places, where we can collaborate with the world, where we can contribute in terms of collective knowledge creation, India can always play a role, and India is playing a role.

Jaya Jagadish

Great. Rahul, question to you. As AI infrastructure scales, demand patterns for chips and hardware will shift. How should Indian manufacturers position themselves early? And secondly, where are the first mover advantages?

Rahul Garg

I think we are kind of late to the party in some sense in semiconductors and chips; some say it’s two decades, three decades late to the party, right? And then there are a couple of countries which have a disproportionate advantage, not just in terms of what is more popularly known as the 2nm node and the two, three companies dominating that, but also in terms of the entire ecosystem that is required around all of those factories and chipsets and systems, so on, so forth. So I think for us, the India journey will be its own unique path. That’s one thing that, at least over the last 20 years, I’ve seen: if we were to wait for landline to reach 10% of the population, we would not have had the mobile revolution; if we had to wait for credit cards, then, I mean, it would not have happened, right? So I think in this new era that we are living in, the manufacturers will have to find a few spots which may not be as obvious, given the conventional way countries and ecosystems are built. And I think one of the good advantages of events like this is you start to have a very large population of smart and talented people throwing darts at hundreds of problems simultaneously, and maybe five years later we will say, okay, we knew that these are the three things which will work, or all of that kind of thing, right? But I don’t think there is a unique path today. I mean, it definitely does seem that we need to start building capabilities, and capabilities need to be built. Design capabilities we have; we don’t have the productization capability, so that is one capability which needs to be built. The manufacturing capability: while we are starting with some of the fabs which are in the mid zone, there is the entire ecosystem of chemical suppliers, clean room suppliers, utility suppliers.

How do you make sure that there is enough packaging, verification, and much of that ecosystem getting developed? So all of those are going to happen simultaneously. So I think the opportunity remains in all of the areas. And I think, therefore, at least my encouragement to even my management, and the way we are looking at Moglix and so on and so forth, is: you try 10 things. Do not be scared to try one thing or two things and then fail. And conventionally also, while in the Western world there have been horizontal capabilities that companies have built and scaled, in India, historically over the last 15 years, every startup, every large company has built vertical stacks of companies.

So they are doing it integrated. They may go from chip design to manufacturing, to systems, to product. I mean, that’s just how the model has evolved so far. So I think that’s where vertical-stack manufacturing and all parts of the ecosystem will have to give it a shot, and maybe over time it will become horizontal.

Jaya Jagadish

That’s great. Thank you. So, you know, I do see quite a few students in the audience. So one thing that we are now facing with this technology is: what is knowledge? How do we acquire knowledge? I mean, traditionally, we go to schools, universities for that. But today it’s at your fingertips. And with the advancement of AI, it’s just going to get better. You want to learn about something, you always have it at your fingertips. So what really do we need? How do we prepare the next generations to solve the problems of the future is the question. I mean, we cannot just stick around with our traditional ways of learning. And we have to scale and adapt to the newer ways.

So question for you, Vivek, how can we prepare ourselves and equip ourselves for this next phase that’s coming?

Vivek Kumar Singh

Well, I would say efforts have already started, so that’s the best thing. And as you rightly said, this is the best time to be a student. You know, if you take yourselves 20 years back, you would always be constrained with resources. The best that you have is a lot of books; you will have to go to a library; there are books that you can’t afford, and so on. And books are also not on time, so you have later editions and so on. What is happening now is, with lots and lots of information, information which can be customized for you specifically, then you have a lot of recommender systems, you have retrieval-augmented generation systems, you know, all of this with generative AI. So the best part is that you have plenty of information. You want to learn anything, you want to acquire a skill, you always have resources, and most of the time you really don’t have to pay for these resources, because there is a lot of material that you can access for free.

The programs, particularly for India: we have something called NASSCOM’s FutureSkills Prime, which is an aggregator for a lot of online courses. Similarly, there are platforms across the world that you can use. Now, what is happening is that what we have been doing in our universities and generic colleges and, you know, other institutions earlier was largely a kind of memory-based learning, where we were acquiring knowledge, we were memorizing things. But now, over a period of time, it’s a more synthetic perspective which is being, you know, percolated across institutions. So, students now are going more into that creative aspect where they’re able to create solutions for certain problems.

And then there is the whole ecosystem around startups. We all know India is the third largest startup ecosystem in the world, with a lot of support systems: most of our universities have incubators, along with other support structures. So this is the best time, and that is why I said it is the best time to be a student. If you want to do anything, if you have a creative idea, you will always find support. There are many skilling programs from the Government of India and from other organizations, there is philanthropic support, and many organizations with their own products offer training to students free of cost. At the same time, we keep hearing that AI is going to cause disruption in jobs, and a lot of jobs in areas like software testing and customer support are indeed gone. But these technologies are also creating new jobs, and you need to prepare yourself for them. Fortunately, we have enough material, enough resources, and enough support systems to do that. The whole revolution we see in front of us will require massive skilling, and a fair amount of reskilling as well.

Many of my batchmates, 25 years into their careers, now feel they have to reskill themselves in many, many new things. Life was very good somewhere in Silicon Valley for 25 years, a lot of money, but now they feel threatened. And that is the beauty of startups and all these new ideas. So I would simply end by saying that this is the best time to be a student.

Jaya Jagadish

Absolutely, totally agree. You know, I have to share this. I was conducting a panel discussion within AMD with senior execs, and one of the fun questions was: if there is a machine or a piece of equipment you could invent, what would it be? The unanimous answer was a machine that can make me 20 years younger. So you guys are extremely lucky; make use of this opportunity to the maximum. All right. As we come to the final leg of this discussion: India’s opportunity in AI and semiconductors is very real, but it is also time-bound. Momentum alone will not be enough. Sequencing, capital discipline, institutional alignment, and infrastructure depth truly matter, and all of these areas have to work in complete alignment with each other. So let me close the session by asking each of you one more question.

Moderator

The first one is for Rahul. In the global race where others are moving fast, what is the one move India must execute flawlessly to stay competitive?

Rahul Garg

I think, like many other things, it’s not one move; maybe we do everything as a Bollywood dance move, where there are ten moves to everything. But one thing that has happened, at least from my vantage point in the startup ecosystem, is that over the last 15 years we have become extremely good at being fast followers. Maybe 15 years back, if there was a product or a service in the US or in Europe, it would take a three-to-five-year lag to come to India; now that lag is maybe a month, or 15 days. Within ChatGPT’s first month, the largest number of users was probably coming from India. So thanks to technology, we have become extremely fast followers. The number of apps being built in India might be higher than in most countries combined; the US and China might be the only ones ahead, but otherwise India would be in the top three in building apps. The move that needs to happen is to scale the ambition beyond India onto the global platform, because most of the effort of the last 15 years has gone into dominating Indian consumer businesses and applications. We need to up the game globally, and that will require significant public-private coordination, because the capital pools we are competing against cannot be attracted by private players alone. If someone elsewhere is raising 100 or 200 billion dollars, we need to at least start the race with 10, 15, or 20 billion, which is not possible today purely through private capital. So how we raise the capital bar and the global bar together, as government and as private players, is the one thing I would love to see.

Moderator

That’s a very valid statement. Right, next question, to Thomas. Thomas, if we had to place one strategic bet that defines India’s position in AI and semiconductors by 2030, what should it be?

Thomas Zacharia

So I’m going to repeat what Rahul said. I don’t know much about Bollywood dance moves, but I would say one move is certainly ambition. I’m going to regress back to a few previous questions, since we have a few minutes, and start with public-private alignment. Rahul mentioned that it is very, very hard for the private sector in India to raise the kind of capital that is being raised elsewhere. So one of the important things government can do is to de-risk that enterprise. Now, I don’t believe government should de-risk a private sector’s business venture by investing directly in that effort.

But there are unique places where government can de-risk through public-private partnerships that would enable this ecosystem to develop, so that additional ventures can be taken up by the private sector on its own. Because I don’t think my taxpayer money should be used to subsidize business; though, look, there is a role. You mentioned Genesis, and I did not describe it. I don’t know how many of you in the audience know what Genesis is, so I’ll take a couple of minutes to discuss it as an opportunity to think about how to frame public-private partnership. Today the United States spends a trillion dollars a year on R&D, and roughly 20 to 25 percent of that is government; the rest is private sector. If you look at R&D spend in the United States, it has been growing steadily, keeping up with inflation or slightly above it, 2 to 3 percent year over year. But if you look at innovation output, it has flatlined, in part because the problems are getting more and more complicated. Discovering new materials, a cure for cancer: all those things are increasingly, significantly impactful for society, but also significantly challenging.

So the goal of the Genesis Project is, one, to align public and private partnership, and two, to invest government resources to bring academia, national laboratories, and the private sector together to identify what they call lighthouse problems, or grand challenge problems, that are relevant and likely to move the needle across these areas. The government is then investing substantial resources in compute infrastructure and the software stack, partnering with the private sector on these important problems. Because it is being done in an open, collaborative framework, this work is, in my view, appropriate for government investment: the government is not investing directly in any particular business, but a business is able to take the fruits of the collaboration to drive innovation in its own sector.

So I think that is a really good model. As was already alluded to, if you are a fast follower, or if you follow anybody, the danger is this: it may be appropriate for a business, but as a nation, anytime you follow somebody, and if that is your ambition, you are destined at best to be number two, because there is always somebody ahead of you. So for a country with the history of India, the ambition of India, the talent of India, and now the will of India, there is nothing wrong with aspiring to be world leading in part of this. No country is going to dominate every aspect of this ecosystem, so it comes down to strategically identifying where one can be that leader globally.

And, speaking for AMD just as an example: we were discussing Helios and how it is based on open standards. There are many components; it may not be the GPU that you start with, but there are many components where the private sector in India can aspire to be a leading provider based on open standards, so that a business like AMD, or a public-private venture, would say: well, I can get a better product and better total cost of ownership if I can plug into that. And one last thing, because I cannot let you get away with saying this time is great only for the youngsters. Ray Kurzweil said that today, each of us in this room ages only eight months for every chronological year, because of advances in medical care.

And that is true: people are living longer because of better drugs, better health, better living, and so on. AI has the added advantage of providing even greater solutions. So it’s not just the youngsters; there is hope for us as well.

Moderator

Absolutely. We are all lucky to be here in this age of AI. That was very insightful; thank you, Thomas. So, Vivek, a question for you. I was going to ask what is the one bold decision, but I’m going to change that to: what are some of the bold decisions we must take to ensure we don’t look back with regret five years from today?

Vivek Kumar Singh

Well, I think the biggest advantage India has is, of course, a huge pool of talent, so that is something we all need to rely on; that’s the most important thing for a country like India. And India has an inherent culture of innovation, so it’s not that we are always following, or just looking at technologies. The fact is that the ecosystems we have been living in were not geared up, were not situated in a context where we were creating products, so the culture of transforming that innovation into products has unfortunately not been there for a long period of time. Things are changing now, and what we probably need to do is invest more in our youth, invest more in skilling, and invest more in converting the knowledge we generate in our universities and R&D labs into actual usable products that have socioeconomic impact.

So that’s the most important thing I believe we should be looking at. Of course, we also have the advantage of scale. A lot of what we have done, and proved, in terms of the digital public infrastructure we have created at the population scale of India matters a lot. If you go to any part of the world, particularly anywhere in Europe, and you identify yourself as being from India in a discussion related to IT, you will always be regarded with a lot of respect, in the sense that everybody believes India is an IT superpower, largely because of the talent that we have.

So this is something we should leverage, and something we really need to invest in heavily for the sake of the next generation: to provide an environment, and our Prime Minister keeps talking about ease of doing business, so that is something we really need to look into, to enable and create an environment where we are able to transform the knowledge we create into usable products.

Moderator

No, absolutely right. Talent, skilling, and ease of doing business: all of these are coming together for India. In fact, I led the committee for future skills, and I got the opportunity to work with 13 other eminent leaders from industry and academia across the board. One thing that stood out was that if we can get our skilling right, we can supply talent not just for India but globally; that is something that is going to be very effective. So thank you again. Today’s conversation was truly insightful and inspiring. We touched upon many aspects of semiconductors and AI, the ecosystem, and India’s potential as such.

And again, AI leadership will not happen by accident. It will require deliberate alignment across policy, industry, research, and infrastructure. We have many strengths that we need to keep strengthening and leveraging for the growth we are ambitious about. What truly matters now is decisive execution: moving with clarity and with urgency. It’s going to be a great journey, and I once again want to reiterate that we are truly lucky to be here in this phase. What a fantastic journey we have ahead of us; let’s be committed to that journey of learning and advancement. Thank you so much for attending this session. I appreciate your time.

Thank you. Do we have time for audience questions? We can take one or two.

Audience Question

Out of 500 sessions here, this is the one on semiconductors; I’m very glad you organized it. Very, very insightful, a few amazing questions and good responses. Quickly, to my question: I teach AI and sustainability at IIM, and I cover the entire supply chain, starting upstream and going through chip design and manufacturing, the semiconductor supply chain essentially, all the way to data centers and electronics use. So sustainability is at the core of all design decisions in my class, and that is what we are trying to teach the new management talent in India. Your thoughts on having sustainability not as a trade-off but as a core design choice for every decision made, whether in India or any other country?

Thomas Zacharia

It’s a great question. I can certainly speak for the company I represent here, and since I’m in India, I am going to give a shout-out to the 10,000 AMDers in this country: AMD would not exist without you; we would not be able to do what we are doing without the contributions you make every day. So India is already very much part of a global supply chain. Sustainability is very key. We design our products with an explicit goal of flattening the energy curve, because it’s easy to say we’re going to build megawatts and gigawatts, which we may, because this is going to be fundamental infrastructure on which society progresses.

But it’s incumbent on us to be very, very thoughtful and committed to sustainability. I would also say that we have to be humble enough to know that we are not going to get everything right. I was at a U.S. National Academy meeting where Subhash Suresh, who was the president at the time, had just rolled out the grand challenges for the 21st century. And he said: if you look at it, the grand challenges of the 21st century are attempting to solve the problems created by the solutions to the grand challenges of the 20th century. So the reality is that we don’t know what we don’t know. But as long as we treat sustainability as a core goal and stay humble enough to know that we won’t get it all right, we don’t have to stop progress.

We need to continue to move forward, knowing that we are not going to get everything right, and course-correct as we go along.

Moderator

Okay, I’m told we are out of time. Yes, we are running out of time. I really appreciate you joining us for this session, and I am heartily thankful to our distinguished guests. As a token from Mighty’s side, I would like to give a small memento. Pooja, you can also join. Thank you. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Vivek Kumar Singh
4 arguments · 201 words per minute · 2089 words · 622 seconds
Argument 1
India needs credible sovereign AI capability through scaled deployments and systematic efforts across all seven pillars of the AI mission
EXPLANATION
Singh argues that India’s credibility in AI comes not just from announcements but from scaled deployments and systematic growth. He emphasizes that India has taken a big step with the AI mission involving 10,000+ crores for five years, addressing all necessary needs through seven comprehensive pillars.
EVIDENCE
India AI mission with 10,000+ crores funding for five years, tax holidays for data centers, AI Coach platform for India-specific AI data and context
MAJOR DISCUSSION POINT
Building credible sovereign AI capability
Argument 2
Strategic autonomy approach needed – indigenize critical components while maintaining global collaboration in non-sensitive areas
EXPLANATION
Singh advocates for a balanced approach where India maintains its traditional culture of knowledge as common good while strategically deciding what to indigenize versus what to keep open for global collaboration. This requires clear rules about what needs to be built domestically versus what can be collaborative.
EVIDENCE
India’s historical culture of treating knowledge as common good, need for strategic decision-making in sectors with critical deployment components
MAJOR DISCUSSION POINT
Balancing national security with global collaboration
Argument 3
Current era offers unprecedented access to learning resources and support systems, making it the best time to be a student with shift from memory-based to creative problem-solving education
EXPLANATION
Singh argues that students today have access to customized information, free resources, and support systems that were unavailable 20 years ago. The education system is shifting from memory-based learning to synthetic and creative problem-solving approaches, supported by startup ecosystems and incubators.
EVIDENCE
NASSCOM’s FutureSkills Prime platform, India as third largest startup ecosystem, university incubators, free training programs from organizations, government skilling programs
MAJOR DISCUSSION POINT
Preparing next generations for future challenges
Argument 4
India’s talent pool and innovation culture are key advantages that need better conversion from knowledge creation to usable products with socioeconomic impact
EXPLANATION
Singh emphasizes that India has inherent innovation culture and huge talent pool, but historically lacked ecosystems to convert innovation into products. The focus should be on investing in youth, skilling, and transforming university and R&D knowledge into actual usable products with socioeconomic impact.
EVIDENCE
India’s recognition as IT superpower globally, digital public infrastructure created at population scale, ease of doing business initiatives
MAJOR DISCUSSION POINT
Bold decisions for future competitiveness
AGREED WITH
Jaya Jagadish
Thomas Zacharia
4 arguments · 136 words per minute · 1744 words · 765 seconds
Argument 1
India should move from compute to capability, focusing on sovereignty where data and applications are resident in-country and resilience through supply chain participation
EXPLANATION
Zacharia distinguishes between sovereignty (keeping data and applications relevant to India within the country) and resilience (participating in global supply chains). He argues India should focus on sovereignty in areas unique to India while building resilience through strategic supply chain participation rather than trying to compete directly in leading-edge manufacturing.
EVIDENCE
50,000 startups mentioned by CEO, opportunity in co-packaged optics and interconnect technology supply chains, leading candidates currently being US, Japan, Malaysia
MAJOR DISCUSSION POINT
India’s focus areas for realistic value creation
DISAGREED WITH
Rahul Garg
Argument 2
India needs systematic alignment of infrastructure, research, and industrial capacity with government providing policy and demand signals
EXPLANATION
Zacharia advocates for public-private partnerships where government provides policy framework and demand signals, particularly in science, innovation, and critical infrastructure. He cites China’s 20-25 year intentional development of HPC ecosystem that translated to AI leadership as a model for systematic long-term planning.
EVIDENCE
US Genesis project model, China’s 20-25 year HPC to AI development trajectory, India’s existing supercomputing mission, US spending $1 trillion annually on R&D with 20-25% government contribution
MAJOR DISCUSSION POINT
Lessons from US and Chinese models for India
AGREED WITH
Rahul Garg
Argument 3
Government should de-risk private enterprise through public-private partnerships without directly subsidizing business ventures, similar to the US Genesis Project model
EXPLANATION
Zacharia argues that government should de-risk private sector investments through strategic partnerships rather than direct subsidies. The Genesis Project model involves government investing in compute infrastructure and software stack for ‘lighthouse problems’ in an open collaborative framework that private sector can leverage.
EVIDENCE
US Genesis Project addressing innovation output flatline despite growing R&D spend, government investment in lighthouse/grand challenge problems, open collaborative framework allowing private sector to commercialize results
MAJOR DISCUSSION POINT
Strategic public-private partnership models
AGREED WITH
Rahul Garg
DISAGREED WITH
Rahul Garg
Argument 4
Sustainability must be a core design choice rather than a trade-off, with explicit goals of flattening energy curves in product development
EXPLANATION
Zacharia emphasizes that sustainability should be built into design decisions from the beginning, with explicit goals of flattening energy consumption curves. He acknowledges the need for humility in recognizing that current solutions may create future problems, requiring continuous course correction while maintaining progress.
EVIDENCE
AMD’s explicit goal of flattening energy curve in product design, example of 21st century grand challenges solving problems created by 20th century solutions
MAJOR DISCUSSION POINT
Sustainability as core design choice
Rahul Garg
3 arguments · 172 words per minute · 1338 words · 466 seconds
Argument 1
Indian private sector shows growing appetite for localized manufacturing post-COVID, with increasing capital flow from both government and private sources exceeding $100 billion in commitments
EXPLANATION
Garg argues that COVID created awareness about supply chain vulnerabilities, leading to increased appetite for localized manufacturing. Both demand and capital are growing, with government funds of 1 lakh crore and private commitments exceeding $100 billion, though the capital may not yet be broad-based.
EVIDENCE
COVID supply chain shocks (masks, oxygen concentrators), government fund of 1 lakh crore, private capital commitments over $100 billion for data centers and localization
MAJOR DISCUSSION POINT
Private sector readiness for long-term investments
AGREED WITH
Thomas Zacharia
Argument 2
India must increase scale of ambition beyond domestic market to compete globally, requiring significant public-private capital coordination
EXPLANATION
Garg argues that while India has become excellent at fast-following (reducing lag from years to months), the next step requires scaling ambition to global platforms. This requires capital pools that private players alone cannot attract, necessitating coordinated public-private investment at the scale of 10-20 billion dollars to compete with global players raising 100-200 billion.
EVIDENCE
ChatGPT adoption in India within first month, India potentially in top three globally for app development, need to match global capital raising of 100-200 billion dollars
MAJOR DISCUSSION POINT
India’s strategic positioning for global competitiveness
AGREED WITH
Thomas Zacharia
DISAGREED WITH
Thomas Zacharia
Argument 3
Vertical integration model emerging in India where companies build entire stacks from design to manufacturing, unlike traditional horizontal specialization
EXPLANATION
Garg observes that Indian companies are building integrated vertical stacks covering chip design to manufacturing to systems to products, contrasting with the traditional Western model of horizontal specialization. He suggests trying multiple approaches simultaneously rather than being afraid of failure.
EVIDENCE
Historical pattern of Indian startups and large companies building vertical integrated stacks over last 15 years, contrast with Western horizontal capabilities model
MAJOR DISCUSSION POINT
Early positioning for AI infrastructure shifts
DISAGREED WITH
Thomas Zacharia
Jaya Jagadish
2 arguments · 141 words per minute · 1141 words · 484 seconds
Argument 1
True AI leadership requires alignment of silicon, software, systems, and policy working together
EXPLANATION
Jagadish argues that achieving AI leadership requires all four aspects – silicon, software, systems, and policy – to work in complete alignment. No single aspect can achieve leadership alone, and this comprehensive approach is what makes the current moment exciting with experts covering multiple aspects.
EVIDENCE
Three decades of experience in semiconductor industry witnessing evolution from single-threaded processors to massively parallel AI systems
MAJOR DISCUSSION POINT
Comprehensive approach to AI leadership
AGREED WITH
Moderator
Argument 2
Proper skilling initiatives can enable India to supply talent globally, not just domestically
EXPLANATION
Jagadish emphasizes that if India gets skilling right, it can supply talent not just for domestic needs but globally. She references her experience leading a committee on future skills with 13 industry and academic leaders, highlighting the global potential of Indian talent development.
EVIDENCE
Experience leading future skills committee with 13 eminent leaders from industry and academia
MAJOR DISCUSSION POINT
Talent development for global competitiveness
AGREED WITH
Vivek Kumar Singh
Moderator
3 arguments · 97 words per minute · 981 words · 602 seconds
Argument 1
India’s opportunity in AI and semiconductors is real but time-bound, requiring momentum beyond just announcements with proper sequencing, capital discipline, institutional alignment and infrastructure depth working in complete alignment
EXPLANATION
The moderator emphasizes that while India has significant opportunities in AI and semiconductors, success requires more than just momentum or announcements. All critical areas including sequencing of initiatives, disciplined capital allocation, institutional coordination, and deep infrastructure development must work together in complete alignment to achieve success.
MAJOR DISCUSSION POINT
Comprehensive alignment needed for AI and semiconductor success
AGREED WITH
Jaya Jagadish
Argument 2
In the global race where others are moving fast, India needs strategic execution rather than single moves to stay competitive
EXPLANATION
The moderator frames the challenge as a global competitive race where other countries are advancing rapidly. Rather than looking for one silver bullet solution, India needs comprehensive strategic execution across multiple fronts to maintain competitiveness in the AI and semiconductor space.
MAJOR DISCUSSION POINT
Strategic positioning in global AI competition
Argument 3
Talent development and skilling initiatives have global potential if executed correctly, enabling India to supply talent worldwide
EXPLANATION
The moderator reinforces the importance of getting skilling initiatives right, noting that proper talent development can position India not just to meet domestic needs but to become a global supplier of skilled talent. This builds on insights from working with industry and academic leaders on future skills development.
EVIDENCE
Experience leading future skills committee with 13 eminent leaders from industry and academia
MAJOR DISCUSSION POINT
Global talent supply potential through proper skilling
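Incidentally, the per-speaker statistics above are internally consistent: each "words per minute" figure equals the word count divided by the speaking time, converted to minutes and truncated to an integer. A minimal sketch checking this, with the figures copied from this section (the variable names are illustrative, not part of any tool):

```python
# Verify that listed words-per-minute values match words / (seconds / 60),
# truncated to an integer (floor division).
speakers = {
    "Vivek Kumar Singh": (2089, 622, 201),
    "Thomas Zacharia":   (1744, 765, 136),
    "Rahul Garg":        (1338, 466, 172),
    "Jaya Jagadish":     (1141, 484, 141),
    "Moderator":         (981, 602, 97),
}

for name, (words, seconds, listed_wpm) in speakers.items():
    wpm = words * 60 // seconds  # integer (floor) division
    assert wpm == listed_wpm, f"{name}: computed {wpm}, listed {listed_wpm}"

print("all words-per-minute figures consistent")
```

All five figures check out under this rule, which suggests the analysis tool floors rather than rounds (Vivek Kumar Singh's exact rate is about 201.5 wpm, listed as 201).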
Agreements
Agreement Points
India’s talent pool and human capital are fundamental advantages that need better utilization
Speakers: Vivek Kumar Singh, Jaya Jagadish
India’s talent pool and innovation culture are key advantages that need better conversion from knowledge creation to usable products with socioeconomic impact
Proper skilling initiatives can enable India to supply talent globally, not just domestically
Both speakers emphasize that India’s human talent is a core strength that, if properly developed through skilling initiatives, can serve not just domestic needs but global markets. They agree on the need to convert knowledge into practical products with socioeconomic impact.
Public-private partnerships are essential for achieving AI and semiconductor leadership
Speakers: Thomas Zacharia, Rahul Garg
India needs systematic alignment of infrastructure, research, and industrial capacity with government providing policy and demand signals
Indian private sector shows growing appetite for localized manufacturing post-COVID, with increasing capital flow from both government and private sources exceeding $100 billion in commitments
Both speakers agree that successful AI and semiconductor development requires coordinated public-private partnerships, with government providing policy framework and demand signals while private sector contributes capital and execution capability.
India needs to move beyond being a fast follower to becoming a global leader
Speakers: Thomas Zacharia, Rahul Garg
Government should de-risk private enterprise through public-private partnerships without directly subsidizing business ventures, similar to the US Genesis Project model
India must increase scale of ambition beyond domestic market to compete globally, requiring significant public-private capital coordination
Both speakers acknowledge that while India has become excellent at fast-following, true leadership requires scaling ambition to compete globally and requires strategic capital coordination at unprecedented scales.
Comprehensive alignment across multiple dimensions is required for AI leadership
Speakers: Jaya Jagadish, Moderator
True AI leadership requires alignment of silicon, software, systems, and policy working together
India’s opportunity in AI and semiconductors is real but time-bound, requiring momentum beyond just announcements with proper sequencing, capital discipline, institutional alignment and infrastructure depth working in complete alignment
Both emphasize that AI leadership cannot be achieved through single initiatives but requires comprehensive alignment across technology, policy, infrastructure, and institutional dimensions working in coordination.
Similar Viewpoints
Both speakers advocate for a balanced approach to sovereignty – being strategic about what to indigenize versus what to keep open for collaboration, distinguishing between sovereignty (domestic control of critical applications/data) and resilience (strategic supply chain participation).
Speakers: Vivek Kumar Singh, Thomas Zacharia
Strategic autonomy approach needed – indigenize critical components while maintaining global collaboration in non-sensitive areas
India should move from compute to capability, focusing on sovereignty where data and applications are resident in-country and resilience through supply chain participation
Both speakers are optimistic about the current educational and learning environment, emphasizing that students today have unprecedented access to resources and that proper talent development can position India as a global talent supplier.
Speakers: Vivek Kumar Singh, Jaya Jagadish
Current era offers unprecedented access to learning resources and support systems, making it the best time to be a student, with a shift from memory-based to creative problem-solving education
Talent development and skilling initiatives have global potential if executed correctly, enabling India to supply talent worldwide
Both speakers acknowledge the need for integrated approaches – Thomas emphasizing sustainability integration into design from the beginning, and Rahul noting India’s tendency toward vertical integration across the entire technology stack.
Speakers: Thomas Zacharia, Rahul Garg
Sustainability must be a core design choice rather than a trade-off, with explicit goals of flattening energy curves in product development
Vertical integration model emerging in India where companies build entire stacks from design to manufacturing, unlike traditional horizontal specialization
Unexpected Consensus
India should not aim to directly compete in leading-edge semiconductor manufacturing but focus on strategic supply chain participation
Speakers: Thomas Zacharia, Rahul Garg
India should move from compute to capability, focusing on sovereignty where data and applications are resident in-country and resilience through supply chain participation
Vertical integration model emerging in India where companies build entire stacks from design to manufacturing, unlike traditional horizontal specialization
Despite the common narrative of India needing to compete directly in advanced chip manufacturing, both speakers surprisingly agree that India should focus on strategic supply chain participation and unique capabilities rather than trying to match leading-edge fabs. This pragmatic consensus suggests a more realistic and strategic approach to semiconductor development.
The importance of humility and course correction in technological development
Speakers: Thomas Zacharia, Vivek Kumar Singh
Sustainability must be a core design choice rather than a trade-off, with explicit goals of flattening energy curves in product development
India’s talent pool and innovation culture are key advantages that need better conversion from knowledge creation to usable products with socioeconomic impact
Unexpectedly, both speakers emphasize the need for humility in technological development – Thomas explicitly mentioning that 21st century solutions may create new problems requiring course correction, and Vivek acknowledging the need to transform India’s traditional knowledge-sharing culture into product-focused innovation. This consensus on adaptive learning is significant for sustainable development.
Overall Assessment

The speakers demonstrate strong consensus on key strategic approaches: the critical importance of public-private partnerships, the need to leverage India’s talent advantages through proper skilling, the requirement for comprehensive alignment across multiple dimensions, and a pragmatic approach to sovereignty that balances indigenization with global collaboration. There is also unexpected agreement on focusing on strategic supply chain participation rather than direct competition in leading-edge manufacturing.

High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different backgrounds (policy, industry, technology) but align on fundamental strategic directions. This strong consensus suggests a mature understanding of India’s realistic opportunities and constraints in AI and semiconductors, which could facilitate coordinated action across government, industry, and academia. The alignment on practical approaches over aspirational goals indicates potential for effective implementation of India’s AI and semiconductor strategies.

Differences
Different Viewpoints
Approach to global competitiveness – following vs leading
Speakers: Rahul Garg, Thomas Zacharia
India must increase scale of ambition beyond domestic market to compete globally, requiring significant public-private capital coordination
India should move from compute to capability, focusing on sovereignty where data and applications are resident in-country and resilience through supply chain participation
Garg advocates for scaling up to compete directly with global leaders through massive capital coordination, while Zacharia suggests a more strategic approach focusing on specific areas where India can lead rather than trying to match others in all areas
Role of government in private sector investment
Speakers: Rahul Garg, Thomas Zacharia
India must increase scale of ambition beyond domestic market to compete globally, requiring significant public-private capital coordination
Government should de-risk private enterprise through public-private partnerships without directly subsidizing business ventures, similar to the US Genesis Project model
Garg calls for direct government-private capital coordination to match global investment levels, while Zacharia emphasizes government should de-risk rather than directly subsidize private ventures
Manufacturing strategy – vertical integration vs strategic positioning
Speakers: Rahul Garg, Thomas Zacharia
Vertical integration model emerging in India where companies build entire stacks from design to manufacturing, unlike traditional horizontal specialization
India should move from compute to capability, focusing on sovereignty where data and applications are resident in-country and resilience through supply chain participation
Garg observes and seems to support the vertical integration approach where Indian companies build complete stacks, while Zacharia advocates for strategic positioning in specific supply chain components rather than trying to do everything
Unexpected Differences
Timeline and urgency of action
Speakers: Rahul Garg, Thomas Zacharia
India must increase scale of ambition beyond domestic market to compete globally, requiring significant public-private capital coordination
India needs systematic alignment of infrastructure, research, and industrial capacity, with government providing policy and demand signals
While both recognize the need for action, Garg emphasizes immediate scaling to match global competitors, while Zacharia advocates for longer-term systematic development citing China’s 20-25 year approach. This difference in timeline perspective was unexpected given both speakers’ industry backgrounds
Overall Assessment

The main disagreements center around strategic approach (direct competition vs strategic positioning), government role (direct investment vs de-risking), and manufacturing strategy (vertical integration vs selective participation). However, there is strong consensus on the need for public-private partnerships, talent development, and India’s potential in AI and semiconductors.

Moderate disagreement on methods and approaches, but strong alignment on overall goals and India’s potential. The disagreements are constructive and reflect different but complementary perspectives on achieving the same objectives. This suggests healthy debate that could lead to a more comprehensive strategy incorporating elements from all viewpoints.

Partial Agreements
Both agree on the need for strategic autonomy and sovereignty in critical areas, but Singh focuses more on policy frameworks for deciding what to indigenize vs collaborate on, while Zacharia emphasizes technical capability building in specific supply chain areas
Speakers: Vivek Kumar Singh, Thomas Zacharia
Strategic autonomy approach needed – indigenize critical components while maintaining global collaboration in non-sensitive areas
India should move from compute to capability, focusing on sovereignty where data and applications are resident in-country and resilience through supply chain participation
All speakers agree on the importance of public-private partnerships and increased investment, but they differ on the mechanisms – Garg wants direct capital coordination, Zacharia prefers de-risking approaches, and Singh emphasizes systematic government-led initiatives
Speakers: Rahul Garg, Thomas Zacharia, Vivek Kumar Singh
Indian private sector shows growing appetite for localized manufacturing post-COVID, with increasing capital flow from both government and private sources exceeding $100 billion in commitments
Government should de-risk private enterprise through public-private partnerships without directly subsidizing business ventures, similar to the US Genesis Project model
India needs credible sovereign AI capability through scaled deployments and systematic efforts across all seven pillars of the AI mission
Takeaways
Key takeaways
India must achieve AI leadership through systematic alignment of silicon, software, systems, and policy rather than focusing on individual components
India should transition from being a fast follower to a global leader by moving from compute to capability, focusing on sovereignty and supply chain resilience
Public-private partnerships are essential for scaling investments, with government de-risking enterprises without directly subsidizing private business ventures
India’s talent pool and innovation culture are key competitive advantages that need better conversion from knowledge creation to marketable products
Strategic autonomy approach is needed – indigenizing critical components while maintaining global collaboration in non-sensitive areas
Sustainability must be integrated as a core design principle rather than treated as a trade-off in all AI and semiconductor development
Current technological era provides unprecedented learning opportunities, requiring a shift from memory-based to creative problem-solving education
India should focus on niche supply chain areas like co-packaged optics rather than competing directly in leading-edge chip manufacturing
Resolutions and action items
Leverage India’s 50,000-startup ecosystem by identifying and partnering with top performers for maturation support
Invest heavily in youth skilling and conversion of university/R&D knowledge into usable products with socioeconomic impact
Scale capital coordination between government and private sector to compete globally, targeting $10-20 billion initial investments
Develop vertical integration model where Indian companies build complete stacks from design to manufacturing
Implement strategic decision-making framework to determine what to indigenize versus what to keep open for global collaboration
Focus on building capabilities in emerging areas like optics and interconnect technology for AI infrastructure
Unresolved issues
Specific mechanisms for converting India’s strong VLSI design capabilities into Indian-owned intellectual property
Detailed roadmap for achieving the scale and speed required to compete with established global players
Concrete strategies for balancing national security concerns with openness in global collaboration
Specific identification of which technologies should be indigenized versus kept open for collaboration
Timeline and milestones for transitioning from fast follower to global leader status
Detailed framework for public-private partnership models that effectively de-risk without subsidizing
Suggested compromises
Focus on contributing to the supply chain for leading-edge deployment rather than competing in cutting-edge chip design initially
Adopt strategic autonomy approach – maintain sovereignty in critical areas while collaborating globally in non-sensitive domains
Pursue vertical integration model suited to the Indian ecosystem rather than traditional horizontal specialization
Balance ambition for global leadership with realistic assessment of current capabilities and market position
Integrate sustainability as a core design principle while acknowledging that perfect solutions may not be immediately achievable
Thought Provoking Comments
The opportunity for India is to move from compute to capability, right? I mean, that’s really where we need to be. And I’ll pick a couple of areas. So sovereignty and resilience gets intermingled. So I’m going to sort of keep those two things separate.
This comment reframes the entire discussion by distinguishing between technical infrastructure (compute) and strategic value creation (capability), while also making a crucial distinction between sovereignty (data/applications resident in-country) and resilience (supply chain independence). This conceptual clarity provides a strategic framework for thinking about India’s positioning.
This shifted the conversation from generic discussions about manufacturing and investment to more strategic thinking about where India should focus its efforts. It led to deeper exploration of specific areas like co-package optics and supply chain positioning, moving the discussion from aspirational to tactical.
Speaker: Thomas Zacharia
China developed the intellectual ecosystem around HPC, which then translated to AI, over a period of 20 to 25 years. It was intentional. And if you look at where the AI penetration, AI adoption, AI infrastructure resides globally, you can directly trace that to investments in sort of supercomputing mission that built the underlying infrastructure.
This historical perspective reveals the long-term, systematic nature of building AI capabilities and challenges the notion that AI leadership can be achieved quickly. It provides a concrete example of how foundational investments in one area (HPC) can create advantages in emerging technologies (AI).
This comment introduced a temporal dimension to the discussion, emphasizing that India’s AI ambitions require sustained, long-term commitment rather than short-term initiatives. It influenced subsequent discussions about the need for patient capital and systematic planning.
Speaker: Thomas Zacharia
So it’s not only that researchers, you know, faculty members in universities, they should end up with research publication, that’s all. So it’s very, very important that you productize also… So I think what we need is, I would say two words would be important is strategic autonomy. Autonomy in the sense that autonomy where it is needed, but at all other places where we can collaborate with the world.
This comment addresses a fundamental cultural shift needed in India’s research ecosystem – moving from knowledge creation to knowledge commercialization. The concept of ‘strategic autonomy’ provides a nuanced approach to balancing self-reliance with global collaboration, avoiding the extremes of complete isolation or total dependence.
This reframed the discussion about national security and openness from a binary choice to a strategic decision-making framework. It influenced how other panelists discussed the balance between protecting critical capabilities while remaining open to collaboration.
Speaker: Vivek Kumar Singh
In India, historically over the last 15 years, every startup, every large company has built vertical stacks of companies. So they are doing an integrated. They may be chip designed to manufacture, to systems, to product. I mean, like that’s how just the model has evolved so far.
This observation identifies a unique characteristic of Indian business model evolution that differs from Western horizontal specialization. It suggests that India’s approach to building capabilities might naturally tend toward vertical integration, which could be an advantage in complex technology stacks like AI and semiconductors.
This insight shifted the conversation toward recognizing India’s unique strengths and approaches rather than simply copying Western models. It influenced discussions about how India might leverage its natural tendency toward integrated solutions as a competitive advantage.
Speaker: Rahul Garg
If you follow anybody, the danger is that it may be appropriate for a business, but as a nation, anytime you follow somebody, and if that is your ambition, you are destined at best to be number two, at best, because there is always somebody ahead of you.
This comment challenges the ‘fast follower’ strategy that Rahul had praised, arguing that national-level strategy requires different thinking than business strategy. It pushes for leadership ambitions rather than catching up, which is particularly relevant for a country of India’s scale and talent.
This created a productive tension in the discussion, forcing participants to think beyond incremental improvement to breakthrough positioning. It elevated the conversation from tactical execution to strategic vision and influenced the final recommendations about bold decision-making.
Speaker: Thomas Zacharia
Ray Kurzweil said that today, for each of us in this room, we age only eight months for every chronological year because of advances in medical care… So it’s not just the youngsters, there is hope for us if [AI continues advancing]
While seemingly light-hearted, this comment connects AI advancement to human longevity and personal stakes, making the technology discussion more relatable and urgent. It suggests that the AI revolution will benefit current participants, not just future generations.
This humanized the technical discussion and reinforced the urgency of getting AI strategy right. It connected the abstract policy and technology discussions to personal and societal benefits, adding emotional resonance to the strategic imperatives discussed throughout the session.
Speaker: Thomas Zacharia
Overall Assessment

These key comments fundamentally shaped the discussion by providing strategic frameworks, historical context, and conceptual clarity that elevated the conversation from generic enthusiasm about AI to sophisticated strategic thinking. Thomas Zacharia’s distinction between compute and capability, and sovereignty versus resilience, provided analytical structure that other panelists built upon. His historical perspective on China’s systematic approach introduced temporal thinking that influenced discussions about patient capital and long-term planning. Vivek’s concept of strategic autonomy offered a nuanced middle path between isolation and dependence, while Rahul’s observation about vertical integration highlighted India’s unique strengths. The challenge to ‘fast follower’ mentality pushed the discussion toward leadership ambitions rather than catch-up strategies. Together, these comments transformed what could have been a superficial discussion about India’s AI potential into a sophisticated analysis of strategic positioning, cultural adaptation, and the need for systematic, long-term thinking in building technological capabilities.

Follow-up Questions
Can you tell me who the top 50 startups are so that perhaps a company like AMD can partner with them and try to help them to mature?
Thomas mentioned being overwhelmed by the scale of 50,000 startups in India and wanted to identify the top performers for potential partnerships, indicating a need for better startup ecosystem mapping and prioritization.
Speaker: Thomas Zacharia
How do we convert the knowledge that we generate in our universities, in our R&D labs into actual usable products which have socio-economic impact?
This addresses the critical gap between academic research and commercialization in India, which is essential for building indigenous IP and competitive advantage in AI and semiconductors.
Speaker: Vivek Kumar Singh
What is it that we want to do in terms of strategic decision-making for sectors like AI and semiconductor – what should we indigenize versus what can remain open for collaboration?
This relates to developing a clear framework for strategic autonomy while maintaining beneficial international collaborations, which is crucial for national security and economic competitiveness.
Speaker: Vivek Kumar Singh
Can we execute at the same speed and scale as the capital commitments being made?
While capital is flowing into Indian manufacturing and AI infrastructure, there’s uncertainty about execution capabilities matching the financial commitments, which could determine success or failure of these initiatives.
Speaker: Rahul Garg
How do we ensure sustainability is not a trade-off but a core design choice for every decision in AI and semiconductor development?
This addresses the critical need to integrate environmental considerations into the design and deployment of AI infrastructure and semiconductor manufacturing from the outset, rather than as an afterthought.
Speaker: Audience member (IIM professor)
How should we plan for where AI and semiconductor technology will be by the time we deploy infrastructure, rather than planning based on current capabilities?
This highlights the need for forward-looking infrastructure planning that anticipates technological evolution, ensuring investments remain relevant and competitive over their operational lifetime.
Speaker: Thomas Zacharia
How can India identify and develop niche areas in the AI/semiconductor supply chain where it can achieve first-mover advantage despite being late to the overall market?
This addresses the strategic challenge of finding competitive opportunities in a market where other countries have significant head starts, requiring identification of emerging or underserved segments.
Speaker: Rahul Garg

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

The Role of Government and Innovators in Citizen-Centric AI

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on how artificial intelligence, particularly large language models, can transform public sector operations and improve government services for citizens. The panel featured European AI leaders including Arthur Mensch from Mistral AI, Jarek Kutylowski from DeepL, Matteo Valero from Barcelona Supercomputing Center, and Roberto Viola from the European Commission, exploring AI applications in government and public administration.


Arthur Mensch emphasized that AI’s value lies in automating complex, fragmented processes rather than just individual productivity gains, citing examples like procurement and employment matching services. He stressed the importance of designing AI systems that allow humans to delegate entire workflows rather than requiring constant human intervention. Jarek Kutylowski highlighted AI’s potential in overcoming multilingual challenges in diverse societies, enabling real-time translation services for citizen interactions and government communications. Matteo Valero discussed the evolution from supercomputing to AI factories, explaining how these platforms provide free access to AI tools and expertise to make technology more accessible to society.


Roberto Viola addressed the “Solow paradox” – the observation that IT investments often don’t translate to productivity gains – arguing that AI can break this pattern through agentic AI that creates new processes rather than simply overlaying existing ones. However, he emphasized that successful AI adoption requires empowering public sector workers and redesigning organizational processes. The panelists agreed that the main challenges involve reskilling workers to become effective AI delegators and fundamentally rethinking workflows rather than just digitizing existing bureaucratic processes. The discussion concluded with calls for stronger partnerships between Europe and India, emphasizing that multiple futures for AI development exist beyond dominant global models.


Keypoints

Major Discussion Points:

AI Applications in Public Sector Efficiency: Discussion of how large language models and AI can automate complex processes, improve procurement, enhance public services, and address talent shortages in government administration through tools like job matching and report writing.


Multilingual Communication and Language Barriers: Exploration of how AI translation and language technologies can help governments serve diverse populations more effectively, enabling real-time conversations with citizens and translating official documents across multiple languages.


Infrastructure and Computing Capacity: Overview of European AI infrastructure including supercomputing centers, AI factories, and the EuroHPC initiative, emphasizing the need for accessible platforms that provide free AI services to citizens and researchers.


Organizational Change and Skills Development: Analysis of the challenges in AI adoption, including the need to reskill workers, redesign workflows, move from individual to collective productivity gains, and transform employees from individual contributors to AI process managers.


Policy Framework and International Collaboration: Discussion of how policy must evolve to support AI transformation, the importance of public-private partnerships, and the potential for EU-India collaboration in developing AI solutions for the global south.


Overall Purpose:

The discussion aimed to explore how artificial intelligence, particularly large language models, can transform public sector operations and improve citizen services. The panel sought to identify practical applications, address implementation challenges, and discuss policy frameworks needed to facilitate AI adoption in government while fostering international collaboration between Europe and India.


Overall Tone:

The discussion maintained an optimistic and collaborative tone throughout, with speakers expressing enthusiasm about AI’s potential while acknowledging realistic challenges. The conversation was forward-looking and solution-oriented, emphasizing partnership opportunities and shared values between Europe and India. There was a consistent theme of democratizing AI access and ensuring technology serves citizens effectively, with speakers balancing technical expertise with practical implementation concerns.


Speakers

Speaker 1: Area of expertise, role, and title not mentioned


Roberto Viola: Director-General of DG CONNECT, European Commission; plays a pivotal role in digital policies


Jarek Kutylowski: Founder and CEO of DeepL, a German company specializing in language technologies; working since 2017 in AI-based translation tools


Matteo Valero: Professor of computer architecture at Technical University of Catalonia; founding director of the Barcelona Supercomputing Center; director of an AI factory


Lucilla Sioli: Panel moderator/host; appears to be in a senior role at the European Commission (Roberto Viola refers to her as his boss)


Arthur Mensch: Co-founder and CEO of Mistral AI, a European company developing large language models


Additional speakers:


None identified beyond the provided list of speaker names.


Full session report: Comprehensive analysis and detailed insights

This panel discussion brought together leading European AI experts to explore artificial intelligence applications in the public sector and opportunities for Europe-India collaboration. The conversation featured Arthur Mensch from Mistral AI, Jarek Kutylowski from DeepL, Matteo Valero from Barcelona Supercomputing Center, and Roberto Viola from the European Commission, with moderator Lucilla Sioli facilitating the discussion in India as part of broader efforts to build technology application capacity and foster global south partnerships.


AI Applications in Public Sector

Arthur Mensch emphasized that AI’s primary value in government lies in automating complex, fragmented processes rather than simply boosting individual productivity. He described Mistral’s “AI for Citizens” programme, which creates a horizontal platform for specific use cases around procurement, report writing, and public service delivery. Mensch noted that their product evolved from “Le Chat” to “Vibe” and mentioned ongoing collaboration with France Travail, France’s employment agency, though his explanation of this partnership was cut short in the discussion.


The key insight Mensch provided was that successful AI implementation requires moving beyond individual productivity tools to collective process automation. He stressed that AI systems should allow humans to delegate entire workflows and “get out of the way” of the automation, comparing it to effective coding practices where tasks are handed to AI systems for completion rather than supervised through constant human intervention.


Multilingual Communication and Language Solutions

Jarek Kutylowski addressed linguistic diversity challenges, reframing multilingualism from a challenge to something “pretty beautiful” about diverse societies. He highlighted how countries like Canada and Switzerland have requirements to communicate with citizens in multiple languages, creating natural applications for AI-powered language solutions. DeepL’s work demonstrates AI’s capacity to bridge communication gaps in both written and spoken conversations, enabling citizens to interact with government services in their preferred languages.


Kutylowski mentioned various translation applications, from R&D documentation to maintenance records, emphasizing that different government use cases require varying approaches—translating legislation demands different considerations than enabling real-time citizen conversations.


Infrastructure Development and European Computing Capacity

Matteo Valero provided historical context, tracing developments from Seymour Cray’s first supercomputers 50 years ago to today’s AI transformation. He explained how Europe’s EuroHPC initiative has created substantial computing capacity, noting that “out of the first 15” top supercomputers globally, “we have 6 in Europe.” This infrastructure foundation enables what he termed “AI factories”—platforms combining hardware, software, and skilled personnel experienced in technology transfer.


The Barcelona Supercomputing Center, employing 1,400 people with 500 focused on AI-related work, exemplifies this approach by providing free services and expertise to connect technology with society. Valero emphasized existing collaborations with Indian institutions, specifically mentioning SIDAC and the Institute of Science in Bengaluru, as foundations for expanded cooperation.


The Productivity Paradox and Organizational Challenges

Roberto Viola introduced the Solow Paradox—the observation that increased IT investment often fails to translate into productivity gains. He explained this occurs because new digital systems typically overlay existing processes rather than replacing them, creating dual systems that increase costs without improving efficiency.


Viola cited European Investment Bank research showing AI can generate 4% productivity increases, attributing this improvement partly to “agentic AI” that creates new processes rather than simply digitizing existing ones. However, he stressed that even sophisticated AI systems will fail without organizational readiness and employee empowerment: when workers adopt systems reluctantly, costs can double while adoption rates stay low.


Skills Development and Demographic Adaptation

The discussion revealed insights about how different groups adapt to AI tools. Mensch observed that very young developers (around 23) and senior architects (35+) adapt most successfully to AI coding tools, while mid-career professionals struggle more because they’ve become attached to traditional working methods. Young developers naturally integrate AI into workflows, while senior professionals can provide architectural guidance to AI systems.


This observation led to broader discussions about fundamental skills transformation required for AI adoption. Mensch emphasized that successful AI implementation requires people to become effective delegators—a skill not typically emphasized in traditional educational systems.


Policy Innovation and Systemic Transformation

Viola argued that policy must be “disruptive” rather than merely adaptive to avoid creating “digital bureaucracy.” He advocated for completely reimagining state-citizen relationships, moving from models where citizens visit government offices to systems where government services reach citizens through AI agents and digital solutions.


This vision connects to concepts of digital identity and public digital infrastructure, where citizens control their own identity attributes and interact seamlessly with government services. Viola suggested that both Europe and India share beliefs in public digital infrastructure that could form foundations for innovative governance models.


International Collaboration and Alternative Futures

The discussion emphasized Europe-India collaboration potential in AI development. Valero noted that individual European nations cannot compete with China and the United States, which control over 80% of computing power, talent, and investment in AI. He proposed that a Europe-India alliance could provide an alternative development path.


Viola’s closing remarks provided significant insight about AI’s future, drawing on his experience attending AI summits from Bletchley Park (with 20 participants) to the current summit (with thousands). He argued that this growth in participation demonstrates how AI’s future remains unwritten, challenging assumptions that the world should simply accept predetermined AI development paths.


Key Takeaways

The discussion revealed mature understanding of AI implementation challenges extending beyond technical capabilities to organizational psychology, economic theory, and governance innovation. The speakers demonstrated consensus on key principles: the need for fundamental transformation rather than incremental digitization, the critical importance of human empowerment and skills development, and the potential for alternative AI development paths prioritizing public benefit.


The conversation highlighted potential for Europe-India collaboration to create AI ecosystems focused on citizen services, multilingual accessibility, and democratic governance. Rather than accepting predetermined technological futures, the panelists advocated for actively shaping AI development to serve diverse societies and governance models, positioning AI as a tool whose impact will be determined by current choices in its development and implementation.


Session transcriptComplete transcript of the session
Speaker 1

precisely this, how do we sort of build capacity in order for this technology to be applied significantly better. And in the days to come, I would really love to see a day when India and the EU collaborate much more closely to make this happen, not just in India, but all over the global south. Thank you very much for having me. Thank you very much. Don’t go away, because now I’m going to call the panel. We have a distinguished panel today, but we would like to take a picture first. So if I can invite Vice President and Secretary Krishnan to stand here, and then I invite Arthur Mensch, the co-founder and CEO of Mistral AI.

He’s the CEO of Mistral AI, a European company developing large language models; if you can just stand next to the Secretary. But also Jarek Kutylowski, who is the founder and CEO of a German company called DeepL, which works on language technologies. Matteo Valero, who is a professor of computer architecture at the Technical University of Catalonia and the founding director of the Barcelona Supercomputing Center. And from the European Commission, I’m pleased to announce Roberto Viola. He’s the director general of DG CONNECT, and he plays a pivotal role: he’s the director general for our digital policies. Okay, so as I said, it’s a very distinguished panel from the European Union. And I would like to thank all of you for being here to participate.

I’ll start with Arthur from Mistral. I repeat that he comes from Mistral, which is a European company and one of the main large language model developers. In your opinion, how can LLMs, or general-purpose models in general, reshape the public sector? And as a developer, how do you work with governments to apply them in the public sector?

Arthur Mensch

I’m the co-founder of a company called Mistral, and we effectively train language models and perception models, and we then use them to create applications for businesses and for states. Typically the model is never enough to actually provide value for the states we work with. We have a program called AI for Citizens that has multiple pillars, but when we work with states, the first thing we work on is efficiency. What generative AI allows you to do is to delegate tasks in general and to automate certain processes that can be fairly complex, that can be fragmented, that can involve multiple people, that can involve multiple tools, that can deal with IT legacy. And so a state is not different, an administration is not different from an enterprise in that respect: they have IT problems, they have processes that are sometimes inefficient, they have pressure on talent because there are a lot of people that are actually retiring. So knowledge is a very big problem, and management of that knowledge is a very big problem.

The kind of things we do is related to that. So we deploy our horizontal platform and we create use cases. We work backward from use cases that are around procurement, that are around writing reports. And then what is more visible, in that it can show to the citizens themselves, is building public services on top of artificial intelligence. And so one example is we worked with France Travail, which is an employment agency in France, to actually help with the matching of employers and of people seeking jobs. And oftentimes people would just connect and they’re looking

Lucilla Sioli

Thanks a lot. I now turn to Jarek, founder of DeepL, which has been a very important part of the project. Jarek has been working since 2017 on AI-based translation tools. There is a lot of linguistic diversity in India as well as in the European Union, so how can AI language models help to overcome this multilingualism issue? I say, of course, we consider it also a benefit, but in administration it can sometimes be a challenge.

Jarek Kutylowski

I would definitely try to not characterize it as an issue. I think it’s something that’s actually pretty beautiful about a lot of the countries that are so multilingual, and there are a lot of differences in how deeply multilingualism is embedded in different countries and in different societies. I think here in India everybody understands it extremely well, but it’s not the only country in the world. There are countries like Canada, there are countries like Switzerland, with whom we’re working a lot in the public sector, that have this intrinsic necessity of being able to connect to their citizens in very many languages, and where partially that communication is even embedded as a part of their constitution. And here, those countries have been struggling over the years, maybe as you have indicated, on how to actually make this happen.

And AI and those kinds of frontier models that we build, and the applications on top of them that are specifically tailored to bridging this communications gap, help a lot. Nowadays, not only in written language, but also in spoken language, enabling real-time conversations with citizens, maybe in a setting when they come into an office and want to get a certain service done. So a lot of options there, but also a lot of complexity, as those use cases that governments have really differ very, very much based on what you’re doing. It’s one challenge to translate legislation into different languages; it’s another challenge if you want to enable those real-time conversations with citizens. Quite a lot of exciting problems to solve.

Lucilla Sioli

Thank you very much. Now I turn to Matteo Valero. You are a professor, but you’re also the director of the Barcelona Supercomputing Center, and you also run an AI factory. So can you maybe explain what the AI factory does and how it can help the transformation of the public sector and of SMEs?

Matteo Valero

Thank you, Lucilla. Good afternoon to everyone. It’s my pleasure to be in India once more. Sorry? Sure. My pleasure to be here. You have an incredible country, believe me. So, thank you for inviting me. And I am going to start 50 years ago, when Seymour Cray produced the first supercomputer. And supercomputers have increased in speed, factor of ten after factor of ten, until now. With these computers, we did simulation and we produced better results in science and engineering. In Europe, every country was alone until, thanks to Roberto Viola, we created EuroHPC. And because we have EuroHPC, we now have a reasonable amount of power in supercomputers. So, if you look at the Top500, out of the first 15, we have 6 in Europe.

And we do science. We do science, and this is very good. So now, because of the data, because of the computers, and especially because of the research of these guys and many others, AI is invading us. It’s changing every activity we have. In my field, it is changing the way we do high-performance processors. In the supercomputing center, let me tell you that we are 1,400 people, and we have 500 people doing hardware and software, using or designing in topics related to AI. There is no question that now the data, the computers, and the algorithms are dominating the world. So what could we do in Europe? We have the supercomputers, but we need to devote more energy in order to…

get AI distributed across all activities. So the idea of the European Commission was to create the AI factories, and now the gigafactories. An AI factory is a platform: hardware and software. But as important as that, it’s co-located where there are people with skills in AI, people with experience in transferring technology to society. So the idea of these AI factories, where the service is free and the people are free, is to connect as much as we can with society, to make a better world. This is the target for us. Obviously, there are many, many possible contributions, and one of them is the administration, and obviously how we can make the citizens happy.

If we make the citizens happy, we are successful, okay? And we can make the citizens happy if we provide them with personalized information, accurate and fast. After that, a second question; I will give examples, but I think… So this is the target for the AI factories, and for the gigafactories it is the same, but competing with the data centers. Because I forgot one thing: what Europe could do is either just use these data on platforms from outside, or create our own platforms and use our own platforms with these data. I think this is the right way to go.

Lucilla Sioli

Thanks a lot. So we have talked about what the models can do, and the computing capacity that is made available. Now, Roberto, I would like to ask you, since you have reflected on and designed all of this, how would you now… Mind your words. I’m your boss. Yes, he is. How would you now help facilitate the uptake of AI by the public sector? Because we have the models, we have the compute capacity, or we are building it, and we’re also building more access and more availability of data sets. But how can we make sure that the public sector actually uses AI?

Roberto Viola

Thank you. Thank you, Lucilla. Good afternoon, everyone. It’s really, for me, a pleasure to be here and to be together with Lucilla, with the three crown jewels of Europe, which all very much represent what is, for us, giving out to citizens, society, innovation. Because you can test Mistral on the web for free. You can use DeepL for free on the web and test it and enjoy it; translation from all Indian languages to European languages, I dare to say, yes. You can test Destination Earth. Destination Earth is the most sophisticated climate digital twin of the world, an AI digital twin. You can replay the climate of the past into the future. You can zoom in on certain areas.

You can have a resolution which was once an error of 200 metres, now 100 metres, for extreme weather events, because there are already two twins which are running: the twin of the climate and the twin of extreme weather events. Again, for free on the web. So this is the first point I want to make. There’s an economist, maybe you know the name, Mr. Solow, who expressed with numbers and, I mean, evidence a paradox: the more people invest in IT and software and other infrastructure, the less the productivity. Actually, there’s no productivity gain in doing that. So it’s called the Solow paradox. It’s a paradox because you as a user, me as a user, experience a much better user experience with a public administration which is more digitized, or a hospital where everything is digital as well, or a doctor which is savvy because it has an AI co-pilot.

But in terms of productivity gain, according to the Solow paradox and the numbers that he has, in a compelling way, put in front of us, there’s no productivity gain. So, many economists… and of course, whoever in this room solves this paradox is up for the Nobel Prize in Economics. So, I mean, the challenge is open. I’m not going to solve it, but I’ll try to answer Lucilla’s question. The reason why many have observed this is that, normally, IT, and that now includes also AI, overlaps what exists. And of course then it becomes very intuitive. Imagine a hospital having all the doctors and nurses and everyone still in the traditional process, doing a bit of paperwork as they do, but also doing it digitally.

And having two systems running in parallel, of course, you can imagine that the productivity doesn’t move much. Now, we have seen some changes during COVID. Why? Because people were secluded and they were forced to use only digital. So in certain areas, sadly, you saw that productivity was in a way more linearly linked with the use of technology. The European Investment Bank has published an econometric study that shows AI has a productivity increase of 4%. Which is not the stellar number that some of the vendors around claim, but compared to the Solow paradox it is not zero. And I think this is a sign, because with AI, especially agentic AI, you see the change.

So you don’t see the overlap anymore, one process with the other, but you can see that there’s a new process, a new way of man-machine interacting and working. But Arthur, before, said something which is the key to all of this. Because if people in the public sector are not empowered, if they don’t understand it, if they are not part of this change, the change will not lead to any productivity gain. Because you can have the most expensive and sophisticated AI software in the world, probably absolutely not needed, because it is better to have bespoke models, open source, that serve the purpose. But even if you have the most sophisticated one, you still get this: if you have someone that refuses to embrace the technology, or in any case you have an organization, a process, that is not ready, not fit for it, then there’s no productivity gain.

Maybe as a citizen you can see the wonders, but in reality the old system becomes two times more expensive and the adoption rate is low, and this is really the real challenge of artificial intelligence, paradoxical as it can be. I think we can proudly say that, as I see it in India and I see it now in Europe, we are developing an ecosystem which is really brilliant, self-reliant, sufficient in terms of good companies producing open source, producing language technology, producing advanced algorithms. We have supercomputing centers offering capacity. All of this goes in a completely different model compared to other models, and it’s all fine. But now we really need to work with the people and with the public administration and to make sure that we

Lucilla Sioli

Okay. So, Arthur, if I were to ask you: how do you get AI accepted by the citizens and also by the public administration? What kind of tools? You already provide, of course, the chatbot Le Chat. But what other tools do you think will be easily accepted?

Arthur Mensch

Well, we’ve turned Le Chat into something that we call Vibe, actually, which is a product where we can delegate tasks. We can delegate tasks fully and delegate workflows. The challenge, and the reason why you don’t see productivity gains when you deploy chatbots in an enterprise, is that basically you’re focusing on an individual productivity gain. That’s the case in an enterprise, but it’s the same in an administration. If someone can actually write a mail faster, it’s not actually changing the way your business is being run. Where things start to change is if you look at a full process, let’s say procurement, for instance, which typically entails multiple touch points with multiple people, and you ask the agent to actually run the process itself. So you move from an individual productivity endeavor to a collective productivity endeavor, and you move from equipping ICs, so individual contributors, to equipping managers: the same way a manager will delegate sometimes to a human, they can delegate sometimes to an AI process. And there are two big challenges associated with that, and they need to be solved through product but also through human interaction, I would say.

The first is that you need the process automations that we run, and we design them: we bring our engineers in, they work with subject matter experts, and they design and write the code using our coding model, and then they deploy the code that is going to run the automation, that’s going to ask questions, that’s going to interact with the tools. The way we design them is to try and get the humans out of the way, because the process only brings productivity gains if you’re not bottlenecked by the humans themselves, if you’re not interrupting them all the time. A good example is coding. If you want to code faster with AI, you need to give the agents tasks and then disappear, and then you come back maybe one hour later and the task is done.

If the thing comes and nags you like five minutes after, maybe you’re doing something else, and so the thing is actually not progressing as fast as it should. You need humans to actually get out of the way of the AI automation if you want this automation to work. And then the second thing that goes with it is that once you’ve done the automation, you need to rethink the organization, because once you’ve automated your procurement process, well, suddenly the people that were actually running the analysis of the procurement need to do something else. And that actually takes some thought around how you’re reorganizing people, how you’re reskilling people, how you’re turning individual contributors into people that will effectively manage AI-operated processes.

And so you, and enterprises, need to actually turn people that were used to doing menial work into people that are delegating that work. And as you know, and as every manager in the room knows, it’s actually fairly hard to learn how to delegate and to move from being an individual contributor to being a delegator. And because the only way AI actually brings you productivity gains is through strong delegation and long execution, well, every one of us needs to become a strong delegator. And so that takes some training. We are not trained to be delegators at school in Europe, I would say, at least in France. And that’s something we will need to learn, and we’ll need to learn it early on.

And we need to reskill the people that need to learn a new way of doing things, a new way of working, and to rewire the brain. I think a very good example, and I’ll stop with that, is that we have our coding tool called Mistral Vibe. And what we see is that if you take very young developers, they use it very quickly. So they learn how to use it; they’re excited. The way they work is inherently wired to using AI, because they are 23, and everything they’ve ever done in coding is through AI. Then you have the very senior people, that are like 35 years old, I guess, or 33, my age. And those are still much better than the agents themselves.

So they know how to design the architectures. They can give some very precise guidance on how the agents need to rewire the code or bring a new feature. And then the problem is the people in between. The people in between, well, they got very attached to writing code. And now they need to rewire. They were very good at writing code, but they need to become something else. And so that’s where reskilling is very, very important. And that applies to software engineering, but it will apply to everybody working on knowledge. And so that’s the three years ahead of us, and we need to work together to make it as smooth as possible.

Lucilla Sioli

Thanks. And indeed, Jarek, you have developed AI agents. So when you think of applying them to the public administration with all these caveats we just heard, what do you think the acceptance is going to be like?

Jarek Kutylowski

Yeah, I think it’s not only about the individuals and how people reskill and how people adopt AI and how they learn to use AI, but it’s also about organizations really rethinking the way that they are working. Thinking about workflows, thinking about processes, whether that’s like general purpose agentic workflows, or whether that’s something that has language at its core, rethinking of how do we do these things. We’ve gone over a couple of decades now improving those processes and maybe putting parts of AI, especially in language processes, that’s been already happening over the last years quite significantly. But we haven’t yet rethought those whole processes. Like, do we need that human review step anymore in a particular use case?

Or is it just enough to use AI? We have organizations who are translating R&D documentation for drug discovery and submitting that to the local regulators, purely translated by AI with the appropriate guardrails. We have organizations that are translating plane maintenance records and using them as the source of truth. So there is a lot of potential in using AI, but you have to think a little bit out of the box and really forget the old ways of doing things. And the same holds for agentic AI, and I think even more so, because the potential is even bigger. So it’s both for the public sector and for businesses: a big redesign of how work really gets done.

And the bigger the organization, the bigger, obviously, the inertia that is out there. And the public sector tends to be the largest organization in any country. So the challenge is even bigger there.

Lucilla Sioli

Thanks. And so, Matteo, the computing center in Barcelona is quite specialized also in applications for the public sector, for health care. So what do you see as the main applications that are being developed on the basis of demand?

Matteo Valero

…teach the young people to understand the problem and to propose solutions. Thank you.

Lucilla Sioli

So, Roberto, you heard the challenges in terms of acceptance and implementation in the public sector, which were sometimes maybe the skills are not very strong. So how do you think that policy can really help to enable this transformation?

Roberto Viola

I think policy needs to be in tune with the transformation. So, in a way, as I was trying to say before, if you invent a digital bureaucracy, it’s still a bureaucracy. It’s digital, but still it’s a bureaucracy, and you then have a bureaucrat and a digital, AI-agent bureaucrat. I mean, it would be very simple for the geniuses on this panel to produce an AI bureaucrat. And I’m sure AI can do bureaucracy even better than us, much better than us. Regulation-generating bots, yes, that would be super useful. Or regulation-correcting bots, that would be good. So you see, I think we need to be disruptors also on the legislation side and look at things with completely different eyes.

And for this, let me say that there is one thing that really shows a striking similarity between Europe and India: this idea of believing in the public stack. So the idea that you can actually be managing your identity, your attributes, your capacity to sign, to timestamp, to actually exchange these attributes in an open-source and an open model. This is for people and for businesses. And in this way, the state is in your hand. You actually have the bureaucracy under control, because the bureaucracy, it’s you. So, if you actually reverse the logic, from the citizen going to an office, that’s what you referred to, to the office going to the citizen, with, of course, all sorts of nice agents, push notifications, attestations, then you re-engineer the state.

So, my point is: if we dare, and I dare to say in India you are daring, in Europe we are daring, you can actually redesign the paradigm. And if you do that, then creativity is really at work, because there can be many different agents, many different ideas on how to improve processes. Thanks.

Lucilla Sioli

Now we have only very little time left, but before leaving, since we have these four geniuses, I would like to ask each of you for maybe a very last thought on innovation in the public sector and how you can contribute.

Arthur Mensch

I think public research is very important. In particular, I think partnerships between private companies and public efforts are actually something that works, because doing research takes infrastructure, and infrastructure takes capital. And so I think that’s the way we can actually accelerate together.

Matteo Valero

I would say that AI is a dual-use technology, and we need to look for the good use. In this direction, I think we can do a lot in Europe, because, as Roberto said, we have the infrastructure, and then we need a little more: to invest a little more, and then to define common projects. Because don’t forget that if you look at the power at the national level, between the States and China they have more than 80% of the computing, more than 80% of the people, and more than 80% of the investment. So what can we do? One possibility would be an alliance with India. I think it would be very good to have an alliance between Europe and India on this topic. We, as BSC, have an alliance with C-DAC, the supercomputing centre, and we have one with the Indian Institute of Science in Bengaluru.

And also, financed by the Commission, we have a very good project that we are very happy to collaborate with you on. Thank you.

Lucilla Sioli

Now, time is up, but I would like to hear, very shortly, a last thought from Roberto and Jarek.

Jarek Kutylowski

Yeah, I think we can build, and we will build, from the commercial side, from the business side, amazing products that are driving a lot of value creation in the AI space. I think that’s clear. And we’re going to be trying to do that in a way, of course, that our users and our customers can be really delighted by those products. But I think there is a lot of work that the public sector can do in terms of bringing the importance of adopting this technology to the broad base of the population. I think both the German and also the other European governments are aware of that, and, from all of the conversations that we had here, the European and Indian governments do understand it, but we should not underestimate this challenge.

And I think there needs to be a very strong partnership between businesses and the public sector on driving that. Thanks.

Lucilla Sioli

Thank you.

Roberto Viola

I am one of the few that has been at all three summits: Bletchley, this one, and last year in Paris. And of course the size already gives you an idea of how things have changed. In the discussion room at Bletchley Park we were 20, including the leaders. I mean, that gives you an idea. Now, the point is, I’m so happy to be here, because what I always thought a little bit is true: there’s not one future for AI and technology. And it is not written. It is not written. The thousands and thousands of people that participated in the summit this year will write the future. So those that tell you there’s only one way, I mean, there’s only one scale,

And the rest of the world should watch and applaud. and I mean adapt to it absolutely I mean this summit shows and application of AI in public service what India is doing, what Europe is trying to do shows there are many futures and as I was trying to say before the future is in our hand

Lucilla Sioli

Thanks a lot, and with these very intelligent and smart sentences, let me thank the speakers. And thanks a lot for your participation. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Arthur Mensch
7 arguments · 163 words per minute · 1176 words · 431 seconds
Argument 1
AI can automate complex, fragmented processes and help with IT legacy issues in government administration
EXPLANATION
Arthur argues that generative AI allows delegation of tasks and automation of complex processes that involve multiple people and tools, addressing IT legacy problems. States face similar challenges to enterprises with inefficient processes, talent pressure from retiring employees, and knowledge management issues.
EVIDENCE
Examples include deploying horizontal platforms for use cases around procurement and writing reports
MAJOR DISCUSSION POINT
AI Applications in Public Sector Transformation
Argument 2
AI enables building public services like job matching platforms for employment agencies
EXPLANATION
Arthur describes how AI can create visible public services that directly benefit citizens. This involves building applications on top of artificial intelligence that provide tangible value to the public.
EVIDENCE
Worked with France Travail (employment agency in France) to help with matching job employers and people seeking jobs
MAJOR DISCUSSION POINT
AI Applications in Public Sector Transformation
AGREED WITH
Jarek Kutylowski, Matteo Valero, Roberto Viola
Argument 3
AI productivity gains require moving from individual to collective productivity through full process automation and delegation
EXPLANATION
Arthur argues that chatbots focusing on individual productivity don’t create significant gains. Real change happens when you automate full processes like procurement that involve multiple touchpoints and people, moving from equipping individual contributors to equipping managers who can delegate to AI processes.
EVIDENCE
Example of coding – if you want to code faster with AI, you need to give tasks and disappear for an hour, not be interrupted every five minutes
MAJOR DISCUSSION POINT
Challenges in AI Adoption and Productivity
AGREED WITH
Jarek Kutylowski, Roberto Viola
DISAGREED WITH
Jarek Kutylowski
Argument 4
Successful AI implementation requires humans to get out of the way of automation and organizations to rethink workflows completely
EXPLANATION
Arthur emphasizes that AI automation only brings productivity gains if humans don’t bottleneck the process by constantly interrupting it. Organizations need to reorganize people, reskill them, and turn individual contributors into managers of AI-operated processes.
EVIDENCE
Example of procurement process automation – once automated, people who were running procurement analysis need to do something else and be retrained
MAJOR DISCUSSION POINT
Challenges in AI Adoption and Productivity
Argument 5
People need to be reskilled from individual contributors to delegators who can manage AI-operated processes
EXPLANATION
Arthur argues that AI productivity gains come through strong delegation and long execution, requiring everyone to become strong delegators. This is challenging because people aren’t trained to be delegators in school, particularly in Europe/France.
EVIDENCE
Uses their coding tool Mistral Vibe as example – very young developers (23) adapt quickly, very senior people (35+) can give precise guidance, but people in between who were good at writing code struggle to rewire and become something else
MAJOR DISCUSSION POINT
Skills and Human Adaptation
AGREED WITH
Roberto Viola, Lucilla Sioli
Argument 6
Different age groups adapt differently to AI tools, with very young and very senior people adapting better than those in between
EXPLANATION
Arthur observes that very young developers use AI tools quickly because they’ve only ever coded with AI, while very senior people can provide architectural guidance. The challenge is with people in between who were attached to writing code and need to rewire their approach.
EVIDENCE
Specific example with Mistral Vibe coding tool showing different adaptation patterns across age groups
MAJOR DISCUSSION POINT
Skills and Human Adaptation
Argument 7
Partnerships between private companies and public research efforts are important for accelerating AI development
EXPLANATION
Arthur believes that collaboration between private companies and public efforts is effective because research requires infrastructure, and infrastructure requires capital. This partnership model can help accelerate development together.
MAJOR DISCUSSION POINT
Future of AI Development
Jarek Kutylowski
3 arguments · 148 words per minute · 703 words · 284 seconds
Argument 1
AI language models can bridge multilingual communication gaps in government services, enabling real-time conversations with citizens
EXPLANATION
Jarek argues that multilingualism should not be characterized as an issue but as something beautiful. AI and frontier models help countries with constitutional requirements for multilingual communication to connect with citizens in many languages, both in written and spoken form.
EVIDENCE
Examples of countries like Canada and Switzerland working with public sector on multilingual communication; organizations translating R&D documentation for drug discovery and maintenance records using AI
MAJOR DISCUSSION POINT
AI Applications in Public Sector Transformation
AGREED WITH
Arthur Mensch, Matteo Valero, Roberto Viola
Argument 2
Organizations need to forget old ways of doing things and redesign processes, with larger organizations facing greater inertia
EXPLANATION
Jarek emphasizes that successful AI adoption requires completely rethinking workflows and processes, not just improving existing ones. Organizations must question whether human review steps are still needed or if AI alone with appropriate guardrails is sufficient.
EVIDENCE
Examples of organizations translating drug discovery documentation and maintenance records purely with AI, using appropriate guardrails
MAJOR DISCUSSION POINT
Challenges in AI Adoption and Productivity
AGREED WITH
Arthur Mensch, Roberto Viola
DISAGREED WITH
Arthur Mensch
Argument 3
Strong partnership between businesses and public sector is needed to drive AI adoption across the broad population
EXPLANATION
Jarek acknowledges that while businesses will build amazing AI products that create value, there’s important work for the public sector in bringing AI adoption to the broad population base. This challenge should not be underestimated and requires strong collaboration.
EVIDENCE
References conversations showing that European and Indian governments understand this challenge
MAJOR DISCUSSION POINT
Future of AI Development
Matteo Valero
4 arguments · 139 words per minute · 685 words · 294 seconds
Argument 1
AI factories provide free platforms and expertise to connect technology with society and make citizens happy through personalized, accurate, and fast information
EXPLANATION
Matteo explains that AI factories are platforms with hardware and software, co-located with people who have AI skills and experience in technology transfer. The goal is to provide free services to connect with society and make citizens happy through personalized, accurate, and fast information.
EVIDENCE
Barcelona Supercomputing Center has 1,400 people, with 500 working on AI-related topics; mentions the creation of EuroHPC and that 6 of the world’s top 15 supercomputers are in Europe
MAJOR DISCUSSION POINT
AI Applications in Public Sector Transformation
AGREED WITH
Arthur Mensch, Jarek Kutylowski, Roberto Viola
DISAGREED WITH
Roberto Viola
Argument 2
Europe has built significant supercomputing capacity through EuroHPC, with 6 of the top 15 supercomputers globally
EXPLANATION
Matteo describes how Europe moved from individual country efforts to collaborative EuroHPC initiative, resulting in significant supercomputing power. This infrastructure foundation is crucial for AI development and scientific advancement.
EVIDENCE
References the history from Seymour Cray’s first supercomputer 50 years ago to the current TOP500 rankings, which show 6 of the top 15 supercomputers in Europe
MAJOR DISCUSSION POINT
European-Indian Collaboration and Infrastructure
Argument 3
An alliance between Europe and India in AI would be beneficial, as both believe in public digital infrastructure
EXPLANATION
Matteo suggests that given the concentration of computing power, people, and investment in the US and China (over 80% each), Europe and India should form an alliance. He notes existing collaborations and projects between European and Indian institutions.
EVIDENCE
Barcelona Supercomputing Center has alliances with the C-DAC supercomputing centre and the Indian Institute of Science in Bengaluru, with Commission-financed projects
MAJOR DISCUSSION POINT
European-Indian Collaboration and Infrastructure
AGREED WITH
Speaker 1, Roberto Viola
Argument 4
AI is dual-use technology requiring focus on beneficial applications and common projects between nations
EXPLANATION
Matteo emphasizes that AI can be used for good or bad purposes, and there’s a need to focus on beneficial applications. He advocates for more investment and common projects, particularly given the competitive landscape with other global powers.
EVIDENCE
References the need for Europe to invest more and create common projects, noting the dominance of US and China in computing power, people, and investment
MAJOR DISCUSSION POINT
Future of AI Development
Roberto Viola
5 arguments · 136 words per minute · 1,332 words · 587 seconds
Argument 1
The Solow paradox shows that IT investment often doesn’t increase productivity because new systems overlap with existing ones rather than replacing them
EXPLANATION
Roberto explains economist Solow’s paradox that despite IT investment improving user experience, productivity doesn’t increase because digital systems often run parallel to traditional processes rather than replacing them. However, he notes that AI shows a 4% productivity increase according to European Investment Bank studies.
EVIDENCE
Example of hospitals having both traditional and digital processes running in parallel; COVID showed productivity gains when people were forced to use only digital systems; European Investment Bank econometric study showing 4% AI productivity increase
MAJOR DISCUSSION POINT
Challenges in AI Adoption and Productivity
AGREED WITH
Arthur Mensch, Jarek Kutylowski
Argument 2
Public sector empowerment and understanding is crucial – without people embracing technology, there are no productivity gains
EXPLANATION
Roberto emphasizes that even the most sophisticated AI software won’t deliver productivity gains if people in the public sector don’t understand it, aren’t empowered, or aren’t part of the change. The old system becomes twice as expensive with low adoption rates.
EVIDENCE
Notes that you can have the most expensive AI software but if people refuse to embrace it or the organization isn’t ready, there’s no productivity gain
MAJOR DISCUSSION POINT
Skills and Human Adaptation
AGREED WITH
Arthur Mensch, Lucilla Sioli
DISAGREED WITH
Matteo Valero
Argument 3
Policy should focus on redesigning state paradigms, moving from citizens going to offices to offices going to citizens through digital agents
EXPLANATION
Roberto argues that policy needs to be disruptive and look at things with different eyes. Instead of creating digital bureaucracy, the focus should be on reversing the logic so that offices come to citizens through agents, push notifications, and attestations, essentially re-engineering the state.
EVIDENCE
Mentions the similarity between Europe and India in believing in public digital infrastructure for managing identity, attribution, and capacity to sign and timestamp
MAJOR DISCUSSION POINT
AI Applications in Public Sector Transformation
AGREED WITH
Arthur Mensch, Jarek Kutylowski, Matteo Valero
Argument 4
Both Europe and India are building self-reliant AI ecosystems with open source models and advanced algorithms
EXPLANATION
Roberto proudly states that both regions are developing brilliant, self-reliant ecosystems with good companies producing open source solutions, language technology, and advanced algorithms, along with supercomputing centers offering capacity. This represents a different model compared to other global approaches.
EVIDENCE
References the ecosystem of companies, supercomputing centers, and the different model being developed compared to other regions
MAJOR DISCUSSION POINT
European-Indian Collaboration and Infrastructure
AGREED WITH
Speaker 1, Matteo Valero
Argument 5
There are multiple possible futures for AI, not just one predetermined path, and these futures are being written by current participants
EXPLANATION
Roberto emphasizes that AI’s future is not predetermined or written by a single dominant force. He contrasts the small group at Bletchley Park with the thousands participating in the current summit, showing how the future is being shaped by diverse participants rather than a single narrative.
EVIDENCE
Compares three AI summits – Bletchley Park with 20 people including leaders, versus thousands participating in the current summit; mentions applications of AI in public service by India and Europe
MAJOR DISCUSSION POINT
Future of AI Development
Speaker 1
1 argument · 105 words per minute · 308 words · 175 seconds
Argument 1
India and EU should collaborate more closely to apply AI technology better, not just in India but across the global south
EXPLANATION
Speaker 1 expresses a desire to see increased collaboration between India and the European Union to build capacity for better AI technology application. The vision extends beyond just India to include broader impact across the global south region.
MAJOR DISCUSSION POINT
European-Indian Collaboration and Infrastructure
AGREED WITH
Matteo Valero, Roberto Viola
Lucilla Sioli
5 arguments · 128 words per minute · 506 words · 236 seconds
Argument 1
Linguistic diversity in both India and the EU can be a challenge in administration despite being a benefit
EXPLANATION
Lucilla acknowledges that while multilingualism is considered a benefit in both India and the European Union, it can sometimes present challenges in administrative contexts. She frames this as an issue that AI language models could help address.
MAJOR DISCUSSION POINT
AI Applications in Public Sector Transformation
Argument 2
The EU has developed models, computing capacity, and data sets that need to be utilized by the public sector
EXPLANATION
Lucilla notes that while Europe has built the foundational elements for AI implementation – including models, computing capacity, and data availability – the challenge now is ensuring actual adoption and use by public sector organizations. She emphasizes the need to facilitate uptake rather than just building infrastructure.
EVIDENCE
References the models, compute capacity being built, and more access to data sets
MAJOR DISCUSSION POINT
Challenges in AI Adoption and Productivity
Argument 3
Policy makers need to focus on facilitating AI uptake by the public sector beyond just building infrastructure
EXPLANATION
Lucilla emphasizes that having the technical components (models, computing capacity, datasets) is not sufficient – there needs to be deliberate policy focus on ensuring these tools are actually adopted and used effectively by public sector organizations. This represents a shift from infrastructure building to implementation facilitation.
MAJOR DISCUSSION POINT
Challenges in AI Adoption and Productivity
Argument 4
AI tools need to be easily accepted by both citizens and public administration
EXPLANATION
Lucilla raises the critical question of acceptance, recognizing that technical capability alone is insufficient if the tools are not accepted by end users. She specifically asks about what types of tools beyond chatbots would be easily accepted, indicating concern about user adoption barriers.
EVIDENCE
References Le Chat chatbot as an example of existing tools
MAJOR DISCUSSION POINT
Skills and Human Adaptation
AGREED WITH
Arthur Mensch, Roberto Viola
Argument 5
The Barcelona Supercomputing Center specializes in applications for public sector and healthcare
EXPLANATION
Lucilla highlights the specialized focus of the Barcelona Supercomputing Center on developing AI applications specifically for public sector use and healthcare applications. She seeks to understand what applications are being developed based on actual demand from these sectors.
MAJOR DISCUSSION POINT
AI Applications in Public Sector Transformation
Agreements
Agreement Points
AI requires fundamental organizational transformation rather than just technology overlay
Speakers: Arthur Mensch, Jarek Kutylowski, Roberto Viola
AI productivity gains require moving from individual to collective productivity through full process automation and delegation
Organizations need to forget old ways of doing things and redesign processes, with larger organizations facing greater inertia
The Solow paradox shows that IT investment often doesn’t increase productivity because new systems overlap with existing ones rather than replacing them
All three speakers agree that successful AI implementation requires complete process redesign and organizational transformation, not just adding AI tools to existing workflows. They emphasize that overlaying new technology on old processes leads to inefficiency and lack of productivity gains.
Human empowerment and reskilling are critical for AI success
Speakers: Arthur Mensch, Roberto Viola, Lucilla Sioli
People need to be reskilled from individual contributors to delegators who can manage AI-operated processes
Public sector empowerment and understanding is crucial – without people embracing technology, there are no productivity gains
AI tools need to be easily accepted by both citizens and public administration
There is strong consensus that technology alone is insufficient – people must be empowered, trained, and willing to embrace AI for it to deliver benefits. This includes both public sector workers and citizens who will use AI-enabled services.
Europe-India collaboration in AI development is beneficial and necessary
Speakers: Speaker 1, Matteo Valero, Roberto Viola
India and EU should collaborate more closely to apply AI technology better, not just in India but across the global south
An alliance between Europe and India in AI would be beneficial, as both believe in public digital infrastructure
Both Europe and India are building self-reliant AI ecosystems with open source models and advanced algorithms
All speakers support stronger Europe-India collaboration in AI, recognizing shared values around public digital infrastructure and the strategic importance of building alternative AI ecosystems to dominant global powers.
AI can significantly improve public services and citizen experience
Speakers: Arthur Mensch, Jarek Kutylowski, Matteo Valero, Roberto Viola
AI enables building public services like job matching platforms for employment agencies
AI language models can bridge multilingual communication gaps in government services, enabling real-time conversations with citizens
AI factories provide free platforms and expertise to connect technology with society and make citizens happy through personalized, accurate, and fast information
Policy should focus on redesigning state paradigms, moving from citizens going to offices to offices going to citizens through digital agents
There is unanimous agreement that AI can transform public services by making them more accessible, personalized, and efficient. This includes multilingual support, automated processes, and reversing traditional service delivery models.
Similar Viewpoints
Both emphasize that AI success requires complete workflow redesign and allowing AI systems to operate without constant human interruption. They agree that organizations must fundamentally rethink processes rather than just improving existing ones.
Speakers: Arthur Mensch, Jarek Kutylowski
Successful AI implementation requires humans to get out of the way of automation and organizations to rethink workflows completely
Organizations need to forget old ways of doing things and redesign processes, with larger organizations facing greater inertia
Both speakers highlight Europe’s success in building AI infrastructure and capabilities, emphasizing the importance of self-reliant technological ecosystems as an alternative to other global models.
Speakers: Matteo Valero, Roberto Viola
Europe has built significant supercomputing capacity through EuroHPC, with 6 of the top 15 supercomputers globally
Both Europe and India are building self-reliant AI ecosystems with open source models and advanced algorithms
Both recognize that human factors and adaptation challenges are critical barriers to AI success, with different groups facing different challenges in adopting new technologies.
Speakers: Arthur Mensch, Roberto Viola
Different age groups adapt differently to AI tools, with very young and very senior people adapting better than those in between
Public sector empowerment and understanding is crucial – without people embracing technology, there are no productivity gains
Unexpected Consensus
The importance of delegation skills over technical skills
Speakers: Arthur Mensch, Roberto Viola
People need to be reskilled from individual contributors to delegators who can manage AI-operated processes
Public sector empowerment and understanding is crucial – without people embracing technology, there are no productivity gains
It’s unexpected that both a tech CEO and a policy maker would emphasize soft skills like delegation over technical AI skills. This suggests a mature understanding that AI success depends more on management and organizational capabilities than technical expertise.
The need for disruptive policy approaches rather than incremental digitization
Speakers: Roberto Viola, Arthur Mensch, Jarek Kutylowski
Policy should focus on redesigning state paradigms, moving from citizens going to offices to offices going to citizens through digital agents
AI productivity gains require moving from individual to collective productivity through full process automation and delegation
Organizations need to forget old ways of doing things and redesign processes, with larger organizations facing greater inertia
Unexpected consensus between a policy maker and tech entrepreneurs that incremental digitization is insufficient – all agree that radical transformation is necessary. This alignment suggests recognition that half-measures will fail.
Multiple futures for AI development are possible and desirable
Speakers: Roberto Viola, Matteo Valero
There are multiple possible futures for AI, not just one predetermined path, and these futures are being written by current participants
AI is dual-use technology requiring focus on beneficial applications and common projects between nations
Unexpected philosophical alignment between a policy maker and academic researcher on rejecting technological determinism. Both emphasize human agency in shaping AI’s future rather than accepting a single dominant narrative.
Overall Assessment

The speakers demonstrate remarkable consensus on key issues: the need for fundamental organizational transformation rather than technology overlay, the critical importance of human empowerment and reskilling, the value of Europe-India collaboration, and AI’s potential to transform public services. There is also unexpected alignment on the importance of delegation skills, the need for disruptive rather than incremental approaches, and the possibility of multiple AI futures.

Very high level of consensus with significant implications for AI policy and implementation. The agreement spans technical, organizational, and policy dimensions, suggesting a mature and holistic understanding of AI transformation challenges. This consensus provides a strong foundation for collaborative action between Europe and India in developing alternative AI governance models focused on public benefit rather than purely commercial interests.

Differences
Different Viewpoints
Individual vs. collective productivity approach to AI implementation
Speakers: Arthur Mensch, Jarek Kutylowski
AI productivity gains require moving from individual to collective productivity through full process automation and delegation
Organizations need to forget old ways of doing things and redesign processes, with larger organizations facing greater inertia
Arthur emphasizes moving from individual to collective productivity through delegation and process automation, while Jarek focuses on completely redesigning organizational processes and workflows. Arthur’s approach is more about changing management structures, while Jarek advocates for fundamental process reimagining.
Infrastructure-first vs. human-centered approach to AI adoption
Speakers: Matteo Valero, Roberto Viola
AI factories provide free platforms and expertise to connect technology with society and make citizens happy through personalized, accurate, and fast information
Public sector empowerment and understanding is crucial – without people embracing technology, there are no productivity gains
Matteo emphasizes building AI infrastructure and platforms first to serve citizens, while Roberto stresses that human empowerment and understanding must come first, arguing that even sophisticated AI won’t work without people embracing the technology.
Unexpected Differences
Characterization of multilingualism in administration
Speakers: Lucilla Sioli, Jarek Kutylowski
Linguistic diversity in both India and the EU can be a challenge in administration despite being a benefit
AI language models can bridge multilingual communication gaps in government services, enabling real-time conversations with citizens
Lucilla frames multilingualism as a challenge that needs to be overcome, while Jarek explicitly rejects this characterization, calling multilingualism ‘something that’s actually pretty beautiful’ and refusing to characterize it as an issue. This disagreement is unexpected because they’re discussing the same AI solutions but have fundamentally different perspectives on the problem definition.
Overall Assessment

The main disagreements center on implementation approaches rather than fundamental goals. Speakers disagree on whether to prioritize infrastructure building or human empowerment, individual vs. collective productivity strategies, and how to characterize multilingual challenges.

Low to moderate disagreement level. The speakers share common goals of AI adoption in public sector and Europe-India collaboration, but differ on tactical approaches and problem framing. These disagreements are constructive and complementary rather than conflicting, suggesting different aspects of the same challenges rather than fundamental opposition. The implications are positive – multiple valid approaches can be pursued simultaneously.

Partial Agreements
All speakers agree that human adaptation and organizational change are essential for AI success, but they disagree on the approach: Arthur focuses on reskilling people to become delegators, Jarek emphasizes completely redesigning processes, and Roberto stresses the need for empowerment and understanding first.
Speakers: Arthur Mensch, Jarek Kutylowski, Roberto Viola
People need to be reskilled from individual contributors to delegators who can manage AI-operated processes
Organizations need to forget old ways of doing things and redesign processes, with larger organizations facing greater inertia
Public sector empowerment and understanding is crucial – without people embracing technology, there are no productivity gains
Both speakers agree on the value of Europe-India collaboration and self-reliant ecosystems, but Matteo focuses on the strategic necessity due to US-China dominance, while Roberto emphasizes the philosophical alignment around public digital infrastructure and multiple AI futures.
Speakers: Matteo Valero, Roberto Viola
An alliance between Europe and India in AI would be beneficial, as both believe in public digital infrastructure
Both Europe and India are building self-reliant AI ecosystems with open source models and advanced algorithms
Takeaways
Key takeaways
AI can significantly transform public sector operations by automating complex processes, improving efficiency, and enabling better citizen services through personalized, accurate, and fast information delivery
Successful AI implementation requires moving from individual productivity gains to collective process automation, with humans learning to delegate tasks to AI systems rather than just using AI as assistive tools
The main barrier to AI productivity gains is organizational inertia and the tendency to overlay new AI systems on existing processes rather than redesigning workflows entirely
Reskilling is critical – people need to transition from individual contributors to managers who can effectively delegate to AI agents, with different age groups showing varying adaptation rates
AI language models can effectively bridge multilingual communication gaps in government services, enabling real-time conversations with citizens in diverse linguistic environments
Europe and India share similar approaches to AI development, both believing in public digital infrastructure and self-reliant AI ecosystems with open source models
The future of AI is not predetermined – there are multiple possible paths, and current participants in AI development are actively shaping these futures
Resolutions and action items
Strengthen partnerships between private companies and public research institutions to accelerate AI development through shared infrastructure and capital
Develop common AI projects between European and Indian institutions, building on existing collaborations like those between Barcelona Supercomputing Center and Indian institutes
Focus policy efforts on redesigning state paradigms to move from citizens going to offices to offices going to citizens through digital agents
Invest more in AI infrastructure and define common projects to compete effectively with larger nations that have more computing power and investment
Create strong partnerships between businesses and public sector to drive AI adoption across the broad population base
Unresolved issues
How to solve the Solow paradox – the challenge that increased IT investment often doesn’t translate to measurable productivity gains
How to effectively reskill the ‘middle group’ of workers who are neither very young (naturally adaptable) nor very senior (architecturally skilled) but are attached to traditional ways of working
How to overcome organizational inertia in large public sector organizations that resist process redesign
How to ensure AI remains focused on beneficial applications while managing its dual-use nature
How to scale successful AI implementations from pilot projects to widespread adoption across government services
Suggested compromises
Balance between automation and human oversight by designing AI systems that can operate independently for extended periods while still involving humans at strategic decision points
Combine individual and collective productivity approaches by starting with individual AI tools but gradually moving toward full process automation
Blend top-down policy changes with bottom-up adoption by having leadership redesign processes while simultaneously training workers in delegation skills
Mix proprietary and open-source AI solutions, using sophisticated models where needed but preferring bespoke, open-source models that serve specific public sector purposes
Thought Provoking Comments
The challenge and the reason why you don’t see productivity gains when you deploy chatbots in enterprise is that basically you’re focusing on an individual productivity gain… when the thing starts to change if you look at a full process… you move from an individual productivity endeavor to a collective productivity endeavor
This comment fundamentally reframes how we should think about AI implementation – shifting from individual tools to systemic process transformation. It challenges the common assumption that AI adoption should show immediate productivity gains and explains why many implementations fail.
This insight redirected the conversation from technical capabilities to organizational transformation challenges. It prompted subsequent speakers to discuss workflow redesign and the need for rethinking entire processes rather than just adding AI layers to existing systems.
Speaker: Arthur Mensch
There’s an economist, maybe you know the name, Mr. Solow, that he expressed with numbers… a paradox. The more people invest in IT and software and other infrastructure, the less the productivity… But in terms of productivity gain, according to the Solow paradox… there’s no productivity gain.
This reference to the Solow Paradox provides crucial economic context that challenges the assumption that technology automatically leads to productivity gains. It introduces academic rigor to the discussion and explains why AI adoption faces systemic challenges.
This comment elevated the discussion from anecdotal observations to economic theory, providing a framework for understanding why AI implementation is challenging. It influenced subsequent speakers to focus more on the human and organizational factors rather than just technical solutions.
Speaker: Roberto Viola
We haven’t yet rethought those whole processes. Like, do we need that human review step anymore in a particular use case? Or is it just enough to use AI?… It’s a big redesign of how work gets really done. And the bigger the organization, the obviously bigger the inertia that is out there.
This comment identifies the core challenge of AI adoption – organizational inertia and the need for fundamental process redesign rather than incremental improvements. It highlights why large organizations, especially public sector ones, struggle with AI implementation.
This observation shifted the focus to organizational change management and the specific challenges faced by large institutions. It connected the technical discussion to practical implementation barriers and influenced the conversation toward policy and change management solutions.
Speaker: Jarek Kutylowski
If you invent a digital bureaucracy, it’s a bureaucracy. It’s digital, but still it’s a bureaucracy… So you see, I think we need to be also from the legislation side disruptors and look at things with completely different eyes.
This comment cuts to the heart of digital transformation failures – the tendency to digitize existing processes rather than reimagine them. It challenges policymakers to be disruptors rather than just adopters of technology.
This insight reframed the role of government from passive adopter to active disruptor, influencing the discussion toward more radical reimagining of public services. It connected technical capabilities to policy innovation and governance transformation.
Speaker: Roberto Viola
There’s not one future for AI and technology. And it is not written… those that tell you there’s only one way, I mean, there’s only one scale. And the rest of the world should watch and applaud… this summit shows… there are many futures and as I was trying to say before the future is in our hand
This closing comment challenges the dominant narrative of AI development being controlled by a few major players and asserts agency for different regions and approaches. It’s a powerful statement about technological sovereignty and alternative development paths.
This comment provided a unifying theme for the entire discussion, connecting the technical and implementation challenges discussed earlier to broader questions of technological independence and diverse approaches to AI development. It elevated the conversation from operational concerns to strategic vision.
Speaker: Roberto Viola
Overall Assessment

These key comments fundamentally shifted the discussion from a technical focus on AI capabilities to a deeper examination of systemic challenges in AI adoption. The conversation evolved through three phases: initial focus on what AI can do, followed by analysis of why implementation fails (Solow Paradox and organizational inertia), and finally toward a vision of alternative futures and the need for fundamental reimagining rather than incremental digitization. The most impactful insights challenged conventional wisdom about technology adoption and reframed the discussion around human factors, organizational change, and the need for disruptive rather than additive approaches to AI implementation in the public sector.

Follow-up Questions
How to solve the Solow paradox – why increased IT investment doesn’t lead to productivity gains
This is a fundamental economic challenge that affects AI adoption. Viola noted that whoever solves this paradox deserves a Nobel Prize in Economics, indicating it’s a critical research area for understanding AI’s true economic impact
Speaker: Roberto Viola
How to effectively reskill mid-career professionals (particularly those aged 25-35) to work with AI
Mensch identified a specific gap where mid-career professionals struggle more than both young developers and senior architects in adapting to AI tools, suggesting this demographic needs targeted reskilling approaches
Speaker: Arthur Mensch
How to train people to become effective delegators for AI systems
Since AI productivity gains require strong delegation skills, but people aren’t trained as delegators in school, this represents a critical educational and training gap that needs to be addressed
Speaker: Arthur Mensch
How to redesign organizational processes to fully leverage AI rather than just overlaying AI on existing processes
This addresses the core challenge of achieving real productivity gains from AI by fundamentally rethinking workflows rather than just adding AI to current processes
Speaker: Jarek Kutylowski
How to establish and strengthen the Europe-India alliance in AI and supercomputing
Valero suggested this alliance is necessary to compete with larger powers like China and the US, and mentioned existing collaborations that could be expanded
Speaker: Matteo Valero
How to effectively bring AI adoption to the broad population through public sector initiatives
This addresses the challenge of ensuring AI benefits reach all citizens, not just early adopters, requiring coordinated public-private partnerships
Speaker: Jarek Kutylowski
How to redesign state bureaucracy to be citizen-centric using AI and digital identity systems
This involves fundamentally reimagining government services where the state comes to the citizen rather than citizens going to government offices, enabled by AI agents and digital identity
Speaker: Roberto Viola
How to ensure AI is used for beneficial purposes given its dual-use nature
This addresses the critical challenge of governing AI development and deployment to maximize benefits while minimizing risks
Speaker: Matteo Valero

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Survival Tech Harnessing AI to Manage Global Climate Extremes

Session at a glance: summary, keypoints, and speakers overview

Summary

This panel discussion focused on the application of artificial intelligence to address India’s critical challenges in weather, climate, and sustainability. The conversation was moderated by Akshara Kaginalkar and featured experts from government agencies, academia, venture capital, and technology companies, including representatives from the Ministry of Earth Sciences, NDMA, ANRF, NVIDIA, and the newly established India Research Organization (IRO).


Professor Amit Sheth explained that IRO was conceived following discussions with the Prime Minister to develop original AI research focused on creating small, agile, domain-specific models rather than relying on large foundational models. The organization aims to build hyper-local solutions for extreme weather prediction, health, and pharmaceutical applications. Dr. M. Ravichandran from the Ministry of Earth Sciences emphasized the need to integrate physics-based numerical models with AI for time-series analysis, particularly for predicting high-impact weather events like cloudbursts that current models cannot forecast accurately.


Several participants highlighted the importance of multimodal AI approaches that combine satellite data, sensor networks, and ground observations. Dr. Shivkumar Kalayanaraman discussed how generative AI could enable simple camera-based forecasting systems, while Karthik Kashinath from NVIDIA emphasized the potential of transfer learning and super-resolution techniques to achieve hyperlocal predictions. The discussion revealed that while global-scale AI weather models are showing promising results, significant work remains to achieve operational robustness at hyperlocal scales relevant to Indian conditions.


A key theme throughout the discussion was the critical importance of public-private partnerships and the need for trusted, accessible early warning systems. Manish Bhardwaj from NDMA stressed the requirement for reliable disaster preparedness systems that can serve vulnerable populations, while Sandeep Singhal highlighted the monetization challenges and opportunities in climate-related AI applications. The panel concluded that successful implementation requires collaboration between government agencies, research institutions, and private sector partners to create scalable solutions that balance economic growth with environmental protection.


Keypoints

Overall Purpose

This panel discussion was part of an AI Summit focused on exploring how artificial intelligence can address India’s critical challenges in weather, climate, and sustainability. The session brought together government officials, researchers, venture capitalists, and technology experts to discuss practical applications, funding mechanisms, and collaborative approaches for implementing AI solutions in climate science and disaster management.


Major Discussion Points

Integration of AI with Physics-Based Models: Multiple speakers emphasized the need to blend traditional numerical weather prediction models with AI approaches rather than replacing one with the other. The focus was on creating hybrid systems that leverage AI’s strength in time-series analysis and pattern recognition while maintaining the spatial accuracy of physics-based models, particularly for predicting extreme events like cloudbursts.


Hyperlocal and Multi-Modal Forecasting: There was significant discussion about developing AI systems capable of providing highly localized predictions (down to 1km resolution or finer) by combining multiple data sources including satellite imagery, ground sensors, and even simple camera-based systems. The emphasis was on creating “small, agile models” rather than large foundational models for specific regional applications.


Early Warning Systems and Disaster Preparedness: The conversation highlighted the critical need for trusted, accessible early warning systems that can reach vulnerable populations. This included discussion of voice-based applications that provide actionable guidance to different user groups (farmers, urban residents, etc.) and the integration of AI with existing disaster management infrastructure.


Funding and Public-Private Partnerships: Significant attention was given to funding mechanisms through ANRF (Anusandhan National Research Foundation), venture capital, and the need for collaborative approaches between government agencies, academia, and private sector. The discussion covered both grant funding for research and translation funding for scaling solutions.


Data Accessibility and Cross-Disciplinary Collaboration: Speakers emphasized the importance of opening up weather and climate data to researchers from diverse backgrounds beyond traditional meteorology, encouraging “jugaad” (innovative problem-solving) approaches that combine human behavioral elements with physical constraints in predictive models.


Overall Tone

The discussion maintained an optimistic and collaborative tone throughout, with participants showing enthusiasm for AI’s potential while acknowledging current limitations. The conversation was solution-oriented and practical, focusing on immediate actionable steps rather than theoretical possibilities. There was a strong emphasis on India-specific solutions and leveraging the country’s strengths in data availability and human resources. The tone remained consistently forward-looking, with participants building on each other’s ideas and expressing willingness to collaborate across sectors and disciplines.


Speakers

Speakers from the provided list:


Akshara Kaginalkar – Panel moderator/host


M. Ravichandran – Ministry of Earth Sciences Secretary, leading weather, climate and sustainability initiatives


Amit Sheth – Founder/founding team member of IRO (India Research Organization), AI researcher with focus on enterprise AI and foundational models


Manish Bhardwaj – Secretary of NDMA (National Disaster Management Authority), disaster management


Shivkumar Kalayanaraman – NRF (ANRF) CEO, research funding and AI for science initiatives


Sandeep Singhal – Venture capitalist with investment portfolios in energy transition and mobility


Dev Niyogi – Professor at UT Austin (University of Texas at Austin), affiliated to IIT Roorkee, founding team member of IRO, digital twin and AI-driven modeling frameworks


Praphul Chandra – Dean R&D at Atria University Bangalore, heading Center for Excellence for Data Sciences


Karthik Kashinath – Director of Center for Excellence for Data Sciences, Distinguished Scientist and Engineer at NVIDIA, AI model development (Earth2, weather and climate models)


Audience – Unidentified audience member who asked about insurance and climate risk


Additional speakers:


Dr. Shiv Kumar – Mentioned in introduction as NRF CEO and supporter of AI for science (appears to be the same person as Shivkumar Kalayanaraman)


Dr. Kartik – Mentioned in introduction as director of Center for Excellence for Data Sciences, distinguished scientist at NVIDIA (appears to be the same person as Karthik Kashinath)


Professor Seth – Referenced in transcript but appears to be referring to Amit Sheth


Full session report: comprehensive analysis and detailed insights

This panel discussion at an AI Summit brought together leading experts from government agencies, academia, venture capital, and technology companies to explore how artificial intelligence can address India’s critical challenges in weather, climate, and sustainability. Moderated by Akshara Kaginalkar, the session featured representatives from the Ministry of Earth Sciences, NDMA, ANRF, NVIDIA, and the newly established India Research Organisation (IRO), creating a comprehensive dialogue between policy makers, researchers, funders, and technologists.


The Genesis of India’s AI Climate Initiative

Professor Amit Sheth opened the discussion by explaining the origins of IRO, which emerged from a December 2023 meeting with the Prime Minister. The organisation was conceived to develop India’s unique AI capabilities rather than following Western or Chinese approaches. IRO’s strategy focuses on creating small, agile, domain-specific models rather than large foundational models, with earth science, health, and pharmaceuticals as key verticals. This approach aims to avoid the “baggage” of large language models with unknown training data, instead building original research capabilities that can serve India’s specific needs whilst supporting the global startup ecosystem.


The Complexity Challenge: From Elephants to Ants

Dr. M. Ravichandran from the Ministry of Earth Sciences provided perhaps the most memorable metaphor of the discussion, describing how weather prediction has evolved from tracking “elephants” (large-scale weather patterns) to needing to observe “ants sitting on elephants” (hyperlocal phenomena). This vivid illustration captured the fundamental challenge facing modern meteorology: climate change has altered spatial and temporal scales, requiring unprecedented granularity in forecasting to predict both large-scale patterns and hyperlocal phenomena simultaneously.


Traditional physics-based numerical models excel at spatial predictions, whilst AI demonstrates superior capabilities in time-series analysis, necessitating hybrid approaches that integrate both methodologies. The challenge is particularly acute for extreme weather events like cloudbursts, which current models cannot predict effectively. Dr. Ravichandran emphasised that neither purely numerical models nor standalone AI systems can address these phenomena adequately, and stressed the critical importance of trust, validation, and verification in AI/ML forecasting systems for operational deployment.


The solution lies in blending both approaches whilst leveraging India’s strength in data availability—over 150 years of meteorological records from IMD. Crucially, Dr. Ravichandran advocated for opening this data to researchers from diverse disciplines beyond traditional meteorology, recognising that interdisciplinary approaches could unlock new insights for weather prediction challenges.


Technological Innovations and Multimodal Approaches

The discussion revealed significant enthusiasm for emerging AI technologies that could democratise weather prediction. Dr. Shivkumar Kalayanaraman highlighted the potential of generative AI to enable simple camera-based forecasting systems, where cameras pointed at the sky could provide one to four-hour forecasts. This approach, combined with dropping costs of multispectral cameras and expanding low Earth orbit satellite networks, could create comprehensive sensor networks that complement existing infrastructure like Mission Mausam.
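As an illustration of how such a camera-based nowcast can work in principle, here is a minimal sketch (entirely hypothetical, not the system discussed in the session): estimate the dominant cloud drift between two consecutive frames by a brute-force shift search, then advect the latest frame forward by that drift.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Brute-force search for the (dy, dx) drift that best maps prev onto curr."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.abs(np.roll(prev, (dy, dx), axis=(0, 1)) - curr).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def nowcast(prev, curr, steps=1):
    """Advect the latest frame by the estimated drift, `steps` intervals ahead."""
    dy, dx = estimate_shift(prev, curr)
    return np.roll(curr, (dy * steps, dx * steps), axis=(0, 1))

frame = np.zeros((16, 16))
frame[4:8, 4:8] = 1.0                        # a toy "cloud" patch
later = np.roll(frame, (1, 2), axis=(0, 1))  # same cloud, drifted by (1, 2)
pred = nowcast(frame, later)                 # one-step-ahead forecast
print(np.array_equal(pred, np.roll(later, (1, 2), axis=(0, 1))))  # → True
```

An operational system would use optical flow and learned models rather than this exhaustive search, but the persistence-plus-advection structure is the same starting point.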


A key insight emerged around the distinction between data fusion and insight fusion. Rather than attempting the “mind-bogglingly complex” task of fusing raw data from multiple sources, the focus should shift to fusing insights from different modalities. This approach could significantly simplify the technical challenges whilst improving forecasting accuracy across spatial and temporal scales.
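A toy sketch of the distinction (all modality names, probabilities, and weights below are hypothetical, not from the session): each modality's own model first reduces its raw data to an insight, here a rain probability, and only those insights are combined.

```python
# Toy illustration of insight fusion: combine per-modality forecasts
# (already-derived insights) instead of fusing the raw data itself.

def fuse_insights(forecasts, weights):
    """Weighted average of per-modality event probabilities.

    forecasts: dict mapping modality -> probability of, say, rain next hour
    weights:   dict mapping modality -> trust weight (e.g. from past skill)
    """
    total = sum(weights[m] for m in forecasts)
    return sum(forecasts[m] * weights[m] for m in forecasts) / total

# Each modality's own model has already reduced its raw data to an insight:
forecasts = {
    "sky_camera": 0.80,  # nowcasting model on camera frames
    "satellite": 0.60,   # cloud-motion estimate from imagery
    "nwp": 0.40,         # physics-based numerical model output
}
weights = {"sky_camera": 2.0, "satellite": 1.0, "nwp": 1.0}

print(round(fuse_insights(forecasts, weights), 2))  # → 0.65
```

A real system would learn the trust weights from verification statistics rather than fixing them by hand, but the point stands: only low-dimensional insights cross modality boundaries.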


Dr. Karthik Kashinath from NVIDIA emphasised the importance of transfer learning for adapting knowledge between data-rich and data-sparse regions whilst maintaining local uniqueness. The success of global-scale AI weather models at 25-kilometre resolution, driven by benchmark datasets like ERA5 from ECMWF, demonstrates the potential for similar breakthroughs at hyperlocal scales. However, this requires creating appropriate benchmark datasets and metrics for fine-scale applications, similar to how ImageNet revolutionised computer vision.


Early Warning Systems and Disaster Management

Manish Bhardwaj from NDMA provided crucial insights into the practical applications of AI for disaster management. India’s vulnerability to multiple hazards—cyclones, tsunamis, earthquakes, landslides, flash floods, and cloudbursts—across its vast geography and population requires sophisticated early warning capabilities. Whilst India has achieved remarkable success in cyclone prediction and evacuation, achieving zero mortality milestones, other hazards present greater challenges.


The emerging pattern of cascading, multi-hazard scenarios—where cloudbursts lead to landslides and flash floods—requires AI systems capable of analysing multiple data sources simultaneously. Current sensor networks cannot map every vulnerable location, particularly in the Himalayan regions where development continues despite risks. AI offers the potential to enhance granularity and accuracy of early warning signals by integrating terrestrial, satellite, and sensor data for predictive forecasting and improved nowcasting.


The vision extends to creating Digital Public Goods (DPG) that provide trusted early warning for all citizens at low cost. This requires hybrid models connecting AI with physical sensor networks and satellite data from various alert-generating agencies, creating resilient early warning systems that can reach vulnerable populations effectively.


Funding Mechanisms and Market Opportunities

Dr. Shivkumar Kalayanaraman, speaking as ANRF CEO, outlined comprehensive support for AI climate research through multiple funding mechanisms. The organisation provides grant funding for not-for-profit research entities and operates a ₹1 lakh crore (₹1 trillion) RDI fund specifically for private sector applications. Key programmes include “AI for Science and Engineering” with a dedicated Weather and Climate track, and the upcoming “Leapfrog Demonstrators for Societal Innovation” programme, which emphasises collaborative proposals addressing real societal problems with demonstrable impact.


ANRF’s strategy explicitly encourages consortium-based applications rather than individual proposals, recognising that complex challenges require interdisciplinary collaboration. The organisation announced partnerships with IBM and IIT Delhi for an AI for Science and Engineering hackathon, along with collaboration with MOES on Mission Mausam, demonstrating practical implementation of these funding strategies.


Sandeep Singhal brought a venture capital perspective, emphasising the critical importance of government partnerships for startups working in climate AI. The economic model recognises that climate represents both a public good requiring government support and a commercial opportunity where businesses affected by weather events are willing to pay for predictive services. This dual approach enables sustainable business models whilst ensuring broad societal benefit, with philanthropic capital increasingly available for large-scale programmes.


Human-Centric AI and Cultural Innovation

Professor Dev Niyogi introduced a uniquely Indian perspective through the concept of “Jugaad”—the cultural practice of innovative problem-solving that enables people to “beat the system” and adapt to challenging circumstances. This human element has been notoriously difficult to incorporate into predictive models, yet represents a crucial factor in how communities respond to weather events.


AI’s capability to map human behavioural elements and societal aspects alongside physical constraints offers the potential to create more accessible and accurate prediction systems. The discussion emphasised moving from weather output generation to decision support systems. As Professor Niyogi noted, “People don’t need weather. They need weather that can help them make a decision.” This paradigm shift from data provision to actionable intelligence represents a fundamental reframing of weather services.


Energy Integration and Sustainability Applications

Dr. Praphul Chandra highlighted the critical intersection between AI weather forecasting and India’s renewable energy transition. As the country shifts from fossil fuels to solar-dominated renewable energy, hyperlocal weather predictions become essential for grid management. He mentioned a demonstration by a local university team combining digital public infrastructure from the Ministry of Power with AI models for energy applications, illustrating the practical potential of these integrated approaches.


This application demonstrates how AI climate solutions can simultaneously address multiple challenges—energy security, grid stability, economic efficiency, and environmental sustainability—whilst creating commercially viable business models that support broader adoption.


Technical Challenges and Collaborative Frameworks

The discussion identified several critical technical challenges requiring continued research. Small data fine-tuning emerged as a key breakthrough requirement—the ability to fine-tune large foundation models to specific use cases with minimal local data. This capability would unlock AI applications in data-sparse regions whilst maintaining the benefits of large-scale model training.
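A minimal sketch of the small-data fine-tuning idea, under loudly stated assumptions: the "backbone" below is a stand-in fixed random projection, where a real system would use the frozen feature extractor of a large pretrained weather model; only a tiny linear head is then fit on a handful of local samples.

```python
import numpy as np

# Illustrative sketch of small-data fine-tuning. W_FROZEN stands in for a
# pretrained backbone; its weights are fixed and never retrained.
W_FROZEN = np.random.default_rng(0).normal(size=(4, 8))

def frozen_backbone(x):
    # Stand-in "pretrained" features for inputs of shape (n, 4).
    return np.tanh(x @ W_FROZEN)

def fit_head(x_local, y_local, ridge=1e-3):
    """Fit only a small ridge-regression head on frozen backbone features."""
    feats = frozen_backbone(x_local)
    A = feats.T @ feats + ridge * np.eye(feats.shape[1])
    return np.linalg.solve(A, feats.T @ y_local)

def predict(x, head):
    return frozen_backbone(x) @ head

# A handful of "local" samples stands in for one station's observations.
rng = np.random.default_rng(1)
x_local = rng.normal(size=(12, 4))   # 12 samples, 4 input variables
y_local = x_local.sum(axis=1)        # toy local target
head = fit_head(x_local, y_local)
err = ((predict(x_local, head) - y_local) ** 2).mean()
print(f"training MSE: {err:.3f}")    # the head adapts using only 12 samples
```

The design point is that the backbone never changes; only the 8-parameter head is trained, which is why a dozen local samples can suffice.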


The creation of benchmark datasets and metrics for hyperlocal applications parallels the role ImageNet played in computer vision development. Without standardised benchmarks, progress in hyperlocal weather prediction will remain fragmented and difficult to measure.


The discussion revealed strong consensus around the necessity of collaborative approaches spanning government agencies, research institutions, and private sector partners. IRO’s partnerships with NVIDIA, Google, and Qualcomm, announced at this AI Summit, exemplify this collaborative approach, combining technological capabilities with domain expertise and implementation resources.


Challenges and Future Outlook

Despite the optimistic tone and clear pathways forward, significant challenges remain. Cloudburst prediction continues to elude current modelling capabilities, representing a critical gap in disaster preparedness. The validation and verification of AI-enabled forecasting systems requires substantial work to build operational trust, particularly for life-critical applications.


However, the discussion demonstrated remarkable alignment among participants on fundamental principles and approaches. The convergence around hybrid AI-physics models, collaborative partnerships, hyperlocal applications, and data accessibility suggests a mature understanding of both challenges and solutions.


The emphasis on India-specific solutions whilst contributing to global knowledge represents a balanced approach that leverages local strengths—extensive historical data, young talent, and cultural innovation—whilst participating in international scientific collaboration. This strategy positions India to become a leader in AI climate applications rather than merely adopting technologies developed elsewhere.


Conclusion

This panel discussion revealed a sophisticated understanding of how AI can address India’s climate and weather challenges whilst contributing to global solutions. The convergence of government policy support, research funding, technological capabilities, and market opportunities creates an unprecedented opportunity for breakthrough applications.


The shift from traditional weather prediction to decision-support systems, combined with the integration of human behavioural factors and cultural innovation, represents a uniquely comprehensive approach to climate AI. The emphasis on hyperlocal applications, collaborative partnerships, and sustainable business models provides a realistic pathway for scaling solutions from research demonstrations to operational systems serving millions of citizens.


The success of this initiative will depend on continued collaboration across sectors, sustained investment in research and infrastructure, and the ability to translate technical capabilities into trusted, accessible services that enhance resilience and support sustainable development.


Session transcript: complete transcript of the session
Akshara Kaginalkar

top-down approaches in terms of finding the AI solutions, India’s critical problems and weather and climate is a major vertical. So welcome, sir. We have Dr. Ravichandran, he doesn’t need any introduction, but he’s the Ministry of Earth Sciences Secretary and everything and anything under weather and climate and sustainability, sir, is heading it and we look forward to your contribution. We have Mr. Singhal, who is a venture capitalist and he will give a very, very important aspect about how funding and economy is going to drive the solutions in AI for climate. Professor Dev Niyogi, he is professor from UT Austin, that is University of Texas at Austin. Also, he’s affiliated to IIT Roorkee and now one of the founding team of IRO.

Again, sir doesn’t need any introduction. We have Dr. Shiv Kumar is NRF CEO and very, very great supporter of now AI for science. And we look forward to your support as well as your inputs on how can we proceed on this. And we have Mr. Manish Bhardwaj, who has a very critical role in India as the secretary of NDMA, disaster management. And we have Professor Praphul Chandra. He’s heading the Center for Excellence for Data Sciences, as well as he’s Dean R&D, Atria University, Bangalore. And we have Dr. Kartik, who is the director of the Center for Excellence for Data Sciences. He’s a distinguished scientist and engineer, NVIDIA. And he has played a major role in the very famous AI models, which all of us are hearing.

And they are, you know, changing the scenario of modeling and the way science is going to happen. So welcome. So I look forward to your contribution. Oh, okay. Can we stand just here? Okay. So before we open up the panel, I just wanted to have a very quick question to Professor Seth in terms of what was the objective, what we are looking at when you started IRO as a, you know, in India, we wanted to have this type of a research organization. So if you can quickly tell us about what was the thought process behind IRO and what do you foresee?

Amit Sheth

So the idea of IRO kind of was initiated when I had a chance to meet the PM in December of 2023. I was asked to come and discuss with him. He is always very curious about technology and so he wanted to hear about the ideas on AI. Since I had multiple interactions on research and AI with him during his CM time, this was a fantastic opportunity for me to meet and kind of discuss where India can shine and not necessarily follow the West or China in what we need to do. And so I presented both the core foundational AI focus on enterprises, not necessarily consumer and web, and some of the areas of where we can make big economic and social impact, as well as we can support the startup ecosystem where AI can empower deep AI technology that drives the global products from India.

So that was a broad idea. And so IRO currently is developing original work on building very agile, small, specific models. In this context, for example, if you want to make a model for serving extreme weather related issue that is hyper local, then all the spatial temporal aspects, all the relative modeling aspects, all the prediction algorithms, those are the things that we will bring in. But we will not be building on the top of large language models or so-called foundational model, which come with a lot of baggage. We don’t know what kind of data it has been trained on, many other things. So original research in creating new, small, agile models. And so it will be a platform on the top of India AI structure to be able to create models.

And one area in which we would love to create models, we have technology expertise here, Dev and many other people. And we can, you know, so earth science, including disaster, including, you know, sustainability issues is one of the verticals. Other two are health and pharma. Pharma, we have very strong partnership with Indian Pharma Alliance and the 23 major pharma, which is 80% of India’s pharma, you know, kind of output. And similarly, we are working with some health partners and all. But here you see the potential partners that we could have in making impact into the sustainability and health area. So thank you.

Akshara Kaginalkar

We would like to now start with one open question and then we’ll have individual questions because, I’m very sorry, the time is very short. The whole format is actually we had a one-day full workshop and we had to squeeze it in. So to start, yeah, one disclaimer that it’s not my personal thing, but I may request you to finish in time. Definitely would like to hear a lot from all of you, but due to constraint of time. So the first, opening question, what we’d like is for each of you to say what would be one AI application or a discovery that would excite you about AI helping in this domain of climate as well as extreme events and sustainability as a broader thing, because everything is driven by weather and climate.

We have energy, we have health, we have economics and we have agriculture, many, many aspects of it. So we’d like to see what you foresee and which one development you would say will help us. And we’ll start with you, sir.

M. Ravichandran

When you talk about the weather, of course, it now depends on various applications. So when we are doing the weather forecast, earlier we just used to tell, suppose, how the elephant is going: I’m able to see that elephant, how it is going, and I’m able to tell that tomorrow it will come here. But now the problem is, because of climate change and other things, the space and time scales have changed. Now we have to see, on the elephant, some ant is sitting. That ant, how it is going, we want to know. So we want to see the elephant plus the ant. So I want to see two things. One is time series. The other one is spatial.

If anything on spatial, I think the physics-based numerical model is doing a better job. But if you want to go for time series, locally, then AI is better. So we need to integrate, or we need to fuse, both together in order to understand the local weather at a fine scale. And if you want to go for, suppose, a cloudburst, you cannot do it only with a numerical model, and with AI alone also not. So we need to blend both. That is more important. So we want to go for high-impact weather events, how to predict them, especially cloudbursts and other things. We do not know how to predict them. So that is why we are looking at whether AI can help or not.

That is one of the objectives. Thank you.

Manish Bhardwaj

I fully agree with what Ravichandran sir has just said. From the early warning point of view, the idea is to have a DPG sort of asset for the public so that we are able to disseminate early warning for all. So the idea is to have trusted early warning for all, to be given to the citizens at low cost, and this is where AI can definitely play a supporting role. It cannot be purely AI. It has to be a hybrid model which has to be connected with the physical systems of the various sensor fabric and the satellite data which is available to us from various alert-generating agencies, but to have a source of trusted and reliable and resilient early warning systems wherein I definitely foresee AI playing a great, great role.

Thank you.

Shivkumar Kalayanaraman

Yeah, I think I’ll just double down on the multimodal models that are coming out. I mean, one is the time series model. There are spatial models, and I will also mention that today with generative AI you can just put a camera pointed to the sky and then you can actually not only see the patterns of clouds, you can forecast one hour ahead, two hours ahead, even four hours ahead. Make it an IR camera or make it some other multispectral camera, when all the costs are dramatically dropping. So you can imagine a network of sensors that complements also the great work that’s being done in Mission Mausam and so on. And plus now with the low Earth orbit satellites going up and also having much more Earth observability, I think the opportunity is to fuse insights as opposed to fusing data.

I mean, data fusion is painfully, mind-bogglingly complex. But now there's an opportunity to take insights from A, insights from B, and fuse them across modes, and also to forecast across these modes. I think that's a wonderful opportunity, and it will have a huge impact. Once you integrate that into nowcasting and other systems, we can have a great amount of impact. The other dimension is, of course, AI helping in the discovery of new materials, in simulations and so on. These have wonderful opportunities. And of course, as you know, the Nobel Prize for Chemistry went to somebody from an AI background.
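The camera-based nowcasting idea can be illustrated with a toy sketch: estimate the cloud motion between two successive frames, then advect the latest frame forward in time. This is a minimal, purely illustrative NumPy example (brute-force shift search on a synthetic blob), not any production nowcasting system.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Estimate integer (dy, dx) cloud motion between two frames by
    brute-force search over small shifts, minimising mean squared error."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.mean((np.roll(prev, (dy, dx), axis=(0, 1)) - curr) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def nowcast(curr, shift, steps=1):
    """Advect the current frame by the estimated motion vector."""
    dy, dx = shift
    return np.roll(curr, (dy * steps, dx * steps), axis=(0, 1))

# Toy example: a bright "cloud" blob drifting one pixel right per frame.
frame0 = np.zeros((16, 16))
frame0[6:9, 4:7] = 1.0
frame1 = np.roll(frame0, (0, 1), axis=(0, 1))

shift = estimate_shift(frame0, frame1)   # recovered motion vector
frame2_pred = nowcast(frame1, shift)     # one-step-ahead forecast
```

Real systems replace the brute-force search with optical flow or learned advection, but the shape of the problem is the same: infer motion, then extrapolate.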

Sure.

Sandeep Singhal

So I will put a consumer lens to this. Sirs have brought up the point about what technology is needed. With what is happening with voice agents right now, I think there is a need for a simple voice framework or voice app that allows you not just to send information, but actually to create a resilience approach for the person, who can literally click a button and be told: in the next week, these are the things you need to do to survive whatever is happening from a climate perspective. Or what do you need to do in the next month? So there is a forecasting aspect to it.

But more importantly, how does it integrate with my life? Do I need to stay at home? If I'm a farmer, what do I do? If I'm a labourer, what do I do? It's that ability to bring the forecast into my day-to-day life and allow me to act a certain way because of what I expect to see in the environment around me. And that includes daily air quality.

Dev Niyogi

I'll just add one term. You all know this word, jugaad; this is a very Indian thing. There is a framework that is mathematically feasible, that we can model very well, that follows equations and the laws of nature. And then there is a human element: we always beat the system and make things happen. Mapping that has been very difficult in predictive models, and this is where I think AI is coming into play. It brings the human dimensions and the societal aspects together with the physical constraints, and that is what is most exciting about it: it will become much more accessible. We had also heard about agentic AI, and now I have heard about the ant AI, thanks to you.

Praphul Chandra

I'm going to pick up where Professor Niyogi and Dr. Shivkumar left off. We work across several AI foundation models, in biology, in materials, and we have looked at foundation models in weather. The breakthrough that I am most anxious to see is what we call small-data fine-tuning. What that means is that these large foundation models are fairly general in their applicability, and, as Professor Sheth was saying, when you have to fine-tune them for a specific use case you still need data. How small can that data be? Can you use small data to fine-tune large foundation models? If we achieve that breakthrough, it has applications across the multiple domains we talked about.
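One way to read "small-data fine-tuning" is linear probing: keep the large pretrained model frozen and fit only a tiny head on the few labelled examples available. The sketch below is purely illustrative, with a fixed random feature map standing in for a real foundation model; all sizes and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen foundation model: a fixed feature map, never updated.
W_frozen = rng.normal(size=(8, 64))

def backbone(x):
    """Map raw inputs (n, 8) to a frozen 64-d representation."""
    return np.tanh(x @ W_frozen)

# "Small data": only 20 labelled examples for the target task.
X_small = rng.normal(size=(20, 8))
y_small = X_small[:, 0] * 2.0 + 0.1 * rng.normal(size=20)

# Fine-tuning here = fitting only a lightweight ridge-regression head
# on the frozen features; the backbone weights are untouched.
F = backbone(X_small)
lam = 1e-2
head = np.linalg.solve(F.T @ F + lam * np.eye(64), F.T @ y_small)

def predict(x):
    return backbone(x) @ head

train_mse = float(np.mean((predict(X_small) - y_small) ** 2))
```

The point of the technique is that the 64-parameter head is all that the small dataset has to constrain; the representation does the heavy lifting.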

Karthik Kashinath

A lot has already been shared, which is very exciting on many fronts. One thing I would like to see used more in practice is transfer learning. Some regions of the world are data-rich and others are data-sparse, but the problems are shared across the planet. The physics of weather and climate is the same no matter where you are on the planet, yet there is uniqueness at hyperlocal scales. If we can transfer-learn efficiently from one region to another, with constraints on what exactly we are trying to transfer, I think that would be very impactful.
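A minimal sketch of what region-to-region transfer can look like: learn the shared relationship where data is plentiful, then estimate only a small local correction from the handful of samples available in the data-sparse region. All coefficients below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
shared_physics = np.array([1.5, -0.7, 0.3])  # same everywhere, by assumption

# Data-rich region A: plenty of samples of the shared relationship.
X_A = rng.normal(size=(1000, 3))
y_A = X_A @ shared_physics
w_A, *_ = np.linalg.lstsq(X_A, y_A, rcond=None)  # model trained on A

# Data-sparse region B: same physics plus a hyperlocal bias, few samples.
X_B = rng.normal(size=(10, 3))
y_B = X_B @ shared_physics + 2.0

# Transfer: keep w_A fixed; learn only the scalar local correction from B.
local_offset = float(np.mean(y_B - X_B @ w_A))

def predict_B(x):
    """Region-B prediction = transferred model + learned local correction."""
    return x @ w_A + local_offset
```

Ten samples would never support refitting the whole model, but they are ample for a one-parameter correction; that asymmetry is the essence of the transfer argument.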

Akshara Kaginalkar

Thank you, Dr. Karthik. I think we have a mic here. We saw it right from the spatial view: as sir said, from the sky we can see everything, and I think that matters. I remember once discussing with sir; he said even the dew has an effect on the immediate temperature, and that can affect your surroundings. So from small to big is definitely there, and in AI also we should look from small to big. That leads now to the next round of questions, very specific to the areas in which all of you are working and have a lot of influence. That is where we would like to hear from you, to get a direction on which way we can go.

At the end of this panel, can we consolidate and identify the three to four immediate things we can do? With that in mind, I would like to ask Dr. Ravichandran: how can India's national capabilities in AI research, technology development and, very importantly, human resources evolve to enable the transition from current physics-driven prediction systems to AI-enabled, user-specific decision systems? What are the bottlenecks, and how can we overcome them?

M. Ravichandran

As pointed out, we do have the capability, and one of our strengths is the data. The data volumes are huge nowadays, because we have IMD's legacy of more than 150 years of observations, as well as newly available data. Now, how do we utilize this data? We have so many young minds, but we have not fully utilized them, because each one can interpret the data in a different way; finally it has to come out as a concrete solution. When we talk about AI and weather, why do we want to go for AI in the first place? Because the numerical model carries a lot of assumptions, and because of those assumptions, the error grows.

So, number one: can we reduce that error growth with AI? When the initial condition is better, you can predict better, so we have to prepare the initial condition in a better way by reducing the error. Many people are already working in AI, in different fields. I think we need to pull those resource people into our domain, so that they can look at the data differently, and use it to minimize the error and reduce the uncertainties. There are also various techniques to improve the forecast; nowadays, downscaling is one of the important ones.

At the local scale, the large-scale model falls short, and AI can downscale in a better way. Suppose we want a one-kilometre-resolution weather forecast: how can we do it? We need more and more minds working on it. And I think we need to open up the data, so that different people can come and work on it. One more important thing: when we talk about AI/ML, trust is crucial, as you pointed out. We need better trust in the forecast system, which means validation and verification are also very important, and AI/ML can help there.

So our capabilities are huge, but we need to utilize them together with the strength of our data. Nowadays even people from biology are working in AI/ML; the same people can contribute here. One more important point: we are always wedded to the same mindset, thinking this is the only way, but there are multiple ways. That is why people from other disciplines should also look at this. Because it is data-driven, another discipline can look at it differently, and we can find some new pathway or way forward. That may be one of the things we can look at.
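The downscaling mentioned here is usually benchmarked against simple interpolation: an AI super-resolution model has to beat this baseline before it is useful. A self-contained sketch of the bilinear baseline follows; the grids and temperature values are invented for illustration.

```python
import numpy as np

def bilinear_upsample(field, factor):
    """Baseline downscaling: bilinear interpolation of a coarse 2D grid.
    AI super-resolution models are typically compared against this."""
    ny, nx = field.shape
    yc, xc = np.arange(ny), np.arange(nx)
    yf = np.linspace(0, ny - 1, ny * factor)
    xf = np.linspace(0, nx - 1, nx * factor)
    # Interpolate along rows first, then along columns.
    tmp = np.array([np.interp(xf, xc, row) for row in field])
    out = np.array([np.interp(yf, yc, col) for col in tmp.T]).T
    return out

coarse = np.array([[10.0, 12.0],
                   [14.0, 16.0]])          # e.g. a coarse temperature grid
fine = bilinear_upsample(coarse, 4)        # 4x finer grid
```

The interpolated field is smooth by construction; the whole promise of AI downscaling is to recover the sharp, hyperlocal structure that interpolation cannot.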

Akshara Kaginalkar

That's a very important point, because we tend to look at weather only from the physics or weather angle, and looking beyond that is very important. That leads to what matters for the disaster management services, which depend heavily on extreme events, and managing those is very difficult. So, how do you foresee the adoption of AI for infrastructural preparedness in disaster management, and especially for reducing the severity of impacts on vulnerable populations? Cities, and those with access to resources, can perhaps handle it, but we have a large vulnerable population. How do you see AI helping in the last-mile application?

Manish Bhardwaj

Very apt questions. As you are all aware, India is vulnerable to multiple hazards: cyclones, tsunamis, earthquakes, landslides, flash floods, even GLOFs (glacial lake outburst floods). Looking at the vast geography and the population that can be impacted, it is essential from the disaster management point of view that we have adequate preparedness and early warning capabilities. Secondly, though the country, with a whole-of-government approach, has undertaken various mitigation measures, we can only mitigate the effect of disasters. So we have to keep the population safe by making the early warning system capabilities of the highest order.

so that we are able to minimize the loss of lives. Now, this is a very important challenge. Various agencies, particularly, as Ravichandran sir has rightly said, the IMD, have over time developed enormous capability: we can now predict a cyclone's path and trajectory very clearly five days ahead of its landfall. So we are able to carry out timely evacuation and repositioning of response teams, which helps in minimizing casualties and even achieving zero-mortality milestones. But there are other hazards. And the way the hazard scenario has unfolded in the last few years, it has become a multi-hazard, cascading scenario in which one hazard leads to others.

So there are incidents of cloudburst, which currently cannot be predicted, because there are various technical issues behind it; but cloudbursts leading to landslides, leading to flash floods, are a serious concern. How do we prepare ourselves given the current state of resources and developments? This is where AI can definitely pitch in. The idea is to take, from the alert-generating agencies, all the data coming from our terrestrial networks, the satellite data and the sensor data, and use it for predictive forecasting, and also to improve nowcasting and increase the granularity of the early warning signal, because there are limits to how many satellite systems we can put in place.

It is not possible to map each and every hill in the vulnerable areas. This is where the complications arise. And since development also has to take place in the vulnerable zones, particularly in the Himalayan zone, the challenge is to use technology to the maximum. What I foresee is that data from multiple sources can be analyzed and used, even with the current sensor network capabilities, to pinpoint accurately the early warning signals for the targeted population. That will help the district and state authorities to carry out timely evacuation, response and relief operations.

This is one field where NDMA in particular is collaborating with multiple national agencies, and the IMD and the Ministry of Earth Sciences are playing a major contributory role in the development of such a DPG. I am very sure that the startup ecosystem in our country has the agility to provide collaborative support to the efforts of NDMA and the national agencies in taking this mission forward. This is where I believe we can definitely increase our early warning capabilities, particularly regarding flash floods, glacial lake outburst floods, lightning and landslides. And we are very hopeful that, with the support of the IMD and the Ministry of Earth Sciences, we can take major steps towards predicting or identifying the most vulnerable areas and potential cloudburst-type situations, so that we are able to warn the public in time.

Akshara Kaginalkar

Thank you, sir. That's an important point, as Dr. Ravichandran said, and you have linked it to the need for data and infrastructure, and the setup we already have, which we saw in the expo: so many people are working on climate and sustainability. How can we put that together and get the best out of it? That leads to a question for Dr. Shivkumar. ANRF is enabling the research ecosystem as well as the product ecosystem. We would like to hear how ANRF is helping in terms of creating AI funds, what advice you can give to the community on developing products, and what sort of support we can expect from ANRF.

Shivkumar Kalayanaraman

Okay. For folks who may not know: how many of you know about ANRF? Maybe I can just get a show of hands. All right, not too many. ANRF is a statutory body of the Government of India, and Dr. Ravichandran is on my board as well. It is a body meant to catalyze research and development funding in India. We have grant funding, and we also have a capital fund called RDI, a one lakh crore fund meant only for the private sector. The grant funding is typically for the not-for-profit research sector, which includes academia and labs,

Section 8 companies and other research entities recognized by DSIR and so on. Our thinking is that we not only have broad-based funding, like what the National Science Foundation does, but also more focused funding in mission mode. So we have a couple of programs that might be of interest. One is AI for Science and Engineering, currently underway, and one of its tracks is AI for Weather and Climate; it's already there. In addition, in about a month we are going to launch a major program called Leapfrog Demonstrators for Societal Innovation.

The idea is that you take a societal problem and then, rather than talking about it, do something about it, and not just something incremental: it should be a leapfrog demonstrator, and a demonstrator, not just a theoretical exercise. These are the kinds of things we're doing. Alongside this, we are also running challenges; we'll be introducing more challenge-mode programs for things we don't see coming bottom-up through our proposal formats. As part of that, we are collaborating deeply. In AI for Science and Engineering, on the Weather and Climate track, we are collaborating with MoES and their Mission Mausam program. So we are linking up to get both the expertise and the data, putting the AI expertise together with the sensor expertise and data, and we hope to collaborate similarly with other parts of the government. I would strongly urge collaboration from NDMA as well at this stage.

So that's the general approach, and it accelerates things. I also want to mention that just two days back we announced a hackathon, an AI for Science and Engineering hackathon, for Weather and Climate actually. It's currently open, done in partnership with IBM and IIT Delhi, and with MoES and others; we have put out datasets and are encouraging work there. In addition, as I said, there is the societal innovation program, which can also admit newer types of proposals where you bring disciplines together and actually go and solve real problems.

So I think that's the nature of what we'll try to do. The RDI fund is meant for translation and scaling. In addition, we also have translation centres, with a program that is open right now. These are the various programs and mechanisms we plan to run. But the goal of all of this is always to focus on impact and work backwards, rather than doing undirected research. We want to drive research in a more directed way towards impact, while at the same time still supporting curiosity-driven, broad-based research. That's the balance we're trying to strike.

Akshara Kaginalkar

How do we get consistent solutions, not only demonstration products but operational ones, with services coming out of them every day? How do you see the public-private partnership coming together?

Shivkumar Kalayanaraman

In all our mission-mode programs, the goal is to accelerate things from a lower technology readiness level, like TRL 1 or 2, to mid-range, like TRL 5 or 6. That is the purpose. As part of all of those programs, we are supporting work at a critical scale, so we are encouraging consortiums to come and bid, or a hub-and-spoke type setup. We are explicitly saying: don't make individual proposals, it has to be collaborative proposals.

In some of our programs, we have put out open IP licensing, so that a company or a startup can partner with academia, pick up the IP and quickly translate it; that encourages rapid translation. So we are introducing IP and other innovations to drive translation, and we will do this in a few more programs. Plus, we have the Translational Research Centres program, which mandates partnership with industry as well. So we are using different mechanisms, all of them driving collaboration. And then there is the RDI fund, the one lakh crore fund; by the time it hits the market, it may become three or four lakh crores.

It is only for industry, but if industry doesn't have the capabilities, they must collaborate with academia and so on, so there will be demand for industry-academia collaboration coming from that side as well. We are attacking the problem from multiple directions, and all of these are meant to encourage collaboration for impact.

Akshara Kaginalkar

That collaboration leads us to industry. As we know, NVIDIA is very much into, and pioneering, many of the models coming in, and Dr. Karthik is part of that model development. Foundational AI weather and climate models such as Earth-2, GraphCast and AIFS, and many more, are now demonstrating good performance at the global scale.

So what further development do you see? Basically, how can we bring the physics into, and interpret it within, the AI models? And validation is very important at the very local scale, as sir has said: we are talking about air quality at 400 metres, or floods at 10 metres, things like that. So what more needs to be done to make these models operationally robust at a hyperlocal scale? Thank you.

Karthik Kashinath

Yeah, that's a rich question, but I'm going to keep it fairly brief, because it could take the next 30 minutes to get through. I'll touch on three things. One is creating the benchmark datasets and benchmark metrics needed to achieve operational quality. If you look at what led to the developments at the global scale at 25-kilometre resolution, it is the ERA5 dataset from ECMWF and the benchmark problems defined on it, like WeatherBench for example. So if we want to get down to hyperlocal scales, which of course depends on the region you're talking about and the types of metrics you care about, it would be very helpful to create the benchmark datasets and associated benchmark metrics that drive towards that.

If we just wind the clock back, the whole deep learning revolution began because of ImageNet, some 12 years ago: they defined benchmark datasets and benchmark metrics that drove the revolution in AI. We can do the same thing at the hyperlocal level. The second thing is to leverage the super-resolution techniques that AI has shown to be very powerful. We're already doing that right now in the Earth-2 program, taking 25-kilometre data and super-resolving it to one kilometre. We've also been doing this in weather and climate for decades with downscaling, the process of taking coarse-resolution simulations and making them high resolution. If we can stretch that even further, down to these hyperlocal scales, I'm fairly confident that the generative AI technologies needed to get us there either already exist or will be invented in the next two to three years.

So I’m hopeful that that will help us get there. Thank you.
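The benchmark metrics referred to here are typically root-mean-square error and the anomaly correlation coefficient, the headline scores in WeatherBench-style evaluation. A minimal sketch, with all numbers invented for illustration:

```python
import numpy as np

def rmse(forecast, truth):
    """Root-mean-square error of a forecast against observations."""
    return float(np.sqrt(np.mean((forecast - truth) ** 2)))

def acc(forecast, truth, climatology):
    """Anomaly correlation coefficient: correlation of forecast and
    observed anomalies relative to climatology."""
    fa, ta = forecast - climatology, truth - climatology
    return float(np.sum(fa * ta) / np.sqrt(np.sum(fa**2) * np.sum(ta**2)))

# Toy 2 m temperature fields (K) at four grid points.
truth = np.array([290.0, 292.0, 295.0, 293.0])
clim  = np.array([291.0, 291.5, 292.0, 292.5])
fcst  = np.array([290.5, 292.5, 294.0, 293.0])

score_rmse = rmse(fcst, truth)
score_acc = acc(fcst, truth, clim)
```

The value of ACC is that a forecast which merely repeats climatology scores zero, so it rewards exactly the skill beyond the baseline that a benchmark is meant to measure.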

Akshara Kaginalkar

I think that's important, and we look forward to it. That's where public-private partnership comes into the picture, because within India we have very different climates across regions, right from north to south, east to west. So having small models for each region can also be part of the future. Once we have the system in place, in terms of what is to be done, and the modelling in place, we need computational power, because all these models still demand a lot. And that comes down to investment, which is where we would like to ask Mr.

Sandeep Singhal: your investment portfolios cover energy transition and mobility. When we speak of weather and climate, it is not just weather and climate; it is broadly everything, cloud, energy, health, all of those things. So when you look at your portfolios, what advice would you give to startups to successfully scale up, in these individual domains as well as in integrated ones?

Sandeep Singhal

In terms of scale-up, the first thing, at least in the climate space, is that partnership with the government is critical, because all the discussions we are having on data and on deployment, the government is the one driving them. So for any of our portfolio companies working in this space, we end up involving the government institutions they would work with, and we build those relationships with ministries at the fund level too, so that we can introduce them to the various government programs. Beyond that, the other piece of advice is that you have to start segmenting the market you are targeting.

There is the general population, and that goes through the government. That funding, as Dr. Shivkumar said, has to come through public-private partnership, because collaboration, I think, is an important word you used. That collaboration is both on the deployment side and on the funding side. It's great to see what the government has done with ANRF and RDI, and the capital that is becoming available. There is also philanthropic capital now becoming available in this space: philanthropists looking at programs at scale and saying, okay, if this program can scale, we'll put money behind it. That's one part. But the other segment is that you have to think about where monetization is possible. There are enough segments where core business is being impacted by weather or other events, and that core business is willing to pay. So you have to segregate the two: in some ways, you are building for a public good, and the distillation of that allows you to build something as a private good and charge for it.

Akshara Kaginalkar

Because now climate is linked very closely to economics; climate and economics are one and the same thing, and it's not just the short term, we have to worry about the next 10, 20, 30 years. That's a very important point, and it leads to how we are preparing ourselves, which brings us to Dr. Praphul. A key challenge for India is balancing economic growth while protecting our natural ecosystems. Can you give an example of a real-world application where AI can enable this transition and help create solutions that strike this balance?

Praphul Chandra

I'm going to pick up on something that Dr. Karthik said and Manish also mentioned, which is the intersection of weather and energy. India is transitioning from a fossil-fuel-based economy to a renewable-energy-powered economy, and renewable energy is dominated by solar. Now, the kind of models becoming available for hyperlocal forecasting are also giving us much more predictive power over how much energy a rooftop solar panel will generate, which is critical for managing the grid. India's grid needs to be digitized, and in fact we have a team from the university here doing a demo that combines digital public infrastructure from the Ministry of Power, the India Energy Stack, with AI models that use weather forecasting to forecast grid loads and enable energy trading between consumers and producers.

Or to do demand flexibility. Demand flexibility is, again, something I see as critically important as we talk about sustainable AI. When you move to a data centre economy, with huge consumption of energy, you need to be able to support dynamic demand flexibility using a combination of AI and public infrastructure. So I think the intersection of AI and energy deserves quite a bit of attention, and we are there to address it.
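A toy sketch of the forecasting-to-flexibility chain described here: convert an hour-ahead irradiance forecast into expected rooftop generation, then schedule a deferrable load into the hours that generation covers. The panel area, efficiency, load size and forecast values are all hypothetical.

```python
def solar_output_kw(irradiance_w_m2, panel_area_m2=10.0, efficiency=0.18):
    """Expected rooftop PV output (kW) from forecast irradiance (W/m^2).
    Parameters are illustrative, not from any real installation."""
    return irradiance_w_m2 * panel_area_m2 * efficiency / 1000.0

# Hour-ahead irradiance forecast (W/m^2), e.g. from a hyperlocal model.
forecast = [0, 150, 420, 760, 890, 610, 240, 0]
gen_kw = [solar_output_kw(g) for g in forecast]

# Naive demand flexibility: run a deferrable 1 kW load only in the hours
# where forecast rooftop generation fully covers it.
load_kw = 1.0
run_hours = [h for h, g in enumerate(gen_kw) if g >= load_kw]
```

Real dispatch adds prices, battery state and uncertainty, but the core loop is the same: the better the hyperlocal forecast, the more load can be shifted with confidence.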

Akshara Kaginalkar

Thank you. See, we have data in place, policies in place, science in place, and now money in place. So what is important is how you deliver these solutions to the stakeholders and end users, and that leads to the question for Professor Dev Niyogi, because he has experience connecting the science to governance and to the actual stakeholders. You have been leading digital twin and AI-driven modelling frameworks. You have done it in Austin, but in India we are all aware of the different types of cities we have. What opportunities do you see in building digital twins that support climate-extreme and disaster management goals, the goals all of us have just deliberated upon?

The challenges are there. The solutions are there. How do you link it?

Dev Niyogi

Right. I have two minutes, it looks like, before we end the session; this is a course I teach over two semesters. What I'll say is that weather is the tragedy of the commons: everyone is affected by it, but no one wants to pay for it. In the same way, when we need institutional investments, the question comes up: how do you make this into a monetizable product? This is where, as Director General Mohapatra mentioned this morning, we can create box models that are very simple, scalable and transferable, and we can create digital twins that are very decision-specific. We don't need to predict every variable at every scale for everything.

If we define why we are creating models and what decision we are going to guide, based on that data-to-decision framework, we can make this into a very intelligent, scalable modelling system. And that, I think, is where the joy of bringing AI, physics, and human decisions and dimensions together comes into the picture. People don't need weather; they need weather that can help them make a decision. We need to move from simply producing weather output to adding something that helps me make an intelligent decision, whatever that may be: a long-term hedge against something, or a short-term decision about whether I walk in the sun or in the shade.

If we achieve that, I think we will transform the manner in which we predict: not for a variable of interest, but for a decision that we want to make. That is where I think digital twins come into the picture. I'll stop there.

Akshara Kaginalkar

So I think the digital twin can be one of our first steps: we can look at the complete AI spectrum, right from monitoring to processing to modelling to reaching out to the end users, and have a complete portfolio of AI applications. This brings us to the end of the session, and we would like to open the floor just for half a minute. I'm very sorry for this format; disclaimer, it's not my doing.

Audience

One word I didn't hear much of was insurance. Climate risk typically reflects in insurance rates becoming very high, or your house simply going uninsured, which is happening in Northern California and Florida. I'm not sure how predominant this is in India, but ultimately people have to stay where they are; it's difficult to move. So how do you marry the two?

Sandeep Singhal

Yeah, I sort of referred to it in this notion of translating the work you're doing on the DPI side and bringing that technology into more monetizable products. Insurance actually ends up being one of the first monetizable products that comes out of this.

Akshara Kaginalkar

We can take just one more question maybe, and we can always discuss outside, because this is a very good opportunity: we have the experts here. I have a few questions myself, but I'll ask you outside.

Shivkumar Kalayanaraman

I just want to quickly mention that we have announced, at this AI Summit, partnerships with NVIDIA, with Google and Qualcomm, and we are doing other things with the Gates Foundation. So there are many things happening, and I invite my colleagues here to work with us more and to focus on India as well, in addition to the world.

Akshara Kaginalkar

Thank you, sir. We would like to thank all of you; it was great listening to you, and we look forward to more. And see, don't get me wrong: I was thinking, you know, there are eight people and I am the only one, and I was thinking it should be an equal number, and that disturbed me. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M. Ravichandran
2 arguments · 169 words per minute · 744 words · 263 seconds
Argument 1
Need to integrate physics-based models with AI for spatial and temporal predictions, especially for extreme events like cloudbursts
EXPLANATION
Ravichandran argues that weather prediction has evolved from seeing the ‘elephant’ (large-scale patterns) to needing to see both the ‘elephant and the ant’ (fine-scale details). He emphasizes that physics-based numerical models work better for spatial predictions while AI excels at time series and local patterns, requiring integration of both approaches for predicting high-impact weather events like cloudbursts.
EVIDENCE
Uses metaphor of elephant and ant to illustrate scale differences; mentions that cloudburst prediction is currently not possible with either numerical models or AI alone
MAJOR DISCUSSION POINT
Integration of physics-based models with AI for weather prediction
AGREED WITH
Karthik Kashinath, Akshara Kaginalkar
DISAGREED WITH
Shivkumar Kalayanaraman
Argument 2
India’s strength lies in 150+ years of weather data that needs to be opened up for diverse researchers to reduce forecast errors and uncertainties
EXPLANATION
Ravichandran highlights India’s data advantage through IMD’s 150-year legacy dataset and emphasizes the need to utilize young talent from different disciplines. He argues that opening up data access would allow diverse perspectives to minimize errors in initial conditions and reduce forecast uncertainties through various AI techniques including downscaling.
EVIDENCE
Mentions IMD’s 150+ years of historical data; notes that people from biology and other disciplines are working in AI/ML and could contribute differently to weather prediction
MAJOR DISCUSSION POINT
Data accessibility and interdisciplinary collaboration
AGREED WITH
Karthik Kashinath
S
Shivkumar Kalayanaraman
3 arguments · 178 words per minute · 924 words · 311 seconds
Argument 1
Multimodal models combining time series and spatial data with generative AI can forecast weather patterns using simple camera networks
EXPLANATION
Kalayanaraman argues that generative AI enables simple camera systems pointed at the sky to not only observe cloud patterns but also forecast weather 1-4 hours ahead. He suggests that with dropping costs of multispectral cameras and low Earth orbit satellites, a network of sensors can complement existing systems like Mission Mouse by fusing insights rather than raw data.
EVIDENCE
References the Mission Mausam program; mentions low Earth orbit satellites and dropping costs of multispectral cameras; notes the Nobel Prize in Chemistry went to someone from an AI background
MAJOR DISCUSSION POINT
Multimodal AI models for weather forecasting
DISAGREED WITH
M. Ravichandran
Argument 2
ANRF provides grant funding for research entities and RDI fund for private sector, focusing on mission-mode programs like AI for Weather and Climate
EXPLANATION
Kalayanaraman explains that ANRF operates as a statutory body providing grant funding for not-for-profit research sectors and a one lakh crore RDI fund exclusively for private sector. The organization focuses on both broad-based funding and mission-mode programs, including a specific AI for Weather and Climate track within their AI for Science and Engineering program.
EVIDENCE
Mentions the one lakh crore RDI fund; references collaboration with MoES and the Mission Mausam program; announces the AI for Science and Engineering hackathon with IBM and IIT Delhi
MAJOR DISCUSSION POINT
Research funding mechanisms and institutional support
Argument 3
Leapfrog Demonstrators for Societal Innovation program will support collaborative proposals that address real societal problems with demonstrable impact
EXPLANATION
Kalayanaraman describes a new program launching within a month that focuses on taking societal problems and creating leapfrog demonstrators rather than incremental solutions. The program emphasizes collaborative proposals, consortiums, and hub-and-spoke setups rather than individual proposals, with explicit requirements for industry-academia partnerships.
EVIDENCE
Program launching in about a month; mentions open IP licensing mechanisms; references Translational Research Centers program with industry partnership mandates
MAJOR DISCUSSION POINT
Collaborative research programs for societal impact
AGREED WITH
Sandeep Singhal, Akshara Kaginalkar
K
Karthik Kashinath
3 arguments · 166 words per minute · 436 words · 157 seconds
Argument 1
Transfer learning can help apply weather models across different regions while accounting for hyperlocal variations
EXPLANATION
Kashinath argues that while weather and climate physics are universal across the planet, there are unique characteristics at hyperlocal scales. Transfer learning can efficiently move knowledge from data-rich regions to data-sparse areas, but this must be done with careful constraints about what exactly is being transferred to maintain accuracy and relevance.
EVIDENCE
Notes that physics of weather and climate are the same globally but with hyperlocal uniqueness; mentions data-rich vs data-sparse regions
MAJOR DISCUSSION POINT
Transfer learning for regional weather model adaptation
Argument 2
Benchmark datasets and metrics at hyperlocal scales are needed to drive AI development, similar to how ImageNet revolutionized computer vision
EXPLANATION
Kashinath emphasizes that achieving operational quality at hyperlocal scales requires creating benchmark datasets and metrics specific to regional needs and problem types. He draws a parallel to ImageNet, which drove the AI revolution in deep learning 12 years ago by providing standardized benchmarks, suggesting the same approach could work for hyperlocal weather prediction.
EVIDENCE
References the ERA5 dataset from ECMWF and WeatherBench as examples of successful benchmarks at global scale; cites ImageNet as the catalyst for the AI revolution in deep learning
MAJOR DISCUSSION POINT
Standardized benchmarks for hyperlocal weather prediction
AGREED WITH
M. Ravichandran
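Benchmark suites of the kind Kashinath describes (e.g. WeatherBench) score forecasts against standardized metrics such as root-mean-square error and the anomaly correlation coefficient (ACC). A minimal illustrative sketch in pure Python; the toy "fields" and values are hypothetical, not from the discussion:

```python
import math

def rmse(forecast, observed):
    """Root-mean-square error between two equal-length gridded fields."""
    n = len(forecast)
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n)

def acc(forecast, observed, climatology):
    """Anomaly correlation coefficient: correlation of forecast and
    observed anomalies relative to a climatological mean field."""
    fa = [f - c for f, c in zip(forecast, climatology)]
    oa = [o - c for o, c in zip(observed, climatology)]
    num = sum(f * o for f, o in zip(fa, oa))
    den = math.sqrt(sum(f * f for f in fa) * sum(o * o for o in oa))
    return num / den

# Hypothetical 4-point temperature transect (degrees C)
clim = [25.0, 26.0, 27.0, 28.0]   # climatological mean
obs  = [26.0, 27.5, 26.5, 29.0]   # what actually happened
fcst = [25.8, 27.2, 26.8, 28.7]   # the model's forecast

print(round(rmse(fcst, obs), 3))  # low error
print(round(acc(fcst, obs, clim), 3))  # near 1.0: anomalies well captured
```

A hyperlocal benchmark in the ImageNet sense would pair datasets like this (at far finer resolution) with fixed metrics so competing models can be compared on equal terms.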
Argument 3
Superresolution techniques can downscale global models from 25km to 1km resolution and potentially to hyperlocal scales
EXPLANATION
Kashinath explains that AI has demonstrated powerful superresolution capabilities, which are already being applied in the Earth2 program to take 25-kilometer resolution data and super-resolve it to one-kilometer resolution. He suggests that generative AI technologies needed to extend this to hyperlocal scales either already exist or will be developed within 2-3 years.
EVIDENCE
References Earth2 program’s current work on 25km to 1km super-resolution; mentions decades of downscaling work in weather and climate; notes generative AI capabilities
MAJOR DISCUSSION POINT
AI-powered downscaling for hyperlocal weather prediction
AGREED WITH
M. Ravichandran, Akshara Kaginalkar
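As a point of reference for what a learned super-resolver must beat, classical downscaling starts from plain interpolation of the coarse grid. A toy 1-D sketch in pure Python; the 25x upscale factor mirrors the 25 km to 1 km example, and the values are hypothetical:

```python
def upsample_linear(coarse, factor):
    """Linearly interpolate a coarse 1-D field onto a grid `factor`
    times finer. A learned super-resolution model would add back the
    fine-scale detail this smooth baseline cannot recover."""
    fine = []
    for i in range(len(coarse) - 1):
        a, b = coarse[i], coarse[i + 1]
        for k in range(factor):
            t = k / factor
            fine.append(a * (1 - t) + b * t)
    fine.append(coarse[-1])
    return fine

# Hypothetical coarse temperature transect: one value per 25 km cell
coarse = [24.0, 26.0, 25.0]
fine = upsample_linear(coarse, 25)  # roughly one value per km
print(len(fine))  # 51 points spanning the same transect
```

Generative super-resolution models are trained to output fields that are both consistent with the coarse input and statistically realistic at the fine scale, which simple interpolation is not.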
P
Praphul Chandra
2 arguments · 156 words per minute · 373 words · 142 seconds
Argument 1
Small data fine-tuning of large foundation models could enable specific use case applications with minimal data requirements
EXPLANATION
Chandra argues that while large foundation models are general in applicability, the key breakthrough needed is the ability to fine-tune them for specific use cases with very small datasets. This capability would have applications across multiple domains beyond weather and climate, making specialized applications more accessible and practical.
EVIDENCE
References work across AI foundation models in biology, materials, and weather; notes that foundation models are fairly general but require data for fine-tuning
MAJOR DISCUSSION POINT
Small data fine-tuning for specialized AI applications
DISAGREED WITH
Amit Sheth
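The fine-tuning pattern Chandra describes is commonly implemented by freezing a pretrained backbone and training only a small head on the few available samples. A minimal sketch under that assumption; the "backbone" here is a stand-in fixed feature map, and all names and data are hypothetical:

```python
def backbone(x):
    """Stand-in for a frozen pretrained feature extractor: its weights
    are never updated during fine-tuning."""
    return [x, x * x]

def finetune_head(samples, lr=0.01, steps=2000):
    """Fit a small linear head (w . features + b) on a handful of
    (x, y) pairs by plain gradient descent; only w and b are trained."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(steps):
        for x, y in samples:
            f = backbone(x)
            pred = sum(wi * fi for wi, fi in zip(w, f)) + b
            err = pred - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

# "Small data": four labelled points drawn from y = 2*x^2 + 1
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 9.0), (-1.0, 3.0)]
w, b = finetune_head(data)

# Predict at an unseen point; the frozen features make four samples enough
pred = sum(wi * fi for wi, fi in zip(w, backbone(3.0))) + b
print(round(pred, 1))  # close to the true value 19.0
```

The design point is the same one Chandra makes: because the expensive general knowledge lives in the frozen backbone, only a few parameters need data, so a very small labelled set can suffice.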
Argument 2
AI weather forecasting can enable renewable energy grid management by predicting solar panel output for demand flexibility and energy trading
EXPLANATION
Chandra highlights the intersection of weather and energy, particularly as India transitions from fossil fuels to renewable energy dominated by solar power. He explains that hyperlocal weather forecasting models can predict individual rooftop solar panel energy generation, which is critical for grid management, energy trading between consumers and producers, and demand flexibility in data centers.
EVIDENCE
References team demo combining India Energy Stack with AI models for weather forecasting and grid load prediction; mentions Ministry of Power’s digital public infrastructure
MAJOR DISCUSSION POINT
Integration of AI weather prediction with renewable energy systems
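The rooftop-solar link can be made concrete with the standard back-of-envelope PV model: expected output is forecast irradiance times array area, panel efficiency, and a performance ratio covering real-world losses. A hedged sketch with hypothetical numbers (not figures from the discussion):

```python
def solar_output_kw(irradiance_w_m2, area_m2,
                    efficiency=0.20, performance_ratio=0.8):
    """Rule-of-thumb PV output: irradiance x area x panel efficiency
    x performance ratio (wiring, soiling, temperature losses)."""
    return irradiance_w_m2 * area_m2 * efficiency * performance_ratio / 1000.0

# Hypothetical hourly irradiance forecast (W/m^2) for one rooftop,
# as a hyperlocal weather model might supply it
forecast = [200, 450, 700, 850, 900, 820, 600, 300]
hourly_kw = [solar_output_kw(g, area_m2=10.0) for g in forecast]
energy_kwh = sum(hourly_kw)  # 1-hour steps, so kW sums to kWh
print(round(energy_kwh, 2))  # expected generation for the day
```

Aggregated across many rooftops, per-site estimates like this are what lets a grid operator schedule demand flexibility or price energy trades ahead of time, which is the use case Chandra points to.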
D
Dev Niyogi
2 arguments · 193 words per minute · 448 words · 139 seconds
Argument 1
AI can map human behavioral elements and societal aspects with physical constraints for more accessible predictions
EXPLANATION
Niyogi introduces the concept of ‘Jugaad’ (Indian innovation) to explain how humans always find ways to beat or work around systems, which has been difficult to incorporate into predictive models. He argues that AI’s most exciting capability is bringing together human dimensions and societal aspects with physical constraints, making predictions more accessible and realistic.
EVIDENCE
Uses the term ‘Jugaad’ as a uniquely Indian approach to problem-solving; mentions agentic AI and references the ‘ant AI’ concept
MAJOR DISCUSSION POINT
Integration of human behavior with physical modeling
Argument 2
Digital twins should focus on decision-specific modeling rather than predicting every variable, transforming weather data into actionable intelligence
EXPLANATION
Niyogi argues that weather prediction should move from being a ‘tragedy of commons’ where everyone is affected but no one can pay, to creating monetizable, decision-specific products. He emphasizes that digital twins should be designed around specific decisions rather than trying to predict every variable at every scale, focusing on transforming weather output into actionable intelligence for specific user needs.
EVIDENCE
References Director General Mahapatra’s mention of simple, scalable, transferable box models; uses examples ranging from long-term hedging decisions to short-term decisions like walking in shade
MAJOR DISCUSSION POINT
Decision-focused weather prediction systems
M
Manish Bhardwaj
2 arguments · 125 words per minute · 804 words · 384 seconds
Argument 1
Trusted early warning systems for all citizens require hybrid models connecting AI with physical sensor networks and satellite data
EXPLANATION
Bhardwaj emphasizes the need for Digital Public Good (DPG) assets that provide trusted, reliable, and resilient early warning systems to all citizens at low cost. He argues that this cannot be purely AI-based but must be a hybrid model that integrates with physical sensor networks and satellite data from various alert-generating agencies.
EVIDENCE
References India’s success in cyclone prediction with five-day advance warning and zero mortality achievements; mentions various alert-generating agencies and satellite data sources
MAJOR DISCUSSION POINT
Hybrid early warning systems for disaster management
AGREED WITH
M. Ravichandran
Argument 2
Multi-hazard cascading scenarios need AI to analyze multiple data sources for better granular forecasting and targeted population warnings
EXPLANATION
Bhardwaj describes the challenge of multi-hazard, cascading disaster scenarios where one hazard leads to others, such as cloudbursts leading to landslides and flash floods. He argues that AI can help analyze data from multiple terrestrial, satellite, and sensor sources to improve granular forecasting and provide targeted warnings to specific populations, especially in areas where comprehensive sensor coverage is not feasible.
EVIDENCE
Mentions specific examples like cloudbursts leading to landslides and flash floods; references limitations in mapping every hill in vulnerable Himalayan areas; notes collaboration with IMD and Ministry of Earth Sciences
MAJOR DISCUSSION POINT
AI for complex multi-hazard disaster scenarios
S
Sandeep Singhal
4 arguments · 171 words per minute · 580 words · 203 seconds
Argument 1
Voice-based consumer applications should provide personalized resilience guidance for different user types during climate events
EXPLANATION
Singhal argues for developing simple voice framework applications that go beyond just providing information to actually creating resilience approaches for individuals. These applications should provide personalized guidance for different user types (farmers, urban dwellers, etc.) on what specific actions to take during climate events, integrating forecasting with daily life decisions.
EVIDENCE
Mentions current developments in voice agents; provides examples of different user scenarios like farmers and urban residents; emphasizes integration with daily air quality and life decisions
MAJOR DISCUSSION POINT
Consumer-focused climate resilience applications
Argument 2
Startups need government partnerships for data access and deployment, while segmenting markets between public good and monetizable private applications
EXPLANATION
Singhal advises that climate-focused startups must establish partnerships with government institutions for data access and deployment, as government drives most climate-related initiatives. He emphasizes the need to segment markets between general population services (requiring government collaboration) and specific business applications where core operations are impacted by weather events and companies are willing to pay.
EVIDENCE
References portfolio companies working in climate space; mentions fund-level relationships with ministries; notes impact of weather events on core business operations
MAJOR DISCUSSION POINT
Business models for climate technology startups
Argument 3
Public-private partnerships are essential for scaling solutions, with philanthropic capital also becoming available for large-scale programs
EXPLANATION
Singhal argues that collaboration is crucial both for deployment and funding of climate solutions. He highlights the importance of public-private partnerships and notes that philanthropic capital is increasingly available for programs that can demonstrate scale, complementing government initiatives like ANRF and RDI funding mechanisms.
EVIDENCE
References ANRF and RDI capital availability; mentions philanthropists looking at scalable programs; emphasizes collaboration in both deployment and funding
MAJOR DISCUSSION POINT
Funding mechanisms for climate technology scaling
AGREED WITH
Shivkumar Kalayanaraman, Akshara Kaginalkar
Argument 4
Insurance represents one of the first monetizable products that can emerge from climate risk prediction technology
EXPLANATION
In response to an audience question about insurance and climate risk, Singhal identifies insurance as a primary monetizable application that can emerge from climate prediction technology developed for public good. He suggests that the technology developed for Digital Public Infrastructure can be translated into commercial insurance products.
EVIDENCE
Responds to audience question about insurance rates in Northern California and Florida; connects to earlier discussion about translating DPI work into monetizable projects
MAJOR DISCUSSION POINT
Insurance as a commercial application of climate prediction
A
Amit Sheth
1 argument · 141 words per minute · 394 words · 166 seconds
Argument 1
IRO focuses on building small, agile, specific models rather than large language models, with earth science as a key vertical
EXPLANATION
Sheth explains that IRO was initiated after discussions with the PM in December 2023, focusing on areas where India can shine independently rather than following Western or Chinese approaches. The organization develops original research on small, agile, specific models for applications like hyper-local extreme weather prediction, avoiding large language models that come with unknown training data and other baggage.
EVIDENCE
References meeting with PM in December 2023; mentions focus on enterprise rather than consumer applications; cites partnerships with Indian Pharma Alliance representing 80% of India’s pharma output
MAJOR DISCUSSION POINT
India-specific AI research organization strategy
DISAGREED WITH
Praphul Chandra
A
Audience
1 argument · 154 words per minute · 75 words · 29 seconds
Argument 1
Climate risk increasingly affects insurance rates and availability, requiring integration of prediction technology with risk assessment
EXPLANATION
An audience member raises the issue of climate risk manifesting in extremely high insurance rates or complete unavailability of insurance coverage, as seen in Northern California and Florida. They question how to address the challenge of people who must remain in high-risk areas but face unaffordable or unavailable insurance, suggesting the need to integrate climate prediction technology with insurance and risk assessment.
EVIDENCE
References specific examples from Northern California and Florida where houses become uninsured due to climate risk
MAJOR DISCUSSION POINT
Insurance challenges in climate-risk areas
A
Akshara Kaginalkar
5 arguments · 146 words per minute · 2193 words · 897 seconds
Argument 1
AI solutions for India’s critical weather and climate problems require a top-down approach with diverse expertise from various domains
EXPLANATION
Kaginalkar emphasizes that addressing India’s weather and climate challenges through AI requires bringing together experts from multiple fields including earth sciences, venture capital, disaster management, data sciences, and AI modeling. She advocates for a comprehensive approach that considers weather and climate as drivers of broader sectors like energy, health, economics, and agriculture.
EVIDENCE
Assembles panel with experts from Ministry of Earth Sciences, venture capital, UT Austin/IIT Roorkee, NRF, NDMA, data sciences, and NVIDIA; mentions weather and climate affecting energy, health, economics, and agriculture
MAJOR DISCUSSION POINT
Interdisciplinary collaboration for AI climate solutions
Argument 2
Time constraints in policy and implementation discussions require efficient formats while maintaining comprehensive coverage of complex topics
EXPLANATION
Kaginalkar acknowledges the challenge of condensing what should be a full-day workshop into a shorter panel format, emphasizing the need to balance thorough discussion with practical time limitations. She stresses the importance of hearing from all experts while managing constraints effectively.
EVIDENCE
Mentions having to squeeze a full-day workshop format into shorter time; repeatedly references time constraints and need to finish on schedule
MAJOR DISCUSSION POINT
Balancing comprehensive policy discussion with practical constraints
Argument 3
Operational AI solutions require integration across the complete spectrum from monitoring to end-user delivery, with digital twins as a potential comprehensive framework
EXPLANATION
Kaginalkar argues that effective AI applications for climate must span the entire pipeline from data monitoring and processing to modeling and reaching end users. She suggests digital twins could provide a complete portfolio of AI applications that addresses all these components in an integrated manner.
EVIDENCE
References the complete AI spectra from monitoring to processing to modeling to end users; mentions digital twins as a comprehensive framework
MAJOR DISCUSSION POINT
End-to-end AI system integration for climate applications
Argument 4
Hyperlocal scale predictions are essential for practical climate applications, requiring resolution down to hundreds of meters for air quality and tens of meters for flood management
EXPLANATION
Kaginalkar emphasizes that practical climate and weather applications need extremely fine-scale predictions, citing specific examples of 400-meter resolution for air quality monitoring and 10-meter resolution for flood management. She argues that this level of granularity is necessary for actionable climate services.
EVIDENCE
Provides specific examples of 400-meter resolution for air quality and 10-meter resolution for floods; references discussion with expert about dew effects on immediate temperature
MAJOR DISCUSSION POINT
Hyperlocal scale requirements for practical climate services
Argument 5
India’s diverse climate regions require region-specific AI models rather than one-size-fits-all solutions
EXPLANATION
Kaginalkar argues that India’s vast climatic diversity from north to south and east to west necessitates developing small, region-specific AI models rather than attempting to create universal solutions. She suggests that public-private partnerships will be essential for developing these localized models.
EVIDENCE
References India’s very different climate across north, south, east, west regions; mentions need for small models for specific regions
MAJOR DISCUSSION POINT
Regional customization of AI climate models for India
Agreements
Agreement Points
Integration of AI with physics-based models is essential for effective weather prediction
Speakers: M. Ravichandran, Manish Bhardwaj
Need to integrate physics-based models with AI for spatial and temporal predictions, especially for extreme events like cloudbursts
Trusted early warning systems for all citizens require hybrid models connecting AI with physical sensor networks and satellite data
Both speakers emphasize that purely AI-based solutions are insufficient and that hybrid approaches combining AI with physics-based models and physical sensor networks are necessary for reliable weather prediction and early warning systems
Collaboration and partnerships are crucial for scaling AI climate solutions
Speakers: Shivkumar Kalayanaraman, Sandeep Singhal, Akshara Kaginalkar
Leapfrog Demonstrators for Societal Innovation program will support collaborative proposals that address real societal problems with demonstrable impact
Public-private partnerships are essential for scaling solutions, with philanthropic capital also becoming available for large-scale programs
Interdisciplinary collaboration for AI climate solutions
All three speakers stress the importance of collaborative approaches, whether through government programs encouraging consortiums, public-private partnerships for funding and deployment, or interdisciplinary expertise integration
Hyperlocal scale predictions are critical for practical applications
Speakers: M. Ravichandran, Karthik Kashinath, Akshara Kaginalkar
Need to integrate physics-based models with AI for spatial and temporal predictions, especially for extreme events like cloudbursts
Superresolution techniques can downscale global models from 25km to 1km resolution and potentially to hyperlocal scales
Hyperlocal scale requirements for practical climate services
All speakers recognize that moving from large-scale global models to hyperlocal predictions (down to meters or kilometers) is essential for practical climate applications and disaster management
Data accessibility and standardization are fundamental requirements
Speakers: M. Ravichandran, Karthik Kashinath
India’s strength lies in 150+ years of weather data that needs to be opened up for diverse researchers to reduce forecast errors and uncertainties
Benchmark datasets and metrics at hyperlocal scales are needed to drive AI development, similar to how ImageNet revolutionized computer vision
Both speakers emphasize that making data accessible to diverse researchers and creating standardized benchmark datasets are essential for advancing AI applications in weather and climate
Similar Viewpoints
Both speakers advocate for application-specific AI solutions that focus on particular decision-making needs rather than general-purpose predictions, emphasizing practical utility over comprehensive modeling
Speakers: Dev Niyogi, Praphul Chandra
Digital twins should focus on decision-specific modeling rather than predicting every variable, transforming weather data into actionable intelligence
AI weather forecasting can enable renewable energy grid management by predicting solar panel output for demand flexibility and energy trading
Both speakers recognize the need for differentiated funding and business models that separate public good applications (requiring government support) from commercial applications (that can be monetized)
Speakers: Shivkumar Kalayanaraman, Sandeep Singhal
ANRF provides grant funding for research entities and RDI fund for private sector, focusing on mission-mode programs like AI for Weather and Climate
Startups need government partnerships for data access and deployment, while segmenting markets between public good and monetizable private applications
Both speakers focus on techniques to make large AI models more accessible and applicable to specific use cases with limited data, whether through transfer learning or small data fine-tuning
Speakers: Karthik Kashinath, Praphul Chandra
Transfer learning can help apply weather models across different regions while accounting for hyperlocal variations
Small data fine-tuning of large foundation models could enable specific use case applications with minimal data requirements
Unexpected Consensus
Need for interdisciplinary collaboration beyond traditional weather science
Speakers: M. Ravichandran, Dev Niyogi, Akshara Kaginalkar
India’s strength lies in 150+ years of weather data that needs to be opened up for diverse researchers to reduce forecast errors and uncertainties
AI can map human behavioral elements and societal aspects with physical constraints for more accessible predictions
Interdisciplinary collaboration for AI climate solutions
It’s unexpected that a government meteorological official (Ravichandran) would strongly advocate for opening up data to researchers from completely different disciplines like biology, aligning with academic perspectives on interdisciplinary approaches
Focus on small, specialized models rather than large general models
Speakers: Amit Sheth, Praphul Chandra, Dev Niyogi
IRO focuses on building small, agile, specific models rather than large language models, with earth science as a key vertical
Small data fine-tuning of large foundation models could enable specific use case applications with minimal data requirements
Digital twins should focus on decision-specific modeling rather than predicting every variable, transforming weather data into actionable intelligence
There’s unexpected consensus across different types of organizations (research institute, academia, and industry-academia hybrid) on moving away from the current trend of ever-larger AI models toward smaller, more specialized solutions
Overall Assessment

The speakers demonstrate strong consensus on the need for hybrid AI-physics approaches, collaborative partnerships, hyperlocal scale predictions, and data accessibility. There’s also unexpected agreement on interdisciplinary collaboration and preference for specialized over general AI models.

High level of consensus with complementary rather than conflicting viewpoints. The implications are positive for coordinated action, as speakers from different sectors (government, academia, industry, funding) align on key principles while bringing different expertise to implementation. This suggests a mature understanding of the challenges and realistic pathways forward for AI applications in climate and weather prediction.

Differences
Different Viewpoints
Approach to AI model development – large foundation models vs. small specific models
Speakers: Amit Sheth, Praphul Chandra
IRO focuses on building small, agile, specific models rather than large language models, with earth science as a key vertical
Small data fine-tuning of large foundation models could enable specific use case applications with minimal data requirements
Sheth advocates for building original small, agile models from scratch to avoid the ‘baggage’ of large language models with unknown training data, while Chandra sees potential in fine-tuning existing large foundation models with small datasets for specific applications
Data fusion vs. insight fusion approaches
Speakers: Shivkumar Kalayanaraman, M. Ravichandran
Multimodal models combining time series and spatial data with generative AI can forecast weather patterns using simple camera networks
Need to integrate physics-based models with AI for spatial and temporal predictions, especially for extreme events like cloudbursts
Kalayanaraman emphasizes fusing insights from different modes rather than raw data fusion, advocating for generative AI with simple sensor networks, while Ravichandran focuses on integrating traditional physics-based numerical models with AI approaches
Unexpected Differences
Role of interdisciplinary collaboration in weather prediction
Speakers: M. Ravichandran
India’s strength lies in 150+ years of weather data that needs to be opened up for diverse researchers to reduce forecast errors and uncertainties
Ravichandran uniquely emphasized bringing in researchers from biology and other non-meteorological disciplines to work on weather data, suggesting that weather experts may be too constrained by traditional thinking. This was unexpected as other speakers focused on technical integration rather than disciplinary diversity
Consumer-focused vs. institutional applications
Speakers: Sandeep Singhal, Other speakers
Voice-based consumer applications should provide personalized resilience guidance for different user types during climate events
Singhal was the only speaker to emphasize direct consumer applications with voice interfaces for individual resilience, while other speakers focused on institutional, research, or infrastructure-level solutions. This represents an unexpected divide between consumer-facing and institutional approaches
Overall Assessment

The discussion revealed surprisingly few fundamental disagreements among speakers, with most conflicts centered on technical approaches rather than goals. Main areas of disagreement included AI model development strategies (small specific models vs. fine-tuned large models) and implementation approaches (data fusion vs. insight fusion). Most speakers agreed on the need for hybrid AI-physics approaches, public-private partnerships, and hyperlocal applications.

Low to moderate disagreement level with high convergence on goals but some divergence on methods. The implications are positive for the field as speakers complement rather than contradict each other, suggesting multiple viable pathways toward AI-enabled climate solutions. The main challenge will be coordinating these different approaches rather than resolving fundamental conflicts.

Partial Agreements
All speakers agree on the need for hybrid approaches combining AI with traditional methods, but disagree on the specific implementation – Ravichandran emphasizes physics-based model integration, Bhardwaj focuses on sensor network integration, and Kalayanaraman advocates for insight fusion over data fusion
Speakers: M. Ravichandran, Manish Bhardwaj, Shivkumar Kalayanaraman
Need to integrate physics-based models with AI for spatial and temporal predictions, especially for extreme events like cloudbursts
Trusted early warning systems for all citizens require hybrid models connecting AI with physical sensor networks and satellite data
Multimodal models combining time series and spatial data with generative AI can forecast weather patterns using simple camera networks
Both agree on the importance of public-private partnerships and funding mechanisms, but Singhal emphasizes market segmentation between public good and commercial applications, while Kalayanaraman focuses on institutional funding structures and collaborative research programs
Speakers: Sandeep Singhal, Shivkumar Kalayanaraman
Startups need government partnerships for data access and deployment, while segmenting markets between public good and monetizable private applications
ANRF provides grant funding for research entities and RDI fund for private sector, focusing on mission-mode programs like AI for Weather and Climate
Both agree on the need for more targeted, practical applications, but Niyogi emphasizes decision-focused modeling for specific user needs, while Kashinath focuses on creating standardized benchmarks and datasets to drive technical development
Speakers: Dev Niyogi, Karthik Kashinath
Digital twins should focus on decision-specific modeling rather than predicting every variable, transforming weather data into actionable intelligence
Benchmark datasets and metrics at hyperlocal scales are needed to drive AI development, similar to how ImageNet revolutionized computer vision
Takeaways
Key takeaways
AI integration with physics-based weather models is essential for accurate hyperlocal predictions, especially for extreme events like cloudbursts that current numerical models cannot predict effectively
India’s competitive advantage lies in its 150+ years of weather data and young talent pool, but this requires opening up data access to diverse researchers from multiple disciplines beyond traditional meteorology
Trusted early warning systems for disaster management need hybrid AI models that combine multiple data sources (satellite, sensor networks, terrestrial) to provide granular, targeted warnings to vulnerable populations
Transfer learning and small data fine-tuning techniques can enable AI weather models to work across different regions while maintaining hyperlocal specificity
Digital twins should be decision-focused rather than attempting to predict every variable, transforming weather data into actionable intelligence for specific use cases
Public-private partnerships are critical for scaling AI climate solutions, with government providing data access and deployment infrastructure while private sector handles monetization
The intersection of AI weather forecasting with renewable energy grid management represents a significant opportunity for India’s energy transition
Resolutions and action items
ANRF announced collaboration with MoES on the Mission Mausam program for AI Weather and Climate applications
ANRF launched an AI for Science and Engineering hackathon for Weather and Climate in partnership with IBM and IIT Delhi
ANRF will launch the Leapfrog Demonstrators for Societal Innovation program within a month to support collaborative proposals addressing societal problems
IRO partnerships announced with NVIDIA, Google, Qualcomm, and the Gates Foundation to focus on India-specific AI applications
Need to create benchmark datasets and metrics for hyperlocal-scale weather prediction, similar to ImageNet for computer vision
Requirement to open up India’s weather data for broader research community access to enable diverse approaches to error reduction
Unresolved issues
How to predict cloudbursts and other extreme weather events that current models cannot forecast
Specific mechanisms for validating and building trust in AI-enabled weather prediction systems for operational use
Technical details of how to effectively integrate physics-based models with AI approaches without losing interpretability
Scalability challenges of deploying hyperlocal AI weather models across India’s diverse geographic and climatic regions
How to balance the computational requirements of advanced AI models with practical deployment constraints
Specific frameworks for translating AI weather predictions into actionable guidance for different user segments (farmers, urban populations, etc.)
Integration challenges between climate risk prediction technology and insurance industry applications
Suggested compromises
Hybrid approach combining physics-based numerical models for spatial predictions with AI for time series and local patterns, rather than purely AI-based solutions
Segmented market approach where public-good applications are funded through government partnerships while private monetizable applications support business sustainability
Collaborative consortium-based research proposals rather than individual efforts, to leverage diverse expertise and resources
Focus on small, agile, specific AI models for particular use cases rather than attempting to build large general-purpose foundation models
Gradual scaling from global 25 km resolution models to 1 km through superresolution techniques, then further to hyperlocal scales as the technology matures
Open IP licensing arrangements to encourage rapid translation from academic research to industry applications
Thought Provoking Comments
When you talk about the weather… earlier we used to tell, suppose, how the elephant is going. I’m able to see that elephant, how it is going; I’m able to tell that tomorrow it will come here. But now the problem is, because of climate change and other things, the space and time has changed. Now we have to see, on the elephant, some ant is sitting. That ant, how it is going, we want to know. So we want to see the elephant plus the ant.
This metaphor brilliantly captures the fundamental challenge in modern weather prediction – the need to understand both macro and micro-scale phenomena simultaneously. It illustrates how climate change has made weather systems more complex and unpredictable, requiring unprecedented granularity in forecasting.
This comment set the tone for the entire discussion by establishing the core challenge that AI must address. It influenced subsequent speakers to focus on multi-scale, multi-modal approaches and the need for hybrid physics-AI models. The metaphor became a reference point for discussing the complexity of modern weather prediction.
Speaker: M. Ravichandran
I think I’ll just double down on the multimodal models that are coming out… today with generative AI you can just put a camera pointed at the sky, and you can not only see the patterns of clouds, you can forecast one hour ahead… the opportunity is to fuse insights as opposed to fusing data. I mean, data fusion is a painfully, mind-bogglingly and unnecessarily complex thing.
This comment introduces a paradigm shift from traditional data fusion to insight fusion, which is a more sophisticated approach. It also highlights how accessible technology (simple cameras) can now provide sophisticated forecasting capabilities, democratizing weather prediction.
This shifted the conversation from discussing complex technical infrastructure to more accessible, scalable solutions. It influenced the discussion toward practical, deployable technologies and reinforced the theme of making AI weather solutions more democratized and cost-effective.
Speaker: Shivkumar Kalayanaraman
I’ll just add one term, you guys know this word, Jugaad; this is a very Indian thing, Jugaad… there is a framework that is mathematically feasible, that we can model very well, that follows equations, that follows laws of nature; and then there is a human element, that we always beat the system and make things happen. Mapping that has been very difficult in predictive models, and this is where I think AI is coming into play: it brings the human dimensions and it brings the societal aspect together with the physical constraints
This comment brilliantly connects Indian cultural innovation (‘Jugaad’) with AI’s capability to model human behavior and societal factors alongside physical phenomena. It recognizes that weather prediction isn’t just about physics but about how humans adapt and respond to weather systems.
This comment introduced a uniquely Indian perspective to the global AI-weather discussion and emphasized the importance of incorporating human behavioral patterns into predictive models. It influenced the conversation to consider cultural and social factors in AI model development, making the discussion more holistic and locally relevant.
Speaker: Dev Niyogi
Weather is the tragedy of commons. Everyone is affected by it, but no one can pay for it… People don’t need weather. They need weather that can help them make a decision… we need to move from simply creating the weather output to adding something which is going to help me make an intelligent decision
This comment reframes the entire value proposition of weather prediction from a technical exercise to a decision-support system. It addresses the fundamental economic challenge of weather services and proposes a solution-oriented approach that focuses on actionable intelligence rather than raw data.
This comment fundamentally shifted the discussion from technical capabilities to user-centric value creation. It influenced the conversation toward practical applications and monetization strategies, connecting with earlier points about public-private partnerships and making weather services economically sustainable.
Speaker: Dev Niyogi
The breakthrough that I am most anxious to look for is what we call small data fine tuning… when you have to fine tune them for a specific use case you still need data. How small can that data be? Can you use small data to fine tune large foundation models?
This addresses a critical practical challenge in AI deployment – the data requirement paradox. While foundation models are powerful, they often require substantial data for fine-tuning, which may not be available for specific local applications. This comment identifies a key technical breakthrough needed for widespread adoption.
This comment focused the technical discussion on a specific, solvable problem that could unlock broader AI applications in weather and climate. It influenced subsequent discussions about transfer learning and the practical deployment of AI models in data-sparse regions.
Speaker: Praphul Chandra
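As a concrete aside, the small-data fine-tuning pattern can be sketched in a few lines. The sketch below is purely illustrative and not from the session: a frozen random feature map stands in for a pretrained foundation model, and only a small linear head (a ridge regression) is fitted on 20 local samples, so the "foundation" weights never change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained foundation model: a fixed feature
# extractor whose weights are never updated during fine-tuning.
W_frozen = rng.normal(size=(8, 64))

def frozen_features(x):
    # x: (n_samples, 8) raw readings -> (n_samples, 64) embeddings
    return np.tanh(x @ W_frozen)

# "Small data": only 20 labelled samples from the new hyperlocal site.
X_small = rng.normal(size=(20, 8))
y_small = 2.0 * X_small[:, 0] + 0.1 * rng.normal(size=20)  # synthetic target

# Fine-tune ONLY a lightweight linear head via ridge regression.
Phi = frozen_features(X_small)
lam = 1e-2
head = np.linalg.solve(Phi.T @ Phi + lam * np.eye(64), Phi.T @ y_small)

def predict(x):
    return frozen_features(x) @ head

X_test = rng.normal(size=(5, 8))
print(predict(X_test))  # fine-tuned predictions for 5 held-out points
```

The open question raised here, how small the adaptation set can be before the head overfits, is exactly what varying the 20 samples above would probe.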
One thing I would like to see more used in practice is transfer learning, because of course some regions of the world are data rich and some others are data sparse. Problems are shared across the planet. The physics of weather and climate are the same no matter where you are on the planet. But at the same time, there’s uniqueness at hyperlocal scales.
This comment addresses the global-local paradox in weather modeling and proposes transfer learning as a solution to bridge data inequality between regions. It recognizes both the universality of physical laws and the uniqueness of local conditions.
This comment provided a technical pathway for addressing data scarcity issues raised earlier and connected with the discussion about hyperlocal forecasting. It influenced the conversation toward collaborative, global approaches to AI model development while maintaining local relevance.
Speaker: Karthik Kashinath
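The shared-physics, local-uniqueness framing lends itself to a small worked example. Everything below is an illustrative sketch of my own, not session material: two regions share one underlying relationship, region B adds a small local correction, and fitting region B's 15 samples while shrinking toward the model pretrained on data-rich region A (the transfer) beats fitting B from scratch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared "physics": one relationship holds everywhere; region B adds a
# small hyperlocal correction on top of it.
w_shared = rng.normal(size=6)
w_local_B = w_shared + 0.1 * rng.normal(size=6)

def make_region(n, w):
    X = rng.normal(size=(n, 6))
    return X, X @ w + 0.05 * rng.normal(size=n)

X_A, y_A = make_region(5000, w_shared)  # data-rich region
X_B, y_B = make_region(15, w_local_B)   # data-sparse region

def ridge(X, y, lam, w0=None):
    # Ridge regression that shrinks toward w0 (the transferred prior).
    w0 = np.zeros(X.shape[1]) if w0 is None else w0
    A = X.T @ X + lam * np.eye(X.shape[1])
    return w0 + np.linalg.solve(A, X.T @ (y - X @ w0))

w_A = ridge(X_A, y_A, 1.0)               # pretrain where data is rich
w_scratch = ridge(X_B, y_B, 1.0)         # small data, no transfer
w_transfer = ridge(X_B, y_B, 1.0, w_A)   # small data, shrunk toward w_A

X_t, y_t = make_region(1000, w_local_B)  # held-out test set in region B

def mse(w):
    return float(np.mean((X_t @ w - y_t) ** 2))

print(mse(w_scratch), mse(w_transfer))  # transfer should score lower
```

Shrinking toward the data-rich model rather than toward zero is the simplest form of the transfer the speaker describes; the same idea scales up to initializing a neural weather model from a pretrained one.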
Overall Assessment

These key comments collectively transformed the discussion from a technical showcase of AI capabilities to a nuanced exploration of practical challenges and solutions. The elephant-ant metaphor established the complexity challenge, while the Jugaad concept introduced cultural and human dimensions. The ‘tragedy of commons’ reframing shifted focus from technology to value creation, and the technical insights about small data fine-tuning and transfer learning provided concrete pathways forward. Together, these comments created a comprehensive framework that balanced technical innovation with practical deployment, economic sustainability, and social relevance – making the discussion uniquely valuable for understanding how AI can address India’s specific weather and climate challenges while contributing to global solutions.

Follow-up Questions
How can we predict cloudbursts and other extreme weather events that are currently unpredictable?
This is a critical gap in current forecasting capabilities that affects disaster preparedness and early warning systems
Speaker: M. Ravichandran
How can we effectively blend physics-based numerical models with AI for better local weather prediction?
Integration of traditional modeling approaches with AI is essential for improving forecast accuracy at fine scales
Speaker: M. Ravichandran
How small can the data be for fine-tuning large foundation models for specific use cases?
Small data fine-tuning is a breakthrough needed to make foundation models more applicable to specific regional or local problems
Speaker: Praphul Chandra
How can we create benchmark datasets and metrics for hyperlocal scale weather prediction?
Standardized benchmarks are needed to drive AI development for hyperlocal applications, similar to how ImageNet drove computer vision advances
Speaker: Karthik Kashinath
How can we effectively transfer learning from data-rich regions to data-sparse regions while maintaining local specificity?
This addresses the global challenge of uneven data availability while leveraging shared physics across regions
Speaker: Karthik Kashinath
How can AI help predict glacial lake outburst floods and improve early warning for cascading multi-hazard scenarios?
These complex, cascading disasters require advanced prediction capabilities that current systems cannot provide
Speaker: Manish Bhardwaj
How can we develop voice-based AI frameworks that provide personalized resilience guidance for different user types (farmers, urban dwellers, etc.)?
Consumer-facing applications need to translate complex weather information into actionable guidance for different user segments
Speaker: Sandeep Singhal
How can we map human behavioral elements (Jugaad) into predictive models using AI?
Human adaptation and innovation in response to weather events is difficult to model but crucial for accurate predictions
Speaker: Dev Niyogi
How can we develop decision-specific digital twins rather than comprehensive weather models?
Moving from general weather prediction to decision-support systems requires targeted modeling approaches
Speaker: Dev Niyogi
How can climate risk assessment be integrated with insurance products in the Indian context?
Insurance applications represent a monetizable use case for climate AI that could drive private sector investment
Speaker: Audience member
How can we validate and verify AI/ML weather forecasts to build trust in the system?
Trust in AI-based forecasting systems is crucial for operational adoption and public acceptance
Speaker: M. Ravichandran
How can we open up weather data to enable broader participation from different disciplines in AI/ML development?
Cross-disciplinary collaboration could bring new perspectives and solutions to weather prediction challenges
Speaker: M. Ravichandran

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Scaling Enterprise-Grade Responsible AI Across the Global South


Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions


Waves of infrastructure Open Systems Open Source Open Cloud


Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion centered on Proximal Cloud’s launch in India and the future of AI infrastructure, featuring presentations from Renu Raman and several industry partners. Raman, drawing from his experience at Sun Microsystems, outlined how computing transitions occur every 15-30 years, arguing that we are currently experiencing a shift from CPU-only systems to heterogeneous AI computing similar to the distributed systems revolution of the 1990s-2000s. He emphasized that AI will impact 95% of work compared to the surface-level productivity improvements of the SaaS era, creating massive demand for computing infrastructure.


Proximal Cloud’s strategy focuses on bringing compute closer to data through partnerships with AMD for CPU-GPU hybrid systems and collaboration with UC San Diego for research in health sciences and education. The company aims to address India’s need for extremely low-cost, population-scale computing infrastructure. Several partners presented their integration with Proximal’s platform, including PharmEx’s agricultural AI solutions using sensors and autonomous tractors, Divium’s model optimization platform that reduces AI costs by 30-60%, and Instant System’s venture building capabilities for AI startups.


The panel discussion explored whether India could produce major technology corporations like NVIDIA or SAP through this AI transition. Participants noted that India’s planned 10-gigawatt AI infrastructure buildout could generate $250 billion in hardware demand, potentially supporting an entire ecosystem of systems companies. However, they acknowledged the funding gap between Indian startups (receiving crores) versus global AI companies (investing billions per engineer). The consensus was that India’s opportunity lies in combining domain expertise with advanced technology to achieve higher gross margins than traditional service models, requiring sustained long-term investment similar to ISRO’s development approach.


Keypoints

Major Discussion Points:

Technology Infrastructure Evolution and AI Computing Demands: The discussion centers on the massive shift from traditional CPU-based computing to AI-driven heterogeneous systems, with speakers emphasizing the need for new distributed computing architectures to handle the exponential growth in AI workloads. They highlight the transition from training-focused to inference-focused computing systems.


India’s AI and Semiconductor Opportunity: A significant focus on India’s potential to become a major player in the AI infrastructure space, with discussions about the country’s plan to scale from less than 1 gigawatt to 10 gigawatts of AI computing capacity, representing a $250 billion hardware opportunity and the potential for “population-scale computing.”


Proximal Cloud’s Business Model and Partnerships: Renu Raman presents Proximal Cloud’s approach to bringing compute closer to data through sovereign, on-premises AI infrastructure solutions, showcasing partnerships with UC San Diego, AMD, and various Indian companies across agriculture, education, and enterprise sectors.


Practical AI Implementation Challenges: Multiple speakers address real-world barriers to AI adoption, including the “90% of Gen-AI pilots never make it to production” problem, issues with model selection, cost optimization, data security, hallucinations, and the need for reliable inference systems that can operate at sub-second response times.


Investment and Scaling Challenges in India: The panel discusses the significant funding gap between Indian startups (receiving crores) versus global tech companies (investing hundreds of millions per engineer), emphasizing the need for sustained, long-term investment similar to India’s ISRO model to build competitive AI and semiconductor companies.


Overall Purpose:

The discussion serves as a launch event for Proximal Cloud’s AI infrastructure offerings in India, combining company introduction with broader industry analysis. The goal is to position the company within the context of major technology shifts while demonstrating practical applications through partner showcases and addressing the strategic opportunity for India to develop sovereign AI computing capabilities.


Overall Tone:

The tone is predominantly optimistic and forward-looking, with speakers expressing excitement about India’s potential in the AI space. The discussion maintains a professional, technical atmosphere throughout, with Renu Raman setting an educational tone through historical technology parallels. While acknowledging significant challenges (funding gaps, technical hurdles, market realities), the overall sentiment remains bullish on India’s prospects. The tone becomes more interactive and collaborative during the Q&A segments, with industry participants sharing practical insights and reinforcing themes of opportunity and innovation potential.


Speakers

Speakers from the provided list:


Renu Raman – Main presenter, hardware/software expert with background at Sun Microsystems, founder/leader at Proximal Cloud, focuses on distributed systems and AI infrastructure


Jensen Huang – CEO of NVIDIA (quoted in video/audio clip about data processing and accelerated computing)


Michael Dell – CEO of Dell Technologies (quoted in video/audio clip about AI factories and enterprise customers)


Lalit Bhatt – Director heading India office for PharmEx, works in agricultural AI technology with sensors, imaging, and autonomous systems


Sandeep Kumar – From Instant System, Silicon Valley-based venture builder, works on AI conversation software and financial domain solutions


Abhishek Singh – Founder of ZetaVault, specializes in LLM acceleration and custom silicon for AI inferencing


Audience – Multiple audience members asking questions, including one identified as Arya Bhattacharjee from Infosys (Senior VP driving semiconductor and AI vision)


Additional speakers:


Bharat Jain – Director at Divium (mentioned in introduction but actual presentation was given by someone else discussing Divium’s inference optimization platform)


Full session reportComprehensive analysis and detailed insights

This comprehensive discussion centered on Proximal Cloud’s strategic launch in India and the broader transformation of AI infrastructure, featuring detailed presentations from industry leaders and extensive analysis of India’s potential role in the global AI economy. The event served as both a company introduction and a forward-looking examination of how distributed computing architectures must evolve to meet the unprecedented demands of artificial intelligence at population scale.


Technology Evolution and Infrastructure Context

Renu Raman opened the discussion by positioning current AI developments within historical technology cycles, drawing from his extensive experience at Sun Microsystems during the semiconductor boom. He argued that major technology shifts occur predictably every 15-30 years, with semiconductors driving innovation in the 1980s-90s through Moore’s Law, followed by the cloud phenomenon of the last two decades. The current AI revolution represents a convergence of bottom-up innovation from companies like NVIDIA at the silicon level and top-down innovation from language models and higher-order AI functions.


Raman emphasized the philosophical foundation underlying Proximal’s approach, referencing the principle that “people who are serious about software should make their own hardware” and its corollary: “people who are serious about AI should make their own cloud.” This philosophy drives the company’s focus on sovereign, distributed computing solutions that bring AI capabilities closer to where data resides.


The discussion highlighted a critical transition occurring in AI workloads, moving from training-focused systems requiring massive scale-up architectures to inference-focused computing that can be more distributed. Raman noted that while the SaaS era delivered productivity improvements, AI will impact 95% of work, creating economic effects far beyond previous technology waves. This shift has driven infrastructure spending from $50 billion to $300 billion, with projections reaching even higher levels as AI deployment scales globally.


Jensen Huang’s insight that “data processing is a CPU job” was specifically highlighted by Raman as crucial to understanding why hybrid architectures combining CPUs and GPUs will be essential for practical AI deployment, particularly when dealing with enterprise data that requires complex processing before AI analysis.


Proximal Cloud’s Integrated Platform Approach

Proximal Cloud’s strategy emerged as a comprehensive response to identified market needs, focusing on sovereign, on-premises AI infrastructure solutions. The company’s partnership with AMD provides access to both x86 CPU capabilities and GPU roadmaps that enable hybrid architectures supporting traditional data processing alongside AI workloads. Raman noted that AMD’s higher memory capacity GPUs can support substantial workloads for most customers, enabling more distributed systems approaches.


The collaboration with UC San Diego adds research credibility and practical application development across education, research, and industry contexts. This partnership enables work spanning hardware-level optimization, compute kernel development, and real-world application validation. Renu demonstrated an education use case during the presentation, though technical issues prevented a complete showing.


The platform addresses the critical challenge that Michael Dell identified in his video message: with more than 90% of enterprise data remaining on-premises and continuing to be generated locally, the traditional approach of moving data to cloud-based AI becomes impractical. This reality necessitates bringing AI capabilities to where data resides.


Partner Ecosystem and Real-World Applications

The partner presentations demonstrated practical applications across diverse sectors. Lalit Bhatt from PharmEx presented their agricultural solutions, focusing on sensor networks for precision irrigation and farming applications. Their work addresses cost pressures in agriculture where efficiency improvements must maintain farmer affordability while delivering measurable value.


Bharat from Divium addressed a critical deployment challenge: 90% of generative AI pilots never reach production, not due to poor demonstrations or weak models, but because of undefined quality standards, unpredictable costs, and constantly changing model selection criteria. Divium’s platform demonstrates 30-60% cost reductions through intelligent routing and quality-based model selection, directly addressing these production deployment challenges.
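The routing idea behind such savings can be sketched abstractly. The catalogue, prices, and quality scores below are invented for illustration and do not describe Divium's actual platform: each request goes to the cheapest model whose expected quality clears the caller's bar, reserving the expensive model for hard cases.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # dollars; illustrative figures only
    quality: float             # expected quality score in [0, 1]

# Hypothetical catalogue, ordered cheapest-first.
CATALOGUE = [
    Model("small-8b", 0.0002, 0.72),
    Model("mid-70b", 0.0010, 0.85),
    Model("frontier", 0.0150, 0.95),
]

def route(min_quality: float) -> Model:
    """Cheapest model whose expected quality clears the bar."""
    for m in CATALOGUE:
        if m.quality >= min_quality:
            return m
    return CATALOGUE[-1]  # nothing qualifies: fall back to the strongest

def cost(requests: int, tokens_per_request: int, min_quality: float) -> float:
    m = route(min_quality)
    return requests * tokens_per_request / 1000 * m.cost_per_1k_tokens

# A month of easy queries sent to the frontier model vs. routed down.
baseline = cost(1_000_000, 2000, 0.95)
routed = cost(1_000_000, 2000, 0.70)
print(baseline, routed)  # routed traffic is dramatically cheaper
```

Real routers estimate per-request difficulty rather than taking a fixed quality bar, but the economics (cheap models absorbing easy traffic) work the same way.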


Sandeep Kumar from Instant System clarified their role as a venture builder rather than an incubator, focusing on creating companies that can reach mid-market and enterprise customers. He described their work on financial AI systems requiring 99% reliability without hallucinations, with advanced architectures managing data privacy and access control. These requirements demonstrate that production AI systems must meet enterprise-grade reliability standards far exceeding typical pilot project expectations.


India’s Strategic AI Infrastructure Opportunity

The panel discussion positioned India’s AI infrastructure development as both a unique challenge and unprecedented opportunity. Arya Bhattacharjee, driving semiconductor and AI vision for Infosys, argued that India’s success lies not in competing directly with premium players like Palantir or accepting traditional service margins, but in achieving higher value through domain expertise combined with advanced technology.


India’s requirement for extremely low-cost computing at population scale—serving 1.4 billion people—presents engineering challenges that could drive breakthrough innovations with global applicability. Abhishek Singh from ZetaVault, working on LLM acceleration, posed the specific challenge of serving 1.5 billion people at ₹200 per month, highlighting the extreme cost constraints that could drive innovation.
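A back-of-envelope calculation shows how extreme that constraint is. Every number below except the Rs 200 figure is my own assumption (FX rate, query volume, infrastructure share), not a figure from the panel:

```python
# What does serving AI at Rs 200 per user per month leave per query?
users = 1_500_000_000
arpu_inr = 200                      # monthly revenue per user, from the talk
usd_per_inr = 1 / 84                # rough FX assumption
queries_per_user_per_month = 150    # ~5 queries a day, assumed
infra_share = 0.30                  # revenue fraction spent on compute, assumed

monthly_revenue_usd = users * arpu_inr * usd_per_inr
budget_per_query_usd = (monthly_revenue_usd * infra_share
                        / (users * queries_per_user_per_month))
print(f"{budget_per_query_usd * 100:.3f} cents per query")
```

Under these assumptions the entire compute budget is roughly half a US cent per query, orders of magnitude below typical frontier-model API pricing, which is the point the panel was making.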


Arya provided crucial context from semiconductor manufacturing, noting that modern fabs represent $10 billion facilities where every day saved represents $10 million in value. With fabs generating massive amounts of data requiring real-time AI processing to optimize yields, the intersection of AI and semiconductor manufacturing presents specific use cases where India’s software expertise can deliver immediate, measurable value.


The discussion emphasized India’s opportunity to follow a “make in India for global” model rather than China’s more closed approach, potentially enabling Indian companies to compete globally on innovation rather than just cost.


Technical Performance and Economic Challenges

Renu proposed ambitious performance targets, suggesting a 120-millisecond response time standard for any query, drawing parallels to Google’s historical 20-millisecond standard that drove massive infrastructure investment and innovation. Achieving this target at population scale would require breakthrough advances in both computing resources and algorithmic efficiency.
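The scale implied by a 120-millisecond target can be made tangible with Little's law. All inputs here are assumptions for illustration (query volume, peak factor, per-server concurrency), not figures from the talk:

```python
# Rough capacity sketch for a population-scale, 120 ms service.
users = 1_400_000_000
queries_per_user_per_day = 5        # assumed
peak_factor = 3                     # peak vs. daily-average traffic, assumed

avg_qps = users * queries_per_user_per_day / 86_400
peak_qps = avg_qps * peak_factor

latency_s = 0.120                   # the proposed response-time budget
concurrency_per_server = 64         # in-flight requests per server, assumed

# Little's law: sustainable throughput = concurrency / latency.
per_server_qps = concurrency_per_server / latency_s
servers = peak_qps / per_server_qps
print(round(avg_qps), round(peak_qps), round(servers))
```

Even with these generous assumptions, hundreds of servers are needed just to hold the concurrency, before accounting for model size, redundancy, or geographic distribution.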


The economic scale of India’s AI ambitions became clear through specific projections. India’s planned expansion to 10 gigawatts of AI computing capacity represents a massive hardware opportunity that could support multiple system companies and sustain an entire semiconductor ecosystem, potentially enabling India to move beyond traditional outsourcing models to capture higher-value segments of the AI value chain.


However, significant structural challenges exist in building world-class AI and semiconductor companies from India. The contrast between available startup funding and the hundreds of millions to billions required for competitive AI infrastructure companies highlights the scale mismatch between available capital and global competition requirements.


Investment and Ecosystem Development

The funding discussion revealed both challenges and opportunities. While traditional venture capital may be insufficient for the scale required, India’s public markets value technology companies favorably for certain types of infrastructure businesses. The discussion emphasized that going public should be viewed not as an exit strategy but as a mechanism for raising capital to scale businesses over the long investment horizons required for infrastructure companies.


The ISRO model emerged as a template for sustained technology development, demonstrating how continuous support over decades can build world-class capabilities despite initial failures. This suggests that similar sustained investment in AI and semiconductor technologies could yield comparable results.


Future Implications and Strategic Questions

Several critical questions emerged that will shape India’s AI infrastructure development trajectory. The technical challenge of achieving sub-second response times at population scale while maintaining extreme cost efficiency remains unsolved, requiring breakthrough innovations in computing architecture and algorithmic efficiency.


The economic development question of how India can capture projected hardware opportunities locally, rather than simply importing solutions, requires coordinated policy and investment approaches. Success will likely require new funding models that combine government support, private investment, and public market access in ways that sustain long-term technology development cycles.


The discussion concluded with recognition that India’s AI infrastructure opportunity extends beyond domestic markets. Success in solving population-scale computing challenges at extreme cost points could establish technological leadership with global applications, potentially enabling Indian companies to compete not just on cost but on innovation and capability. This transformation from service provider to technology leader represents both the opportunity and the challenge facing India’s AI ecosystem development.


Session transcript: Complete transcript of the session
Renu Raman

Announcements and a lot of activities going on here this week. Excited about it. We are excited about introducing what we do, and what we do more in the context of India. We just launched our offering, and we’ll be talking more about what we do with our partners in the coming weeks and months. But today, I’d like to introduce ourselves. Before that, we want to set the context of where we fit in, both in the industry trends and the ecosystem, and what category we go after from an enterprise private cloud infrastructure. Then we’ll get into sharing some of the partners that we work with, and a Q&A at the end of it, with presentations from Bharat Jain and from ZetaVault.

We’ll have an interactive Q&A on some key top-three questions that we think need to be answered. With that, let me start with the first. I want to thank our sponsors and our collaborators and partners at UC San Diego, where they have an initiative for public-private partnership for AI for education, AI for research, and AI for industry, and we are one of the early industry partners. There’s a newly constituted data science and data center institute called the School of Computing, Information Sciences, and Digital Sciences, and we’ll talk a little bit more about it downstream. But this collaboration enables us not only to work on technologies, but also to look at key use cases, particularly in health sciences, because San Diego has one of the largest health science ecosystems, both a hospital system as well as clinical research and a variety of health and biotech research.

With the thesis that fundamentally computing is going to be driven by biology and health, it’s a very key partnership that we hope to work on going forward. With that, let me step back. This is my standard slide I use in any presentation in terms of long-term reminders about what happens in technology. So where we fit in: we’ll walk through for the next 20, 30 minutes what we are doing from a systems-innovation standpoint, but the systems innovation is going to be punctuated, or represented, in the context of the technology shifts that have occurred and will occur as we go forward. So the simple reminder is: we, as humanity, overestimate what can be done in two years, but we underestimate what can be done in 10 years.

You can go back in history: look at self-driving cars, look at Neuralink. I remember a slide I had put up at UC Berkeley, at a conference about programming languages and productivity languages, with a very tongue-in-cheek thought: you just have to think and write and get confused. And I thought, well, I’m going to call it out; that was in 2014. I’ll put the slides out later. I thought it would be science fiction, never to happen for hundreds of years. But guess what? You can think, you can put in a Neuralink, and probably have Cursor generate code for you today. That I never thought about in 2014. So never underestimate what will happen. Big technology shifts occur every 30 years, 15 years, 7 years.

But the key thing is that semiconductors drove the technology innovation of the 80s and 90s, thanks to Moore’s Law, and the cloud phenomenon happened in the last two decades. I see the pattern again now: NVIDIA is innovating tremendously from the silicon side up, and of course there are innovations coming top-down, from the use cases, the language models, and higher-order functions in AI. Both are coming at the same time, together. A third bullet I would add: people who are serious about software should make their own hardware. The corollary is that people who are serious about hardware should also make their own software.

So I’m a hardware guy who’s done software, and in this venture I’m doing software first and going to the hardware later, a kind of reverse model. This is the last thing I’ll say about myself. My professional life was shaped, luckily, though I didn’t realize it at the time, by the period between the 1980s and 2000, the exponential peak of Moore’s Law. I happened to be part of that semiconductor innovation cycle, having developed and delivered a number of world-class microprocessors. Today we talk about models. Back then, there were only four or five teams that could do microprocessors; these were hard problems solved by very small teams of about 150 people. If you look at foundation models today, it’s the same characteristic. They are hard problems, and of course it takes a lot more money, a billion dollars and lots of GPUs, but you still need roughly the same 150 people to build the models. It’s not like everybody can do it. So there is a similarity between what happened with microprocessors in the 90s and what I see today in model building: the same level of complexity, where you need the best and brightest. It’s not my number; Sam Altman has put it at roughly 120 people. And you need to give them the right computing resources.

We also need a lot of computing resources to build the models. So with that, let me start on the next wave, where we hope to drive innovation and disrupt systems building going forward. The context for why it’s economically interesting and valuable: as everybody knows, global GDP has gone in the last 20 years from $33 trillion to almost $100 trillion, and by all accounts GDP could improve by 2x to 4x in the next 20 years. The SaaS era was really a productivity improvement; it only scratched the surface of productivity, whereas AI is going to impact 95% of work. So the opportunity is much bigger, the impact is much bigger, the blast radius is much bigger than in the last 20 years.

That’s why much more computing is also needed. We’ve gone from about $50 billion of infrastructure spend in 2000 to roughly $250 to $300 billion of capital expenditure on infrastructure today. Power in, capital spent: every dollar of power you spend ends up being $3 to $5 of capital for compute, memory, network, and storage, and from there you build the upper layers of software and then applications. So that $50 billion became $300 billion, but if you look at all the spending we’re already at $400 to $500 billion, and by all accounts in the next 5 to 10 years it will be almost $2 trillion of spend.
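The power-to-capital rule of thumb above can be turned into a quick back-of-the-envelope calculation. The annual power budget used here is an assumed, illustrative figure, not one from the talk:

```python
# Back-of-the-envelope: power spend -> implied infrastructure capex,
# using the "$3 to $5 of capital per $1 of power" rule of thumb.
# The annual power spend below is an assumed, illustrative input.

def implied_capex(power_spend_usd, low=3.0, high=5.0):
    """Return the (low, high) capital range implied by a power budget."""
    return power_spend_usd * low, power_spend_usd * high

annual_power = 100e9  # assumption: $100B/year spent on data-center power
lo, hi = implied_capex(annual_power)
print(f"${lo/1e9:.0f}B to ${hi/1e9:.0f}B of compute/memory/network/storage capex")
```

The multiplier pair is the only input taken from the talk; everything else is a placeholder to show the shape of the estimate.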

That creates, obviously, a big demand-supply gap. The great thing about programming is that every time there is a new layer of abstraction, programming gets simpler, which brings more people to the party. What LLMs and transformer models have done is bring everybody to the point of being able to program. We can all think algorithmically, but not everybody could program. Finally we have a tool that lets you program in natural language: your mother, your grandmother can also talk to the computer and tell it what steps to take, and it will do the steps for you, or tell you what steps to do. That’s the fundamental shift, which means at population scale you’re going to have computing for everybody. That creates a huge gap. It’s not even 1000x; as Jensen would say, it’s a billion x, and that’s absolutely true. But it creates a big technology gap, a supply gap, and increasingly, because of models, languages, and data, a sovereignty gap as well. That’s the theme of the conference, and it continues to drive a tremendous amount of demand. Now, we have seen a little bit of this before. I have been through the first two cycles of innovation: in semiconductors in my first job at Sun Microsystems, then the dot-com era, and now this one. There was always a demand-supply gap in each of these transitions, and we solved it one way.

It doesn’t mean you can solve it the same way, but we are at the crux of solving it in a similar way, with a different set of boundary conditions, if you will. What we solved between 1990 and 2000 was the shift from clock rate and single CPUs to multicore, threaded, and distributed systems, and that became the cloud phenomenon. I have a slide later that shows that transition. I’ll probably skip this slide; I think everybody knows we need lots of power. One interesting point: India is going from almost nothing, less than a gigawatt, to about a 10-gigawatt buildup, while the U.S. is going from 25 to 125 gigawatts, and similarly China and other regions.

EMEA, on a comparative basis, is going to be a lot less. But we already see the need for AI-ready, geo-local data centers; everybody is building out. And what is the infrastructure, what is the architecture, to support that? There are certainly reference architectures inside the hyperscalers: Google has a TPU-based infrastructure, AWS has Trainium plus general-purpose computing, Microsoft has Maia, and of course there is NVIDIA, and AMD. But increasingly, over time, you want an open, multi-vendor strategy, and that’s where we’ll talk more. So why do I believe these transitions and distributed systems are drivers of new innovation up and down the stack? This is not new; it has happened throughout history. The VAX-11/780 was disrupted, at that time, by the PC, but on the enterprise side more so by Sun and the workstation. If you think of the first distributed system of the modern era, it was at Sun Microsystems, where Ethernet was used to build a distributed system with the Network File System. That was version one. Over time, like evolution, you gain more mass, more momentum, more weight in your capabilities, and you end up building big, monstrous machines like the E10K that drove the internet and the dot-com era. But that was also an Achilles’ heel, because it was not going to enable the much bigger scale people had to build at. Google was probably the epitome of the next big shift; I’ll talk about that. We see a similar thing today: we’ve gone from CPU-only, dual-socket x86 memory clusters to heterogeneous compute, and also to fairly large scale-up systems. The interesting transition today, as you’ve seen in the news lately with the Grok acquisition by NVIDIA and others, is that there’s clearly a separation between training workloads and inference workloads, and the kinds of systems you want to support each, because inference is going to drive a lot more of the compute. One way I think about it: inference, and workloads related to biology and healthcare, are going to be the drivers of computing the way graphics was in the 1990s.

So this is back again to reinforce the point that between 1994 and 2005 we saw the shift from version 1.0 of distributed systems to version 2.0, which was open source. The first 20 years were open systems; then open source came and enabled a new way to build distributed systems, because economically it removed the cost of the middle layers of software. Everybody got access, in this case to Linux versus Solaris. That also made it possible to build truly hundreds of thousands of machines in a single cluster, and out of it came Borg, Kubernetes, a whole bunch of distributed file systems, all kinds of innovations. So the proposition here is that I think we are at the cusp of similar things for inference-scale computers.

Just a reminder: the punctuation that happens every time, if you look back in history, turns out to be Ethernet. Yes, the network is the computer, but more important, 10-megabit Ethernet was the onset of replacing big mainframes and minicomputers like the VAX with workstations and networks of workstations. Then, right around the 2000-to-2002 timeframe, 10-gigabit Ethernet arrived, and along with multicore it enabled the new distributed building block.

We are at the same point. We have 800-gigabit Ethernet, going to probably terabit Ethernet networks, and that, hopefully, will be the enabler; that’s the bet we are making. The other elements of the system are the network and then the memory. Do you build a full scale-up system at data-center scale? You certainly need that for training, for backpropagation and the forward pass, but inference can be much more distributed and shardable, and it’s time to rethink what kind of systems you want for an inference-dominated infrastructure. The other dimension to think about: we’ve gone from a single memory type to multiple memory types. Do we need four different types of memory to deal with the various layers, or just two, or one?

There’s a lot of debate in the technical community, but that’s a critical decision that will be made. So, a way to think about this: we think of the entire system not in terms of flops and GPU compute. GPU compute and CPU compute are needed, but what really matters is what the memory hierarchy, the memory system, looks like. There is a physical view, because memory dominates the cost function and the power function, but at the same time, from a performance standpoint, you are caching lots of different data for computing: think of the KV caches on the LLM side, and the in-memory representations of much of the data. That’s a layer rich in technical innovation, where we hope to have influence as well as make a mark.
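To make the KV-cache point concrete, here is a rough sizing sketch. The model shape used (80 layers, 8 grouped-query KV heads, head dimension 128, FP16) is an illustrative assumption, not a configuration discussed in the talk:

```python
# Rough KV-cache sizing for transformer inference. The model shape below
# is an illustrative assumption, not a configuration from the talk.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch=1, bytes_per_elem=2):
    """Bytes to cache keys and values for `batch` sequences of `seq_len` tokens."""
    # Two tensors (K and V) per layer, each shaped [batch, n_kv_heads, seq_len, head_dim].
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# One 4096-token context at FP16:
size = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128, seq_len=4096)
print(f"{size / 2**30:.2f} GiB per sequence")  # 1.25 GiB per sequence
```

Multiply that per-sequence figure by the number of concurrent sessions and it becomes clear why the caching layer, not flops, often dominates inference serving cost.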

Then the large part is the logical view of memory, especially deep context. You want to go from session to session, location to location, and keep your memory state; you want to be able to switch models and carry some of that memory state over. All of this consumes various layers of the logical and physical memory hierarchy. So that’s how we think about it. Net, putting all this together, we are taking a bet on Ethernet, taking a bet on memory, and building inference-scale compute for population scale, in this case India, but also in certain key verticals like health sciences and others. There’s another important element we want to highlight.

Let me take a quote from Jensen.

Jensen Huang

One of my favorite applications is just good old-fashioned data processing, structured data and unstructured data. And very soon we’re going to announce a very big initiative in accelerated data processing. Data processing occupies the vast majority of the world’s CPUs today; it still completely runs on CPUs. If you go to Databricks, it’s mostly CPUs. If you go to Snowflake, mostly CPUs. SQL processing at Oracle, mostly CPUs. Everybody’s using CPUs to do SQL and structured data.

Renu Raman

So taking a cue from what he’s saying: historically, databases and SQL all run on CPUs, and that will remain the case for a variety of reasons. That’s an important consideration in why we believe the new systems we compose going forward need a happy blend. There are ways to design systems for the hyperscalers, but there is also a whole category of use cases and customers on the private side who don’t need 100,000 machines but smaller-scale systems, and those need a happy blend of CPUs and GPUs. That’s the main point. In that context, we have taken a position to work in partnership with AMD, because they have the x86 CPU assets and a compelling GPU roadmap, as well as an architecture that supports both, on the network side and the memory side. They have higher memory capacity for LLMs: it started with 256 GB of HBM, which supports at least 128-billion-parameter models, and it’s going to 288 GB and 512 GB in no time, which means we can fit fairly sizable models. That enables one to apply classical distributed-systems principles, with a single node that captures most of the workload for most customers, and to optimize on that.
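The fit between 256 GB of HBM and a 128-billion-parameter model follows from simple arithmetic at 16-bit (2-byte) precision; this sketch ignores KV-cache and activation overhead, which need extra headroom in practice:

```python
# Weight-memory arithmetic: parameters x bytes-per-parameter.
# 2 bytes/param corresponds to FP16/BF16 weights; this ignores the
# KV cache and activations, which need additional headroom.

def weight_gb(params_billion, bytes_per_param=2):
    """Approximate GB of memory needed just to hold the model weights."""
    return params_billion * bytes_per_param  # (1e9 params * bytes) / 1e9 = GB

print(weight_gb(128))                       # 256 GB: matches the 256 GB HBM figure
print(weight_gb(128, bytes_per_param=1))    # 128 GB if quantized to 8-bit
```

The same arithmetic explains the appeal of the 288 GB and 512 GB parts: each step up admits a larger model, or the same model plus more cache headroom, on a single node.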

So, coming back, before we get into what we do in Proximal, I want to emphasize the partnership with UC San Diego. They have, as I said, a supercomputing data center for research for NSF and DARPA, where we are doing some of the work at the hardware level, at the middle layer in the compute kernels and the inference engines, and in the use cases. As I said, there is a data science institute, AI for education, to transform the undergraduate and graduate programs using the same tools, with advanced research capability, as well as work in the health sciences. So that sets the motivation.

Now, the next phase: what we’re doing in Proximal Cloud. We want to go into specifically what we are launching in the four layers, the key components of what we are building and delivering to many cloud partners, starting in India. There’s also the “why India” question. One aspect is that India demands extremely low-cost, inference-scale compute at population scale, and that’s a challenge. We are really excited to work on that problem to start with. So, the first thesis: why do we need compute other than the cloud? I think the best way is to quote Michael Dell telling you what he sees.

Michael Dell

Yeah, so in the last year we delivered a little over 3,000 of these Dell AI factories, and those are increasingly going to enterprise and commercial customers that want to bring the AI to their data, not the data to the AI. There’s just a ton of data that is still on-prem and being generated on-prem.

Renu Raman

If you have a particular question in a domain you understand, we can try it out after this. So with this we enable interactive learning for the students, contextualized intelligence, and, of course, instructor empowerment. The way it will look and feel is like a Jupyter notebook; on the extension side will be the research content, the arXiv papers, for them to use. It’s an add-on; it doesn’t have to be integrated. It’s a commercial AI chat, if you will. The next example would be MRI images. Unfortunately, I’m not able to log in remotely to that right now, and the other one I had as a local copy, so I’m not able to show the MRI images right now.

So at this point, I want to summarize what Proximal is. The word Proximal means we bring compute closer to your data. It means it’s sovereign to the nation, the region, or the business that cares about it. It also means we bring compute closer to memory, and compute closer to where the business is. That was the thesis. We are not doing this alone; we are doing it with technology partners, as well as some key customers and partners. So with that, let me give an example use case. I’ll bring up Lalit Bhatt, director heading the India office for PharmEx, a key partner.

Lalit Bhatt

Thanks, Renu. So what I’ll do, and thanks to Proximal Cloud for giving us the stage here, is first talk a little about what PharmEx stands for, and then why, in this space, local compute and all these things are becoming important. PharmEx is basically a comprehensive AI stack. If you see on the left-hand side, we have a lot of in-field sensors: a complete, comprehensive platform with not only soil-moisture sensors but dendrometers and multiple other sensors. We also have imaging capabilities, where we can take images using satellites and drones. And we have an autonomy stack, and we have just now acquired an autonomous electric tractor.

Basically these are pretty big machines. They might look like Transformers, but they are almost 70 to 80 horsepower machines, and we are putting our autonomy stack on them so they will operate completely autonomously. So what I’ll do is run a small clip. Again, I think this is probably very standard; everyone understands that to do AI you need data. Then what becomes important is how efficient you are at running inference over that data. We are dealing with huge amounts of data, and that’s where we are looking into technologies that can reduce our cost. Everyone understands that in agriculture it is very difficult to ask a lot of money from the farmer, so we have to make our operations more efficient and effective at dealing with a large amount of data and running inference on top of it. Essentially, we get a lot of data from both the imaging side and the sensor side, and then we have all our engines running, which leads to diagnostics and recommendations. This is just an example of the kind of thing we do with our customers.

You see here things like complete autonomous irrigation scheduling. A lot of data points go into those models to create the schedules: anomaly identification, crop stress, yield prediction, frost prediction, and we have even worked on soil percolation models. It depends on which sensors you take. In India, I can tell you, we sell a two-foot probe with four sensing points; it goes two feet down, and with the whole controller unit it sells at about 45,000 per unit. Usually in India we recommend one unit per hectare, though that will change based on the variation of the soil and things like that; it has been a good ballpark.

So yeah, I guess that’s it. The whole theme is that we are also looking to really reduce our inference cost, and that’s where Proximal Cloud comes into the picture. Thank you.

Renu Raman

Thank you, Lalit. Okay, next we’ll have Bharat, Director at Divium, who is a key partner, as I mentioned earlier, for model selection and runtime optimization that is integrated, or will be integrated, into our stack. So, Bharat.

Bharat

Hey, good afternoon, everyone. So let’s address the hard truth out there: 90% of GenAI pilots never make it to production. Not because the demo was bad or the models were weak. It’s primarily for three reasons. Number one, quality is undefined: what’s good for one use case is not necessarily good for another, and there’s no standardized way to do evaluation or regressions. Number two, the costs are unpredictable: between the cheapest model and the best model, prices can range by 10 to 50x, and the moment your application goes into production and hits real traffic, the costs spike. There are AI engineers running experiments and trying to tune this.

But, number three, model selection is always a moving target: there are always new models coming that fix one thing and break something else. Without addressing all three, it’s very difficult for an enterprise to take their pilots to production. And that’s why we built Divium. Divium is the only inference layer built on quality. Divium defines measurable evals aligned to each use case, and it optimizes every incoming query to select the model that gives you the best quality per dollar. Divium also automates the entire model-selection process by continuously evaluating new models, deprecating old ones, and migrating you to new ones. If we find something better, we auto-upgrade without breaking production. Evals first, routing second; that’s what makes Divium different from every other routing platform out there. Divium is the only inference layer with customer-specific intelligence. Your apps can be AI agents, RAG pipelines, or multi-agent workflows, and the LLMs can be the standard OpenAI and Anthropic models, your own fine-tuned models, or deployed open-source models. We sit right in between and provide you a single API.

We are continuously evaluating each and every incoming request, routing it to the model that gives you the most optimal performance, and giving you detailed visibility into which models are working, how your agent is performing, and the overall quality. Remember, Divium is trained on your data, your agents, and your quality; there’s nothing generic out there. And this is not just theory; we’ve already proven it across multiple deployments. For India’s largest travel aggregator, which runs a conversational shopping assistant on its application homepage, we were able to cut costs by more than 60%. For one of the leading e-pharmacies of India, the customer-support chatbot needed lower latency, so we ended up reducing the cost by 30% and the latency by 30%,

leading to a case-resolution improvement of 95%. As you can see: different use cases, different industries, but the result is the same, lower cost and better outcomes. And we understand enterprise realities. You can keep your data secure; we have flexible deployment options, be it SaaS, privately hosted, or on-prem clusters. You stay in control. If you’re trying to take your AI pilots to production, feel free to reach out to us. Thank you.
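The quality-per-dollar routing the speaker describes can be sketched in a few lines. The model names, quality scores, and prices below are invented for illustration; a real router like Divium’s would evaluate quality continuously against use-case-specific evals rather than use a static table:

```python
# Minimal sketch of quality-per-dollar model routing. All names, scores,
# and prices are made up for illustration; they are not Divium's data.

MODELS = [
    # (name, eval quality score 0-1, price per 1M tokens in USD)
    ("large-closed-model", 0.92, 15.00),
    ("mid-open-model",     0.85,  1.50),
    ("small-open-model",   0.70,  0.20),
]

def route(min_quality):
    """Pick the cheapest model meeting the quality floor; fall back to the best."""
    eligible = [m for m in MODELS if m[1] >= min_quality]
    if eligible:
        return min(eligible, key=lambda m: m[2])  # cheapest acceptable model
    return max(MODELS, key=lambda m: m[1])        # best-effort fallback

print(route(0.80)[0])  # mid-open-model: meets the bar at 10x lower cost
```

The "evals first, routing second" idea corresponds to the quality floor being checked before price is ever compared.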

Renu Raman

Thank you, Bharat. So we talked first about application use cases, one in education and one in agriculture, and second, how we are bringing optimization to the system stack, some of which we do ourselves and some with our partners. Third, we want to show how we get customers, many of them mid-market and small as well as large ones, enabled on our platform. I’m happy to introduce Sandeep Kumar, who comes from Instant Systems, a venture builder we partnered with here in Delhi to take this to a variety of customers, small, medium, and large, with higher velocity. Let him describe what they can do and how we partner.

Sandeep Kumar

Hello, everyone. I’m Sandeep Kumar from Instant Systems. We are a Silicon Valley-based venture builder. We do not just build startups; we grow them. We are partners in every domain of a startup, be it engineering, product, or marketing: we give them the full blueprint to be a successful startup, and we co-invest so that we are there at every step of their journey. We are sometimes confused with an incubator, but we are a venture builder that actually helps at every step. Within Instant Systems, I am mostly responsible for a company called VanEye. We usually do not disclose the names of the companies we partner with, to protect IP and confidentiality, but I’ll share one use case to show what our capabilities are and what we have been able to build so far. This company has raised nearly $200 million in funding from top investors, including SoftBank. We are building conversational AI software here, dealing with real use cases and real challenges, mostly for financial or finance-based industries, though all of these solutions are also generic to analytics-based industries. So I am going to talk about some of the challenges that are common to every AI-based solution.

We’ve been able to identify these challenges and solve them for this particular use case. One of the biggest challenges every AI-based system faces is hallucination: LLMs always try to answer your question irrespective of how much context they actually have. We’ve been able to solve this problem to a very good extent, and our system is almost 99% reliable; it does not hallucinate. That was the biggest problem we solved. The next challenge is disambiguation: in spite of providing the context, sometimes the system is not able to disambiguate between specific terms that exist in different domains.
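One common family of hallucination guards requires an answer to be grounded in retrieved context, and abstains otherwise. The sketch below is a generic, deliberately crude illustration of that idea, not Instant Systems’ actual method; the invoice data is invented:

```python
# Generic grounding guard: abstain unless enough of the answer's content
# words appear in the retrieved context. Illustrative only; real systems
# use entailment models or citation checks, not word overlap.

def grounded_answer(question, context_passages, answer_tokens, min_overlap=0.6):
    """Return the answer if grounded in context, else abstain."""
    context_words = set()
    for p in context_passages:
        context_words.update(w.lower() for w in p.split())
    content = [w.lower() for w in answer_tokens if len(w) > 3]  # crude content-word filter
    if not content:
        return "I don't know."
    overlap = sum(w in context_words for w in content) / len(content)
    return " ".join(answer_tokens) if overlap >= min_overlap else "I don't know."

ctx = ["Invoice 1042 was paid on March 3 by wire transfer."]
print(grounded_answer("When was invoice 1042 paid?", ctx,
                      ["Invoice", "1042", "paid", "March", "3"]))
```

An ungrounded answer (content words absent from the context) falls below the overlap threshold and is replaced by an abstention, which is the behavior that keeps hallucination rates down.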

That’s also a problem we’ve been able to solve. Closely related to the theme of this event, data security and data privacy are among the major industry concerns we’ve addressed: data privacy and access control are managed at the row level, or, in more technical terms, at the object level. We’ve been able to tackle that problem efficiently, and it’s already running and working fine. Evaluation and quality management is also one of the key areas we needed to solve as part of the venture we are building.

That’s also something we’ve been able to do very efficiently. Another thing is reliability: since we are talking about financial systems, the system has to be reliable, every time. You cannot send a million dollars to someone’s account by mistake; that doesn’t work in the financial world. And you cannot report data that shows losses instead of revenue, or vice versa, because you cannot survive in that world with hallucinated or incorrect data. With our advanced architecture, we’ve been able to solve that problem as well.
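Row-level access control of the kind described can be sketched as a filter applied before any data reaches the model. The records, users, and entitlements below are invented for illustration:

```python
# Row-level access control sketch: filter records by the caller's
# entitlements before they ever reach the LLM context. The data and
# users here are invented for illustration.

RECORDS = [
    {"id": 1, "account": "acme",   "balance": 120_000},
    {"id": 2, "account": "globex", "balance":   9_500},
]

ENTITLEMENTS = {"alice": {"acme"}, "bob": {"globex"}}

def visible_rows(user):
    """Return only the records the user is entitled to see."""
    allowed = ENTITLEMENTS.get(user, set())
    return [r for r in RECORDS if r["account"] in allowed]

print([r["id"] for r in visible_rows("alice")])  # [1]
```

The design choice here is that filtering happens at the data layer, so a hallucinating or over-eager model can never leak a row the caller was not entitled to in the first place.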

There’s a long list of things we’ve been able to solve, but I’ll cut it short. The system we’ve been building is performant and reliable, and we’ve been able to keep a check on its cost and efficiency. That’s how we’ve been able to serve different audiences and customers from different niches. So that’s our theme: we are a venture builder. Please feel free to reach out to us; we’d love to talk to you about your startup. We don’t work with only a select few startups; you’re all welcome to reach out, and we can discuss everything we are

working on. Thank you so much.

Renu Raman

Thank you, Sandeep. I think that ends what we’re doing in Proximal and what our partners and customers are working on with us in the early phases. We have partners in the U.S. like UCSD in life sciences, health sciences, and education, and here in agriculture, and soon others. It turns out the Government of India’s initiatives in education, health, and agriculture coincidentally aligned; it was not planned, it just turned out that way. With that, I can go to any questions. We have a small panel session we can go to. I don’t know if Piyush has come or not, but I think there’s a question here.

We have Arya here from Infosys, a Senior VP at Infosys. Please.

Audience

Hello, excuse me. My name is Arya Bhattacharjee; I am an entrepreneur from Silicon Valley. Right now, as Renu said, I am driving the semiconductor and AI vision for Infosys from the United States and India. The reason I am here is that, as Renu said very correctly, a very important question is what the future holds for India: how can India capitalize and make a mark in this journey? There’s no short answer, but I can tell you what we are trying to do at Infosys, because if India is going to win this Semicon 3.0, or 4.0, or 2.0, I don’t know which, it has to be in software, it has to be in AI. Chip building is going to take some time.

Renu said that 80% of the data is on-premise, and on the semiconductor side this is absolutely true; more than 90% of the data is on-premise. So the whole journey of how to take that data and create solutions through an agentic AI approach, through distributed computing, and by actually owning the architecture to lower inferencing cost, is the main challenge. To answer the question Renu asked me, what’s the future of India? I think India should look at a domain. At Infosys, at least, we have selected the semiconductor domain; it is a large domain, and we have taken the leadership with some major clients right now. I can’t talk about details, but we’re using agentic AI on-premise and delivering productivity and AI solutions, improving productivity for chip making by at least 25%. Every day in a semiconductor fab that saves $10 million, benchmarked for a 7-nanometer type of technology, not even 1.9.

So with that, good luck to Renu and I look forward to collaborating. Thank you.

Renu Raman

Thank you, Arya. Now we’ll welcome Abhishek on board, but before that, to summarize: the graph underneath is, I think, the most important of the AI factors. Organizing the data layer turns out to be probably the most complicated thing, spanning the enterprise so that it can meet the intelligence, and that’s the stuff I think we’ll do a lot of. We still don’t really have deep research in a corporate context. We do; that’s what Copilot is about. But most people day-to-day do not have this. So are they just underusing the AI that exists? Yes. In fact, it’s interesting you brought that up, because to me that is the killer feature.

The biggest thing we did was take this graph that is underneath what I think is the most important database in any company: underneath your email, your documents, your Teams calls, what have you. It’s the relationships that, by the way, are the AI factors: organizing the data layer. That’s the best summary. Obviously Satya wants to do it in the cloud, and that will happen, but you also need to have it on-prem or near-prem, isolated and sovereign as well, with the same capabilities. In a sense, that’s what we bring to the enterprise, if you will. Any other questions before we go to the panel session?

Abhishek Singh

Thanks for having me here. This is Abhishek, founder of ZetaVault. We have done a lot of work on LLM acceleration, which means we offload large language models to specific chips and custom silicon, and thereby get inferencing gains. We have Renu here, who has a wealth of experience on the distributed computing side. We were supposed to have a panel discussion, but I thought I would pick his brain on the challenges and changes he has seen in the industry. So, Renu, you have been part of Sun Microsystems and the early pioneers of distributed computing. From Sun, which was maybe distributed systems 1.0, to Linux, which pretty much democratized the entire computing space and brought Linux and x86 to the point where almost every embedded device, every computation, pretty much happens on Linux.

So that was distributed systems 2.0. And now, coming to the distributed computing space with open models: open source has played a big role in the proliferation of distributed computing. What do you foresee, or what do you envision, open models doing for distributed computing? Are we going to see a distributed computing 3.0?

Renu Raman

Hello. Yeah. So that's a fundamental thesis, and in a way we are part of that continuum to some extent. If you look at it, not to take anything away from how NVIDIA designs, there is a clear bifurcation going on right now, as we speak, between training and inferencing. And then there are open source models, and a variety of customers' use cases would use and need open source models. There has always been a history of open and closed in every transition. Go back to the 80s, or look at what enabled the cloud: it was hypervisors. There was KVM and there was VMware.

The same thing will apply. There will be open models and closed models. But the way I like to think about it is that models are a new abstraction layer separating the underlying computing needs from everything above. Hypervisors separated the physical machine from the virtual machine, and operating systems, Unix at that time, also did that. In the same way, models are the abstraction layer that enables a higher degree of innovation, from both closed and open models. The closed ones will probably be innovated within OpenAI and Google, but the rest of the world will take the open source models, like what happened with Linux, and innovate. It's not just going to be an NVIDIA GPU or an AMD GPU; there could be a plethora of GPUs: country-specific, region-specific, domain-specific.

Anything can happen over time.

Abhishek Singh

That's a very wonderful take. One of the things we have been wondering about is the latency you talked about in the various scientific and other applications you're working on. When we build solutions for our customers, we build a lot of natural-language-to-query-processing kinds of solutions. We have been able to do maybe a sub-minute solution, which is acceptable to the customer, because instead of weeks or days, they are able to get answers to their queries in less than a minute. But even a minute is not sufficient, right? When you talk about really interactive queries, you want sub-millisecond, or maybe sub-second, response.

What are your thoughts on that? Is it even possible that, for a population or a large customer base like the roughly 1.5 billion people we have in India, at a very low cost, maybe 200 rupees per month, you can provide query processing at a scale that is sub-second?

Renu Raman

I think that's a very good question. Sometimes framing the problem is more important than the answer, and this is an interesting way to frame it. If you go back and look at history, why did Google succeed? A fundamental decision they made on the toolbar was that every query response had to come back in 20 milliseconds. Nobody had thought about it prior to that; it's obvious today. But that key proposition, that question asked by Larry Page or Sergey Brin, whoever it was, led to what we see as Google today on the back end: a huge amount of infrastructure to satisfy a 20-millisecond response to any query.

To me the same thing applies today. Maybe 20 is too hard, so I'm just going to arbitrarily pick 120. I have a simple demo, an animation, I was trying to show: every 120 milliseconds you want to have the answer. Today, if you go ask a question, it will take seconds, sometimes longer than that. We are all impatient; we want the answer in quick order. When I ask you a question, you don't say, "Let me think and come back." Okay, for a very deep question you can think and come back, but we can throw more computing resources at it. So what that tells you is that you can throw a lot more computing at getting to the answer, and it's not just hardware; it's going to be algorithmic improvements and other improvements. To me that's the benchmark: 120 milliseconds to any query, for anybody.

So there's a global context and an India context. India provides an ample opportunity: 1.4 billion people, if you can deliver at a cost point like 200 rupees a month, at 120 milliseconds, with any query handled. That is a long road, but if you can meet that objective in 10 to 20 years, it serves a lot of people, and it will also drive a tremendous amount of innovation. That's why, when somebody says population scale, India has a unique thing about the population-scale problem and the cost problem. So hopefully there are enough people within, as Arya said, semiconductor 3.0 and other innovations who can drive toward building India's own sovereign, lowest-cost, shortest answer, in any language, to the question you asked.
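Renu's population-scale target can be put into rough numbers. The sketch below is a back-of-envelope calculation: the 1.4 billion users, ₹200 per month, and 120 ms figures come from the discussion, while the queries-per-user volume is an illustrative assumption.

```python
# Back-of-envelope on Renu's population-scale benchmark.
# USERS, PRICE, and LATENCY come from the talk; the per-user
# query volume is an assumed figure for illustration only.

USERS = 1_400_000_000           # India-scale user base (from the talk)
PRICE_INR_PER_MONTH = 200       # target price point (from the talk)
LATENCY_TARGET_MS = 120         # per-query latency benchmark (from the talk)
QUERIES_PER_USER_PER_DAY = 20   # assumption, not from the talk

monthly_queries = USERS * QUERIES_PER_USER_PER_DAY * 30
revenue_inr = USERS * PRICE_INR_PER_MONTH
cost_budget_per_query_inr = revenue_inr / monthly_queries

# Average load if traffic were spread uniformly over the day.
avg_qps = USERS * QUERIES_PER_USER_PER_DAY / 86_400

print(f"queries per month: {monthly_queries:.2e}")
print(f"cost budget per query: Rs {cost_budget_per_query_inr:.4f}")
print(f"average load: {avg_qps:,.0f} queries/sec, each under {LATENCY_TARGET_MS} ms")
```

Even under these generous assumptions, the budget works out to a fraction of a rupee per query, which is why Renu frames the problem as needing algorithmic improvements as much as hardware.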

Abhishek Singh

Interesting take. One of the things that keeps coming up is the scale these global corporations have been able to reach, now that AI is gaining mainstream adoption in India. And there is a parallel theme on the semiconductor side: the government is putting a lot of focus there, and private players are putting a lot of focus there. We have an esteemed audience, like our guest from Infosys. And the question everybody keeps wondering about is, with AI speeding things up, with the productivity gains and all, what kind of corporations can come out of India? Can we see an NVIDIA coming out, or, I don't know, a Palantir or a Supermicro, or even a new version of Sun Microsystems, just because there is so much emphasis on AI and the semicon side?

I'll let Renu talk, and then maybe you can also give your take on this particular question: what kind of corporations can come out? Your take.

Renu Raman

Can there be an SAP coming out of this AI transition? Why not? To give you some raw numbers, every gigawatt of power will require $25 billion worth of compute, memory, network, and storage. So if India is going to do 10 gigawatts, that's $250 billion of hardware. That supports multiple Supermicros, or it sustains a semiconductor ecosystem at that scale. So certainly the investments going into power, which is a long-lead item, are important, but the next layer provides the economic value to host the hardware systems companies; the HPs, the Dells, the Supermicros can emerge. You can go up each layer of the stack. The next layer up is the application tier; Proximal is in that space. Maybe we could become the SAP of tomorrow.
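Renu's capex rule of thumb, $25 billion of compute, memory, network, and storage per gigawatt of power, scales linearly; a minimal sketch:

```python
# Renu's rule of thumb: ~$25B of compute, memory, network, and
# storage hardware per gigawatt of data-center power (from the talk).
CAPEX_PER_GW_USD_B = 25  # billions of USD per gigawatt

capex_usd_b = {gw: gw * CAPEX_PER_GW_USD_B for gw in (1, 5, 10)}
for gw, usd_b in capex_usd_b.items():
    print(f"{gw:>2} GW -> ~${usd_b}B of hardware")
```

At 10 GW this reproduces the $250 billion figure from the talk.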

That could be a Palantir, at the application tier, and not just Palantir, any other company. So if you can solve the technology, the scale, and the cost economics, it's not restricted to India; it can be global, unlike the China model, which ended up being a very closed walled garden. I think India has the opportunity to make in India and make for the globe, which is much better. But you just have to think bigger and, more importantly, take bolder bets and go for the long haul: not just work at it for 5 or 7 years; these are 10-to-20-year cycles of change.

Abhishek Singh

Very interesting take, and thanks a lot. And I would like to have your opinion on this particular question: do we see NVIDIAs, SAPs, and Oracles coming out?

Audience

Thank you, sir. So, on the semiconductor data side, for example, I have recently been working with more than one large company, and I want to give some specific examples. They are just ingesting data right now; I'm talking about a fab, not design. They have got 7 petabytes of data ingested and they don't know what to do with it. And like I said, a typical fab manufacturing facility is worth at least $10 billion.

And it's got thousands of steps. It takes about 120 days to make a chip. So that's $10 billion, 120 days to produce a wafer, and there are defects, design issues, all sorts of things. So you take the data, just the basic information: run-time, real-time data, defects, soft defects and hard defects, because just because a chip is slow doesn't mean it has failed outright, but slow means no money; that's a failure. So it's collecting all that data, classifying the data, understanding it, and using agents in an edge computing way. You cannot solve this in a server. And then feeding it back to the design infrastructure. So the design time has also shrunk a lot, and the yields are going up. Thirty years ago, when I was at Intel, we were talking about die sizes of maybe one centimeter by one centimeter.

Today, on a 300-millimeter wafer, NVIDIA's latest wafer-level chip is about 20 centimeters by 20 centimeters. That level of yield and reliability is unimaginable without the use of AI. So I can go on and on, but I think if India has to win, I don't think India needs to become a Palantir, and India does not want to become a slave shop. The way I explain it at one level is this: Palantir's gross margin is 95%; Indian companies' gross margin is 30%. Can we build a business at 50% gross margin, where the domain expertise India provides, with the data that is available, takes these technologies we talk about and implements them in real practice? That's where India can win.

It is the execution with the best technology. Thank you. Thank you.

Abhishek Singh

Thanks, everyone. I have one question on the venture side. All these technologies require a lot of investment before they can actually become fruitful, right? I heard somewhere that the government of Karnataka, and I don't mean to demean them, by the way, put up something like 20 crores of funds for funding startups. Meanwhile, for a single engineer that Meta is hiring right now, they are throwing how much, $100 million, at that engineer. Twenty crores for funding hundreds of startups versus $100 million given to one engineer: there is a huge mismatch. Now the question is, for Indian companies, do the venture capitalists or the private equity players have deep enough pockets to continuously fund them, for hundreds of millions or billions of dollars, so that an NVIDIA or an AMD or, I don't know, a Sun Microsystems can emerge?
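To put the mismatch in the question into one currency: ₹20 crore is only a few million dollars. The exchange rate below is an assumption for illustration, not a figure from the talk.

```python
# Scale comparison from the question: a Rs 20 crore startup fund
# versus a reported ~$100M offer to a single engineer.
INR_PER_USD = 85           # assumed exchange rate, for illustration
CRORE_INR = 10_000_000     # 1 crore = 10 million rupees

fund_usd = 20 * CRORE_INR / INR_PER_USD   # the whole startup fund
offer_usd = 100_000_000                   # one engineer's reported offer

print(f"fund: ~${fund_usd / 1e6:.1f}M for hundreds of startups")
print(f"one engineer's offer is ~{offer_usd / fund_usd:.0f}x the entire fund")
```

Under this assumed rate, a single engineer's offer is on the order of forty times the entire fund.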

Renu Raman

Actually, I would like you to take this one. I don't want to answer; I want you to answer the question. Answer your own question.

Abhishek Singh

I'll answer my own question then. Yes, it will require that kind of investment. This is a topic I touched upon a long time back: ISRO has been funded continuously, right? Initially, ISRO's rockets would all land in the ocean. But over a period of time, they gained competence, and they are among the top four in the world right now. I think that kind of continuous and continued support is needed for whatever industry we are picking, whether it is AI or semicon. We need the private players, and we need the government, to support it till the end. And that's when maybe the key players and the winners will emerge.

Renu Raman

I think your question has got two parts. I think the first part is that the government... sorry, there was a public announcement.

There's an interruption here. I think there are two parts. One is that there's a mismatch, a demand-supply gap, in skills at the model companies, if you will.

Abhishek Singh

Hopefully, yeah.

Renu Raman

So why did you do it and what do you think? That’s why I asked the question back to you.

Abhishek Singh

It's good. It's fun to build for India, by the way, and to build from India. Build for India, build from India. That's why we are here, and that's why this conference is here and all these discussions are happening.

Abhishek Singh

But thanks a lot, Renu, for all the wonderful insights. Last call: does anybody from the audience want to ask Renu a question?

Audience

Yes, sir. Thanks, sir. My question is this: you shared that if a 10-gigawatt business comes to India, that means $250 billion worth of equipment will be purchased, or something will happen, in India. So how can we ensure that? Leave 10 gigawatts aside; let's start with 1. If even a 1-gigawatt business is to come, as you said, how will that business actually come to India?

Renu Raman

Today we already see that most of the hardware is bought by the hyperscalers, who have got some capacity, and then Dell and HP are the largest OEMs; as I understand, Supermicro is behind, I guess. Most of the hardware-level systems are manufactured in Taiwan and other places and brought here. And there are emergent players: VVDN, and Sanmina has got a manufacturing plant in Chennai. Who is going to come and do make-in-India? I don't want to steal the thunder, but there are emergent players who see the economic value at that scale and are starting to design. And we have already seen, though I don't know all the details, the phone manufacturing that's happened. So the ecosystem of building chassis, systems, and boards, the design capability, was there, but manufacturing and operations support and all that was not.

So I do expect that to start happening. That's why we started working with CDAC and, to some extent, VVDN. We do see the opportunity: there's at least a $300 to $500 million opportunity. The interesting aspect is that the Indian public market is also valuing these things fairly high; look at NetWeb and others. You can't go and raise money on NASDAQ for these kinds of businesses, but you can certainly do that in India. So it's an interesting point in time, where there's a demand, there's a need, and there need to be enough people willing to invest. And there's also probably a way to scale the business. I don't view going public as an exit.

Rather, I view going public as a way to raise money to scale the business. So there's enough financial muscle getting built at all stages. But the question is, are there enough people funding at the early phases to fund some of these? That, I think, has to come together. I'm on the entrepreneur side, not the venture side; I've played both, but that has to come. My answer to Abhishek's question is that at 10 gigawatts it's going to be multiple hundreds of billions of dollars across all the layers of the stack, and there should be enough investment going in. And if you look at what has happened in China, there's a different way to drive that capitalistic structure.

They have taken a centralized model but enabled a lot of districts and regional players to go invest. Look at the cars: how many car companies are there? I'm not saying you should follow the same model, but there should be enough early-stage investment at various layers of the stack. So, the opportunity, the exit...

Abhishek Singh

Thank you. Thank you, Renu, for all of that, and thanks to everybody who participated. Thanks for coming, guys. We'll close the session here. Thanks a lot. Thank you. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Renu Raman
14 arguments · 174 words per minute · 6044 words · 2073 seconds
Argument 1
Technology shifts occur in predictable cycles (15-30 years) with semiconductors driving innovation in 80s-90s, cloud in last two decades, and now AI creating billion-fold demand increases
EXPLANATION
Renu argues that major technology transformations follow predictable patterns every 15-30 years. He explains that semiconductors powered innovation in the 80s-90s through Moore’s Law, cloud computing dominated the last two decades, and now AI is creating exponentially larger demand increases.
EVIDENCE
References Moore’s Law driving semiconductor innovation, mentions NVIDIA’s silicon innovation and language models converging, cites Jensen’s statement about 1000x demand but argues it’s actually billion-fold
MAJOR DISCUSSION POINT
Technology evolution cycles and their economic impact
Argument 2
AI will impact 95% of work compared to SaaS era’s surface-level productivity improvements, creating much bigger economic impact
EXPLANATION
Renu contends that AI’s impact will be far more comprehensive than previous technology waves. While the SaaS era only scratched the surface of productivity improvements, AI will fundamentally transform 95% of all work activities.
EVIDENCE
Compares SaaS era productivity gains to AI’s broader impact scope, mentions GDP growth from $33 trillion to $100 trillion in last 20 years with potential for 2x-4x growth in next 20 years
MAJOR DISCUSSION POINT
AI’s transformative economic potential
Argument 3
Infrastructure spending has grown from $50 billion to $300 billion, projected to reach $2 trillion in next 5-10 years due to AI demands
EXPLANATION
Renu presents the massive scale of infrastructure investment required for AI computing. He explains that every dollar spent on power translates to $3-5 in capital expenditure for compute, memory, network, and storage infrastructure.
EVIDENCE
Cites specific figures: $50 billion to $300 billion current spending, projected $2 trillion in 5-10 years, power-to-capital ratio of 1:3-5
MAJOR DISCUSSION POINT
Infrastructure investment requirements for AI
Argument 4
Natural language programming through LLMs democratizes computing by enabling everyone to program, bringing computation to population scale
EXPLANATION
Renu argues that large language models have fundamentally changed programming by allowing natural language interaction with computers. This breakthrough means that anyone who can think algorithmically can now program, dramatically expanding the potential user base.
EVIDENCE
States that ‘your mother, your grandmother can also talk to the computer and tell what steps to take’, emphasizes that everyone is logical and can think algorithmically
MAJOR DISCUSSION POINT
Democratization of programming through AI
Argument 5
Most enterprise data (80-90%) remains on-premises, requiring AI solutions that bring compute to data rather than data to cloud
EXPLANATION
Renu emphasizes that the vast majority of enterprise data still resides on-premises rather than in cloud environments. This reality necessitates a fundamental shift in approach – bringing AI computing capabilities to where the data already exists instead of trying to move all data to centralized cloud systems.
EVIDENCE
References Michael Dell’s quote about delivering AI factories to enterprise customers who want to ‘bring the AI to their data not the data to the AI’
MAJOR DISCUSSION POINT
Enterprise data location and AI deployment strategies
AGREED WITH
Michael Dell
DISAGREED WITH
Michael Dell
Argument 6
Data organization and creating intelligent enterprise graphs from emails, documents, and communications is the most critical AI infrastructure challenge
EXPLANATION
Renu identifies the organization of enterprise data as the most complex and important challenge in AI implementation. He argues that creating intelligent graphs from various communication channels and documents is fundamental to enabling effective AI systems in corporate contexts.
EVIDENCE
References Satya Nadella’s approach with Copilot, mentions organizing graphs underneath email, documents, Teams calls, and relationships
MAJOR DISCUSSION POINT
Enterprise data organization for AI
Argument 7
India demands extremely low-cost infant-scale compute at population scale, presenting unique engineering challenges and opportunities
EXPLANATION
Renu argues that India’s massive population of 1.4 billion people creates a unique requirement for computing infrastructure that must be both extremely cost-effective and capable of serving population-scale demands. This presents both significant engineering challenges and unprecedented opportunities for innovation.
EVIDENCE
References India’s 1.4 billion population, mentions the challenge of serving at population scale with cost constraints
MAJOR DISCUSSION POINT
India’s unique computing infrastructure requirements
AGREED WITH
Abhishek Singh
DISAGREED WITH
Audience
Argument 8
10 gigawatt power infrastructure in India could drive $250 billion in hardware systems, creating opportunities for multiple system companies and semiconductor ecosystem
EXPLANATION
Renu calculates that India’s planned power infrastructure expansion will create massive economic opportunities in the hardware and semiconductor sectors. He argues that every gigawatt of power requires $25 billion worth of computing equipment, creating space for multiple large system companies to emerge.
EVIDENCE
Provides specific calculation: 10 gigawatts × $25 billion per gigawatt = $250 billion in hardware opportunities, mentions this could sustain multiple ‘super micros’ and semiconductor ecosystem
MAJOR DISCUSSION POINT
Economic opportunities from India’s power infrastructure expansion
AGREED WITH
Abhishek Singh, Audience
Argument 9
Current transition from training-focused to inference-focused workloads requires rethinking system architecture for distributed, shardable inference computing
EXPLANATION
Renu argues that the AI industry is experiencing a fundamental shift from training-dominated workloads to inference-dominated ones. This transition requires completely different system architectures that can distribute and shard inference workloads efficiently, rather than the large-scale systems optimized for training.
EVIDENCE
References recent news about Grok acquisition by Nvidia, mentions clear separation between training and inference workloads, notes inference will drive more compute
MAJOR DISCUSSION POINT
AI workload evolution and system architecture implications
AGREED WITH
Jensen Huang
Argument 10
Open source models will drive distributed computing 3.0, similar to how Linux democratized computing, enabling country-specific and domain-specific innovations
EXPLANATION
Renu draws parallels between the historical impact of Linux on distributed computing and the potential impact of open source AI models. He argues that just as Linux enabled widespread innovation in the 2000s, open source models will drive a new wave of distributed computing innovation tailored to specific countries and domains.
EVIDENCE
Compares open source models to Linux’s historical impact, mentions the pattern of open vs closed systems in every transition, references hypervisors (KVM vs VMware) as precedent
MAJOR DISCUSSION POINT
Open source models as drivers of distributed computing innovation
Argument 11
Memory hierarchy design with multiple memory types and caching strategies (KV caches, in-memory representations) dominates cost and performance functions
EXPLANATION
Renu argues that the design of memory systems has become the critical factor determining both cost and performance in AI systems. He emphasizes that managing different types of memory and caching strategies, particularly for AI workloads like KV caches for LLMs, is where the most important technical innovation is happening.
EVIDENCE
Mentions transition from single memory type to multiple memory types, discusses KV caches for LLM side, in-memory representations for performance, physical vs logical memory views
MAJOR DISCUSSION POINT
Memory system design as critical performance factor
Argument 12
800 gigabit to terabit Ethernet networks will enable new distributed computing paradigms, similar to how 10 gigabit enabled cloud computing
EXPLANATION
Renu draws historical parallels between network speed improvements and computing paradigm shifts. He argues that just as 10 gigabit Ethernet enabled the cloud computing revolution around 2000-2002, the current transition to 800 gigabit and terabit Ethernet will enable new forms of distributed AI computing.
EVIDENCE
Provides historical context: 10 megabit Ethernet enabled workstation networks, 10 gigabit Ethernet enabled cloud computing with multi-core systems, now 800 gigabit to terabit will enable new paradigms
MAJOR DISCUSSION POINT
Network infrastructure as enabler of computing paradigm shifts
Argument 13
Target of 120 millisecond response time for any query (similar to Google’s 20ms standard) requires massive computing resources and algorithmic improvements
EXPLANATION
Renu sets an ambitious performance benchmark for AI query response times, drawing inspiration from Google’s historical 20-millisecond search response standard. He argues that achieving 120-millisecond responses to any query will require both massive computing infrastructure and significant algorithmic innovations.
EVIDENCE
References Google’s historical decision to target 20-millisecond query responses as key to their success, notes current AI queries take seconds, emphasizes need for both computing resources and algorithmic improvements
MAJOR DISCUSSION POINT
Performance benchmarks for AI query response
Argument 14
Educational AI systems need contextualized intelligence and interactive learning capabilities integrated with research content and archives
EXPLANATION
Renu describes the requirements for AI systems in educational contexts, emphasizing the need for systems that can provide contextual intelligence and enable interactive learning. These systems should integrate seamlessly with research content and academic archives to enhance the learning experience.
EVIDENCE
Mentions partnership with UC San Diego for AI in education, describes Jupyter notebook-like interface with research content and archive papers integration
MAJOR DISCUSSION POINT
AI applications in education
Lalit Bhatt
3 arguments · 123 words per minute · 1070 words · 519 seconds
Argument 1
90% of Gen-AI pilots never make it to production due to undefined quality standards, unpredictable costs, and constantly changing model selection
EXPLANATION
Lalit identifies three critical barriers preventing AI pilots from reaching production deployment. He argues that without standardized quality evaluation methods, predictable cost structures, and stable model selection processes, most AI initiatives fail to scale beyond the pilot phase.
EVIDENCE
Cites specific 90% failure rate statistic, explains three reasons: quality is undefined and varies by use case, costs are unpredictable with 10-50x price variations, model selection is constantly changing as new models emerge
MAJOR DISCUSSION POINT
Barriers to AI production deployment
Argument 2
Agriculture sector needs efficient AI inference for sensor data, imaging, and autonomous systems while maintaining low costs for farmers
EXPLANATION
Lalit explains that agricultural AI applications must process vast amounts of data from multiple sources including soil sensors, satellite imagery, and autonomous machinery. However, the economic constraints of the agricultural sector require these solutions to be extremely cost-effective to be viable for farmers.
EVIDENCE
Describes PharmEx’s comprehensive AI stack including soil moisture sensors, dendrometers, satellite and drone imaging, autonomous electric tractors (70-80 horsepower), mentions ₹45,000 per unit cost for sensing systems
MAJOR DISCUSSION POINT
AI applications in agriculture with cost constraints
Argument 3
Cost reduction of 30-60% achieved through intelligent model routing and selection based on quality-per-dollar optimization
EXPLANATION
Lalit presents evidence that intelligent routing systems can significantly reduce AI operational costs while maintaining or improving quality. His approach focuses on optimizing the quality-per-dollar ratio rather than simply choosing the cheapest or best-performing models.
EVIDENCE
Provides specific case studies: 60% cost reduction for India’s largest travel aggregator’s conversational shopping assistant, 30% cost and latency reduction for leading e-pharmacy customer support with 95% case resolution improvement
MAJOR DISCUSSION POINT
AI cost optimization through intelligent routing
Michael Dell
1 argument · 126 words per minute · 73 words · 34 seconds
Argument 1
Most enterprise data (80-90%) remains on-premises, requiring AI solutions that bring compute to data rather than data to cloud
EXPLANATION
Michael Dell emphasizes that despite the cloud revolution, the vast majority of enterprise data still resides in on-premises systems and continues to be generated locally. This reality requires a fundamental shift in AI deployment strategy, focusing on bringing AI capabilities to where the data already exists rather than attempting to move all data to centralized cloud systems.
EVIDENCE
States they delivered over 3,000 AI factories in the last year, increasingly to enterprise and commercial customers, emphasizes ‘bring the AI to their data not the data to the AI’, notes ‘ton of data that is still on-prem and being generated on-prem’
MAJOR DISCUSSION POINT
Enterprise data location and AI deployment strategies
AGREED WITH
Renu Raman
DISAGREED WITH
Renu Raman
Jensen Huang
1 argument · 149 words per minute · 86 words · 34 seconds
Argument 1
Traditional data processing (SQL, databases) still runs primarily on CPUs and will continue to do so, requiring hybrid CPU-GPU systems
EXPLANATION
Jensen Huang points out that despite advances in GPU computing for AI, traditional data processing workloads including SQL databases and structured data processing continue to run predominantly on CPU systems. This reality necessitates hybrid architectures that effectively combine both CPU and GPU capabilities.
EVIDENCE
Mentions upcoming announcement of ‘very big initiative of accelerated data processing’, specifically notes that Databricks, Snowflake, and Oracle SQL processing are ‘mostly CPUs’, emphasizes that ‘data processing represents the vast majority of the world’s CPUs today’
MAJOR DISCUSSION POINT
Hybrid CPU-GPU architecture requirements
AGREED WITH
Renu Raman
Audience
2 arguments · 136 words per minute · 428 words · 188 seconds
Argument 1
India can win in semiconductor 3.0/4.0 through software and AI rather than chip manufacturing, focusing on domain expertise with 50% gross margins versus traditional 30%
EXPLANATION
The audience member from Infosys argues that India’s competitive advantage in the semiconductor industry will come through software and AI applications rather than chip manufacturing. They propose targeting a middle ground with 50% gross margins, leveraging India’s domain expertise and data availability, rather than competing at traditional 30% margins or trying to match Palantir’s 95% margins.
EVIDENCE
Contrasts Palantir’s 95% gross margins with Indian companies’ typical 30% margins, mentions working with major clients using agentic AI on-premise delivering 25% productivity improvements, notes $10 million daily savings in semiconductor fabs for 7 nanometer technology
MAJOR DISCUSSION POINT
India’s strategic positioning in semiconductor and AI industries
AGREED WITH
Renu Raman, Abhishek Singh
DISAGREED WITH
Renu Raman
Argument 2
Semiconductor manufacturing generates 7 petabytes of data requiring real-time edge AI processing to improve yields and reduce defects in $10 billion facilities
EXPLANATION
The audience member describes the massive data processing requirements in modern semiconductor manufacturing, where facilities worth $10 billion generate petabytes of data that must be processed in real-time to optimize yields and reduce defects. This requires edge computing solutions rather than centralized server processing.
EVIDENCE
Cites specific example of 7 petabytes of ingested data, mentions $10 billion facility value, 120-day chip manufacturing cycle, thousands of manufacturing steps, discusses soft and hard defects classification, notes NVIDIA’s latest wafer-level chips are 20cm x 20cm on 300mm wafers
MAJOR DISCUSSION POINT
AI applications in semiconductor manufacturing
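The scale of that data flow can be sanity-checked with a quick back-of-envelope calculation. The 120-day window comes from the cited manufacturing cycle; treating the 7 petabytes as ingested over one such cycle is an assumption, since the discussion does not specify the actual ingest window:

```python
# Back-of-envelope: sustained ingest rate implied by 7 PB of fab data.
# Assumption: the 7 PB is ingested over one 120-day manufacturing cycle
# (the discussion cites both figures but not the ingest time window).
PETABYTE = 10**15  # bytes

data_bytes = 7 * PETABYTE
window_seconds = 120 * 24 * 3600  # one 120-day chip manufacturing cycle

rate_gb_per_s = data_bytes / window_seconds / 10**9
print(f"Sustained ingest: {rate_gb_per_s:.2f} GB/s")  # ~0.68 GB/s
```

Even under this conservative assumption the fab must sustain roughly 0.7 GB/s of continuous ingest, which is why the argument points at real-time edge processing rather than shipping everything to a central cluster.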
Abhishek Singh
2 arguments, 157 words per minute, 955 words, 364 seconds
Argument 1
India needs sustained long-term investment similar to ISRO’s model, with continuous government and private support over decades
EXPLANATION
Abhishek argues that India’s success in AI and semiconductor industries will require the same type of sustained, long-term investment approach that made ISRO successful. He emphasizes that despite initial failures, continuous support over decades eventually led to ISRO becoming one of the world’s top four space agencies.
EVIDENCE
References ISRO’s journey from early rocket failures landing in the ocean to becoming top-four globally, emphasizes the need for ‘continuous and continued support’ from both government and private players ’till the end’
MAJOR DISCUSSION POINT
Long-term investment strategies for technology development
AGREED WITH
Renu Raman, Audience
Argument 2
Sub-second query processing at population scale (1.4 billion people) for ₹200/month requires breakthrough innovations in cost and latency
EXPLANATION
Abhishek poses a challenging technical and economic question about whether it’s possible to provide sub-second AI query processing to India’s entire population at an extremely low cost point. This would require fundamental breakthroughs in both technical performance and cost optimization.
EVIDENCE
Specifies the challenge: sub-second response times for 1.4 billion people at ₹200 per month cost, contrasts current minute-level response times that are acceptable for some use cases but insufficient for interactive queries
MAJOR DISCUSSION POINT
Technical and economic feasibility of population-scale AI
AGREED WITH
Renu Raman
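The economics behind this challenge can be made concrete with a rough sketch. The user count and monthly price point are from the discussion; the queries-per-user figure is an illustrative assumption:

```python
# Rough per-query budget at population scale.
# From the discussion: 1.4 billion users, Rs 200/month price point.
# Assumed for illustration (not from the talk): 20 queries/user/day.
users = 1_400_000_000
monthly_fee_inr = 200
queries_per_user_per_day = 20  # assumption

budget_per_query_inr = monthly_fee_inr / (queries_per_user_per_day * 30)
avg_load_qps = users * queries_per_user_per_day / 86_400  # evenly spread

print(f"Cost budget per query: Rs {budget_per_query_inr:.3f}")
print(f"Average national load: {avg_load_qps:,.0f} queries/second")
```

A fraction of a rupee per query, served at hundreds of thousands of queries per second, each answered in under a second, is the combined cost-and-latency target the argument describes.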
Sandeep Kumar
2 arguments, 158 words per minute, 769 words, 290 seconds
Argument 1
Financial systems require 99% reliability without hallucinations, with advanced architecture solving data privacy and access control at object level
EXPLANATION
Sandeep emphasizes that AI systems for financial applications must achieve near-perfect reliability and completely eliminate hallucinations, as errors could result in catastrophic financial mistakes. He describes implementing advanced architectures that manage data privacy and access control at the most granular object level.
EVIDENCE
Mentions working with a company that received $200 million funding from top investors including SoftBank, emphasizes that ‘you cannot send a million dollars to someone’s account by mistake’ or ‘show losses instead of revenue’, describes solving hallucination problems to 99% reliability
MAJOR DISCUSSION POINT
AI reliability requirements in financial systems
Argument 2
Venture building approach provides full engineering, product, and marketing support with co-investment model for startup success
EXPLANATION
Sandeep describes a comprehensive venture building model that goes beyond traditional incubation by providing complete support across all aspects of startup development. This includes co-investment to ensure aligned incentives and sustained support throughout the startup’s journey.
EVIDENCE
Distinguishes venture building from incubation, mentions providing ‘full blueprint to be a successful startup’, describes co-investment model ensuring partnership ‘in every journey’, offers support in engineering, product, and marketing domains
MAJOR DISCUSSION POINT
Comprehensive startup support models
Agreements
Agreement Points
Most enterprise data remains on-premises requiring AI solutions to come to the data
Speakers: Renu Raman, Michael Dell
Most enterprise data (80-90%) remains on-premises, requiring AI solutions that bring compute to data rather than data to cloud
Both speakers strongly agree that the vast majority of enterprise data still resides on-premises and continues to be generated locally, necessitating a fundamental shift in AI deployment strategy to bring computing capabilities to where data exists rather than moving data to centralized cloud systems
Hybrid CPU-GPU architectures are necessary for enterprise AI systems
Speakers: Renu Raman, Jensen Huang
Current transition from training-focused to inference-focused workloads requires rethinking system architecture for distributed, shardable inference computing.
Traditional data processing (SQL, databases) still runs primarily on CPUs and will continue to do so, requiring hybrid CPU-GPU systems.
Both speakers recognize that enterprise AI systems cannot rely solely on GPU computing but must incorporate hybrid architectures that effectively combine CPU and GPU capabilities, as traditional data processing workloads continue to run on CPUs while AI workloads benefit from GPU acceleration
India has unique opportunities for population-scale computing innovation
Speakers: Renu Raman, Abhishek Singh
India demands extremely low-cost inference compute at population scale, presenting unique engineering challenges and opportunities.
Sub-second query processing at population scale (1.4 billion people) for ₹200/month requires breakthrough innovations in cost and latency.
Both speakers acknowledge that India’s massive population creates unprecedented opportunities for computing innovation, requiring breakthrough solutions that can serve 1.4 billion people at extremely low cost points while maintaining high performance standards
Long-term sustained investment is crucial for technology development success
Speakers: Renu Raman, Abhishek Singh, Audience
10 gigawatt power infrastructure in India could drive $250 billion in hardware systems, creating opportunities for multiple system companies and a semiconductor ecosystem.
India needs sustained long-term investment similar to ISRO’s model, with continuous government and private support over decades.
India can win in semiconductor 3.0/4.0 through software and AI rather than chip manufacturing, focusing on domain expertise with 50% gross margins versus traditional 30%.
All speakers agree that achieving success in AI and semiconductor industries requires sustained, long-term investment approaches spanning decades, with both government and private sector commitment, similar to successful models like ISRO
Similar Viewpoints
Both speakers emphasize the critical importance of cost optimization in AI systems while maintaining high performance, recognizing that economic constraints in sectors like agriculture require innovative approaches to make AI viable
Speakers: Renu Raman, Lalit Bhatt
Agriculture sector needs efficient AI inference for sensor data, imaging, and autonomous systems while maintaining low costs for farmers.
Target of 120 millisecond response time for any query (similar to Google’s 20ms standard) requires massive computing resources and algorithmic improvements.
Both speakers recognize that managing and processing massive amounts of enterprise data is fundamental to successful AI implementation, whether in general enterprise contexts or specialized manufacturing environments
Speakers: Renu Raman, Audience
Data organization and creating intelligent enterprise graphs from emails, documents, and communications is the most critical AI infrastructure challenge.
Semiconductor manufacturing generates 7 petabytes of data requiring real-time edge AI processing to improve yields and reduce defects in $10 billion facilities.
Both speakers emphasize that production-ready AI systems require extremely high reliability standards and sophisticated quality management approaches, far beyond what is typically achieved in pilot projects
Speakers: Lalit Bhatt, Sandeep Kumar
90% of Gen-AI pilots never make it to production due to undefined quality standards, unpredictable costs, and constantly changing model selection.
Financial systems require 99% reliability without hallucinations, with advanced architecture solving data privacy and access control at object level.
Unexpected Consensus
Open source models as drivers of distributed computing innovation
Speakers: Renu Raman, Abhishek Singh
Open source models will drive distributed computing 3.0, similar to how Linux democratized computing, enabling country-specific and domain-specific innovations.
India needs sustained long-term investment similar to ISRO’s model, with continuous government and private support over decades.
The unexpected consensus emerges around the idea that open source AI models will have a democratizing effect similar to Linux, enabling smaller countries and specialized domains to innovate independently of major tech companies. This represents a shift from centralized AI development to distributed innovation ecosystems
Memory hierarchy as the dominant factor in AI system design
Speakers: Renu Raman, Jensen Huang
Memory hierarchy design with multiple memory types and caching strategies (KV caches, in-memory representations) dominates cost and performance functions.
Traditional data processing (SQL, databases) still runs primarily on CPUs and will continue to do so, requiring hybrid CPU-GPU systems.
There’s unexpected alignment on the idea that memory architecture, rather than pure compute power, has become the critical design factor in AI systems. This challenges the common focus on GPU compute capabilities and highlights the importance of data movement and storage optimization
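The claim that memory, not raw compute, has become the dominant design factor can be illustrated by sizing the KV cache for a single long-context request. The model shape below is an illustrative assumption (a Llama-70B-like transformer with grouped-query attention), not a system discussed in the session:

```python
# Illustrative KV-cache sizing for one long-context inference request.
# All model-shape numbers are assumptions for illustration only.
layers = 80          # transformer layers
kv_heads = 8         # grouped-query attention KV heads
head_dim = 128       # dimension per head
seq_len = 32_768     # tokens of context
bytes_per_elem = 2   # fp16

# Keys and values (factor of 2), per layer, per head, per token.
kv_cache_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem
print(f"KV cache for one request: {kv_cache_bytes / 2**30:.1f} GiB")
```

Ten gibibytes of cache for a single concurrent request is why KV-cache placement across HBM, DRAM, and flash tiers, rather than raw FLOPs, tends to set both the cost and the performance envelope of inference systems.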
Overall Assessment

The speakers demonstrate strong consensus on several key areas: the need for on-premises AI solutions due to data locality, the requirement for hybrid computing architectures, India’s unique position for population-scale innovation, and the necessity of long-term investment strategies. There’s also alignment on the critical importance of cost optimization, data management challenges, and quality/reliability standards for production AI systems.

High level of consensus with significant implications for AI infrastructure development, particularly regarding the shift from cloud-centric to edge/on-premises AI deployment strategies, the recognition of India as a unique testing ground for population-scale computing solutions, and the understanding that successful AI implementation requires sustained, multi-decade investment approaches rather than short-term initiatives.

Differences
Different Viewpoints
Approach to AI deployment – centralized cloud vs distributed edge computing
Speakers: Renu Raman, Michael Dell
Most enterprise data (80-90%) remains on-premises, requiring AI solutions that bring compute to data rather than data to cloud
While both speakers agree on the data location reality, there’s an implicit disagreement on deployment strategy. Renu advocates for distributed, sovereign computing solutions while Michael Dell focuses on enterprise AI factories, suggesting different architectural approaches to the same problem.
India’s competitive positioning strategy in global markets
Speakers: Renu Raman, Audience
India demands extremely low-cost inference compute at population scale, presenting unique engineering challenges and opportunities.
India can win in semiconductor 3.0/4.0 through software and AI rather than chip manufacturing, focusing on domain expertise with 50% gross margins versus traditional 30%.
Renu focuses on India’s unique population-scale computing challenges requiring breakthrough cost innovations, while the Infosys representative argues for a middle-ground approach targeting 50% gross margins through domain expertise rather than competing on pure cost or trying to match premium players.
Unexpected Differences
Investment and funding approach for technology development
Speakers: Renu Raman, Abhishek Singh
Open source models will drive distributed computing 3.0, similar to how Linux democratized computing, enabling country-specific and domain-specific innovations.
India needs sustained long-term investment similar to ISRO’s model, with continuous government and private support over decades.
While both support India’s technology development, there’s an unexpected disagreement on approach – Renu emphasizes open source democratization and market-driven innovation, while Abhishek advocates for sustained government-led investment models. This represents different philosophies on how technological breakthroughs should be achieved.
Overall Assessment

The discussion shows relatively low levels of direct disagreement, with most speakers sharing common goals around AI infrastructure development, India’s technological advancement, and the need for cost-effective solutions. The main disagreements center on strategic approaches rather than fundamental objectives.

Low to moderate disagreement level. The speakers generally align on the opportunities and challenges but differ on implementation strategies, economic models, and technical approaches. This suggests a healthy diversity of perspectives within a shared vision, which could lead to complementary rather than competing solutions.

Partial Agreements
Both agree that hybrid computing architectures are necessary, but they emphasize different aspects – Renu focuses on memory hierarchy optimization while Jensen emphasizes the continued importance of CPU-based data processing alongside GPU acceleration.
Speakers: Renu Raman, Jensen Huang
Memory hierarchy design with multiple memory types and caching strategies (KV caches, in-memory representations) dominates cost and performance functions.
Traditional data processing (SQL, databases) still runs primarily on CPUs and will continue to do so, requiring hybrid CPU-GPU systems.
Both agree on the need for extremely fast AI response times, but differ on the specific targets and economic constraints – Renu sets 120ms as the benchmark while Abhishek focuses on sub-second responses at extremely low cost points for population scale.
Speakers: Renu Raman, Abhishek Singh
Target of 120 millisecond response time for any query (similar to Google’s 20ms standard) requires massive computing resources and algorithmic improvements.
Sub-second query processing at population scale (1.4 billion people) for ₹200/month requires breakthrough innovations in cost and latency.
Both see massive opportunities in India’s semiconductor and AI infrastructure development, but approach it from different angles – Renu focuses on the hardware systems opportunities from power infrastructure while the audience member emphasizes the data processing and manufacturing optimization aspects.
Speakers: Renu Raman, Audience
10 gigawatt power infrastructure in India could drive $250 billion in hardware systems, creating opportunities for multiple system companies and a semiconductor ecosystem.
Semiconductor manufacturing generates 7 petabytes of data requiring real-time edge AI processing to improve yields and reduce defects in $10 billion facilities.
Takeaways
Key takeaways
AI represents a fundamental technology shift occurring every 15-30 years, with current demand increases of billion-fold scale requiring massive infrastructure investment ($2 trillion projected in 5-10 years)
Enterprise AI faces critical adoption barriers, with 90% of pilots failing to reach production due to undefined quality standards, unpredictable costs, and model selection challenges
India has a unique opportunity to lead in AI/semiconductor 3.0 through software and domain expertise rather than chip manufacturing, leveraging population-scale computing demands
The shift from training-focused to inference-focused workloads requires new distributed computing architectures, with open source models driving innovation similar to Linux’s impact
Most enterprise data (80-90%) remains on-premises, necessitating solutions that bring compute to data rather than moving data to cloud
Memory hierarchy design and network infrastructure (800Gb to terabit Ethernet) are critical enablers for next-generation distributed computing systems
Target performance requirements include 120-millisecond query response times at population scale for ₹200/month, requiring breakthrough cost and latency innovations
Resolutions and action items
Proximal Cloud launched their offering in India focusing on enterprise private cloud infrastructure with partnerships in education, agriculture, and health sciences
Partnership established with UC San Diego for AI research in education, health sciences, and industry applications
Collaboration agreements in place with technology partners including PharmEx (agriculture), Divium (model optimization), and Instant System (venture building)
Focus on Government of India initiatives in Education, Health, and Agriculture as primary market segments
Integration of AMD-based systems for CPU-GPU hybrid architecture to support both traditional data processing and AI workloads
Development of domain-specific solutions including agricultural sensor systems, educational AI tools, and financial sector applications
Unresolved issues
Funding gap between Indian startup investment (₹20 crores for hundreds of startups) versus global AI investment scales (hundreds of millions per engineer)
How to achieve the 10 gigawatt power infrastructure buildout in India and ensure the associated $250 billion hardware business benefits domestic companies
Technical challenge of achieving sub-second query processing at population scale (1.4 billion people) while maintaining a ₹200/month cost point
Scaling from pilot projects to production deployment across industry verticals while maintaining quality and cost effectiveness
Long-term sustainability of venture funding for hardware and infrastructure companies requiring 10-20 year development cycles
Integration challenges between multiple memory types and caching strategies for optimal performance and cost balance
Suggested compromises
Hybrid CPU-GPU systems rather than GPU-only infrastructure, to balance traditional data processing needs with AI workloads
Gradual scaling starting with 1 gigawatt of infrastructure before targeting the 10 gigawatt buildout
A 50% gross margin business model for Indian companies (between traditional 30% and Palantir’s 95%) to balance competitiveness with profitability
Public market funding in India rather than NASDAQ for raising capital to scale infrastructure businesses
A partnership model combining international technology expertise with local domain knowledge and cost optimization
Flexible deployment options (SaaS, privately hosted, on-premises) to address varying enterprise security and sovereignty requirements
Thought Provoking Comments
We overestimate what can be done in two years, but we underestimate what can be done in 10 years… I thought it would be science fiction, never happened for hundreds of years. But guess what? You can think, you can put a Neuralink, and probably have Cursor generate code for you today. That I never thought about in 2014.
This comment provides a profound framework for understanding technological progress and sets the philosophical tone for the entire discussion. It challenges linear thinking about innovation and introduces the concept that breakthrough technologies often emerge faster than expected once foundational elements align.
This opening insight established the discussion’s forward-looking perspective and justified the ambitious scope of their AI infrastructure vision. It primed the audience to think beyond current limitations and consider transformative possibilities, setting up the entire presentation’s credibility for discussing seemingly ambitious goals.
Speaker: Renu Raman
People who are serious about software should make their own hardware. The corollary is, people who are serious about hardware should also make their own software.
This challenges the traditional separation between hardware and software development, advocating for vertical integration. It’s particularly insightful given the current AI landscape where companies like NVIDIA are succeeding precisely because they control both layers.
This comment justified Proximal’s approach of building integrated solutions rather than focusing on just one layer. It influenced the subsequent discussion about their partnerships with AMD and their full-stack approach, making their comprehensive strategy seem necessary rather than overly ambitious.
Speaker: Renu Raman
AI is going to impact 95% of work… whereas the SaaS era was really a productivity improvement so it really scratched the surface about productivity
This comment reframes AI not as an incremental improvement but as a fundamental transformation of work itself. It provides economic justification for massive infrastructure investments by positioning AI as qualitatively different from previous technology waves.
This insight shifted the discussion from technical capabilities to economic transformation, providing the business case for the infrastructure investments being discussed. It elevated the conversation from ‘how to build AI systems’ to ‘how to prepare for economic transformation,’ influencing subsequent discussions about scale and investment needs.
Speaker: Renu Raman
90% of Gen-AI pilots never make it to production. Not because the demo was bad or the models were weak or bad. It’s primarily because of three reasons: quality is undefined, costs are unpredictable, and model selection is always a moving target.
This comment cuts through the AI hype to identify the real practical barriers to AI adoption. It’s particularly insightful because it focuses on operational rather than technical challenges, revealing why AI success requires more than just good models.
This observation shifted the discussion from theoretical capabilities to practical implementation challenges. It validated the need for the optimization and management layers that the partners were building, and introduced a more realistic perspective on AI deployment that influenced subsequent discussions about enterprise adoption.
Speaker: Lalit Bhatt (Divium)
Every query response has to be in 20 milliseconds… so to me the same thing applies today, maybe 20 is too hard I’m just going to arbitrarily pick… 120 milliseconds you want to have the answer
This comment draws a powerful parallel between Google’s success and AI infrastructure requirements, suggesting that user experience constraints (response time) should drive infrastructure design rather than technical capabilities driving user experience.
This insight reframed the entire infrastructure discussion around user experience requirements rather than technical specifications. It provided a concrete performance target that influenced how the audience thought about the scale and sophistication of infrastructure needed, making the ambitious infrastructure investments seem not just justified but necessary.
Speaker: Renu Raman
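One way to see why the 120 ms target is demanding is to decompose it into a budget. The split below is purely illustrative; none of the component numbers come from the talk:

```python
# Illustrative decomposition of a 120 ms end-to-end answer budget.
# All component figures are assumptions, not from the discussion.
budget_ms = 120
network_rtt_ms = 30   # assumed mobile-network round trip
retrieval_ms = 20     # assumed index/cache lookup

model_budget_ms = budget_ms - network_rtt_ms - retrieval_ms
tokens_out = 50       # assumed length of a short answer
per_token_ms = model_budget_ms / tokens_out

print(f"Left for the model: {model_budget_ms} ms "
      f"(~{per_token_ms:.1f} ms/token for {tokens_out} output tokens)")
```

Roughly a millisecond per generated token, after the network and retrieval take their share, is far tighter than the multi-second responses common today, which is the point of letting the user-experience target drive infrastructure design.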
India does not want to become a slave shop. So the way I explain that… the Palantir’s gross margin is 95%. Indian company’s gross margin is 30%. Can we build a business at 50% gross margin where the amount of domain expertise India provides with the amount of data is available?
This comment addresses a critical strategic question about India’s positioning in the global AI economy. It challenges the traditional outsourcing model and proposes a middle path that leverages India’s strengths while capturing more value.
This observation elevated the discussion from technical implementation to national economic strategy. It influenced the conversation about what kinds of companies could emerge from India and how they should be positioned, adding a geopolitical and economic development dimension to the technical discussion.
Speaker: Arya Bhattacharjee (Infosys)
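The margin positioning can be restated in simple revenue terms. The gross-margin percentages are from the comment; the cost unit is arbitrary:

```python
# What a gross-margin target means in revenue terms (illustrative).
# gross margin = (revenue - cost of delivery) / revenue
def revenue_needed(cost: float, gross_margin: float) -> float:
    """Revenue required to cover a delivery cost at a target gross margin."""
    return cost / (1 - gross_margin)

cost = 100.0  # arbitrary unit of delivery cost
for label, margin in [("services model (30%)", 0.30),
                      ("proposed middle path (50%)", 0.50),
                      ("Palantir-like (95%)", 0.95)]:
    print(f"{label}: revenue {revenue_needed(cost, margin):.0f} per 100 of cost")
```

Moving from 30% to 50% gross margin means charging about 200 rather than about 143 for the same 100 of delivery cost, i.e. capturing the difference through domain expertise and product IP rather than billed effort.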
If India is going to do 10 gigawatts, that’s $250 billion of hardware. That brings multiple super micros, or that sustains a semiconductor ecosystem at that scale.
This comment provides concrete economic scale that transforms abstract infrastructure discussions into tangible business opportunities. It demonstrates how infrastructure investments can create entire ecosystems of companies and economic value.
This quantification shifted the discussion from whether India could compete in AI infrastructure to how it could build an entire ecosystem around that infrastructure. It influenced subsequent questions about funding, manufacturing, and the potential for creating major technology companies, making the ambitious vision seem economically viable.
Speaker: Renu Raman
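The two figures in the comment imply a simple unit economic that is easy to check. The only assumption is that the $250 billion refers to the hardware filling the 10 GW of capacity:

```python
# Sanity check: capex per watt implied by "10 GW -> $250B of hardware".
# Assumption: the $250B covers the hardware filling the 10 GW of capacity.
power_watts = 10 * 10**9       # 10 gigawatts
hardware_usd = 250 * 10**9     # $250 billion

usd_per_watt = hardware_usd / power_watts
print(f"Implied hardware capex: ${usd_per_watt:.0f} per watt")  # $25/W
```

An implied $25 per watt is broadly in line with figures often cited for GPU-dense AI datacenter hardware, which is why the comment treats the power buildout and the hardware opportunity as two sides of the same number.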
Overall Assessment

These key comments shaped the discussion by progressively building a comprehensive vision that moved from philosophical foundations to practical implementation to economic transformation. Renu Raman’s opening insights about technological progress and hardware-software integration established credibility and ambition. The practical challenges identified by partners like Divium grounded the discussion in real-world implementation issues. The economic scale discussions and strategic positioning comments elevated the conversation to national competitiveness and ecosystem building. Together, these comments created a narrative arc that justified ambitious infrastructure investments not just as technical necessities, but as economic and strategic imperatives for India’s position in the global AI economy. The discussion evolved from a product presentation into a broader conversation about technological sovereignty and economic development strategy.

Follow-up Questions
How to achieve 120 millisecond response time for any query at population scale (1.4 billion people) at a cost point of 200 rupees per month?
This represents a fundamental technical and economic challenge that would require significant algorithmic improvements and computing resources to solve, potentially driving major innovation in India’s AI infrastructure
Speaker: Abhishek Singh and Renu Raman
What is the optimal memory hierarchy design for AI systems – do we need four different types of memory or just two or one?
This is described as having ‘a lot of debate in the technical community’ and represents a critical decision that will impact cost and performance of AI systems
Speaker: Renu Raman
How can India ensure that the projected 10 gigawatt AI infrastructure business actually comes to India and benefits local companies?
This addresses the practical implementation of India’s AI infrastructure ambitions and how to capture the associated $250 billion hardware opportunity locally
Speaker: Audience member
Can Indian companies achieve 50% gross margins (between Palantir’s 95% and typical Indian company’s 30%) by combining domain expertise with advanced AI technologies?
This explores a potential business model for Indian AI companies to compete globally while leveraging India’s strengths in domain knowledge and execution
Speaker: Arya Bhattacharjee (Infosys)
How to solve the 90% failure rate of Gen-AI pilots making it to production, particularly around quality definition, cost predictability, and model selection?
This addresses a critical industry problem that prevents AI adoption at scale and represents a significant market opportunity for solutions
Speaker: Lalit Bhatt (Divium)
What kind of venture capital funding structure and depth is needed in India to support AI and semiconductor companies that require hundreds of millions to billions in investment?
This highlights the funding gap between what’s available in India versus what’s needed to build world-class AI and semiconductor companies, using the example of Karnataka’s 20 crore fund versus Meta’s $100 million engineer hiring
Speaker: Abhishek Singh
How to effectively utilize the 7 petabytes of manufacturing data being ingested by semiconductor fabs to improve yields and reduce defects?
This represents a specific use case where AI can provide significant value in semiconductor manufacturing, with potential savings of $10 million per day in a typical fab
Speaker: Arya Bhattacharjee (Infosys)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.