Secure Finance Risk-Based AI Policy for the Banking Sector

20 Feb 2026 17:00h - 18:00h

Session at a glance

Summary

This discussion focused on the governance of artificial intelligence in India’s financial services sector, emphasizing the need for embedded governance frameworks rather than reactive regulatory approaches. The panel featured key policymakers and industry leaders examining how AI governance should be integrated into financial systems from inception rather than applied as an afterthought.


Ajay Kumar Chaudhary opened by highlighting India’s opportunity to lead in AI development while managing associated risks through embedded governance across the AI lifecycle. He emphasized the concept of “MANO” (humanity) as introduced by India’s Prime Minister, suggesting a shift from “responsible AI” to “human AI” that encompasses moral, ethical, and inclusive considerations. Chaudhary outlined four foundational pillars for embedded governance: proportionality, fairness and non-discrimination, explainability and transparency, and clear accountability.


Economic Advisor Sanjeev Sanyal challenged traditional risk-based regulatory approaches, arguing that AI’s emergent and evolving nature makes it impossible to predict risks accurately in advance. He advocated for a system similar to financial market regulation, emphasizing transparency, explainability, audit systems, compartmentalization, and clear accountability with “skin in the game” for those developing AI systems. Sanyal warned against the “AI of everything” approach, favoring bounded, compartmentalized AI applications that are easier to control and more energy-efficient.


The discussion highlighted India’s unique position in developing AI governance frameworks that balance innovation with prudential oversight. Participants emphasized the importance of building trust through transparent, explainable systems while maintaining the flexibility to adapt as AI technology evolves. The panel concluded that effective AI governance requires interdisciplinary collaboration and continuous monitoring rather than static regulatory frameworks.


Key points

Major Discussion Points:

Embedded AI Governance Framework: The central theme focused on integrating governance into AI systems from inception rather than as an afterthought. Speakers emphasized that AI governance must be built into the entire lifecycle – from design and data acquisition to deployment and monitoring – rather than applied as a compliance overlay after implementation.


Risk-Based vs. Emergent Technology Challenges: A significant debate emerged around traditional risk-based regulatory approaches versus the unpredictable, emergent nature of AI. Sanjeev Sanyal argued that AI’s evolving characteristics make it impossible to predict risks ex-ante, advocating instead for compartmentalized systems with clear accountability, transparency requirements, and “skin in the game” mechanisms similar to financial market regulation.


India’s Strategic AI Positioning: Discussion centered on how India should position itself globally in AI governance, leveraging its successful digital public infrastructure experience (UPI, digital identity) while avoiding the pitfalls of overly restrictive European models, laissez-faire American approaches, or state-controlled Chinese systems. The emphasis was on creating an “India-first” approach that is context-aware but globally coherent.


Trust and Inclusion in Financial AI: Panelists explored how AI can enhance financial inclusion through better credit assessment and fraud detection while ensuring fairness and transparency. The discussion highlighted the need for AI systems to be explainable “glass boxes” rather than opaque “black boxes,” particularly when affecting access to financial services.


Cybersecurity and Infrastructure Resilience: The conversation addressed how generative AI both enhances cybersecurity capabilities and creates new attack vectors. Emphasis was placed on the need for robust, compartmentalized infrastructure and the importance of maintaining human oversight while leveraging AI as a tool rather than a replacement.


Overall Purpose:

The discussion aimed to explore how governance frameworks can be embedded within AI systems used in financial services, ensuring that innovation proceeds responsibly without stifling progress. The goal was to chart a path for India’s AI governance approach that balances innovation with safety, inclusion, and systemic stability.


Overall Tone:

The discussion maintained a thoughtful, forward-looking tone throughout, characterized by cautious optimism about AI’s potential while acknowledging significant challenges. The tone was collaborative and solution-oriented, with panelists building on each other’s insights. There was a notable shift from theoretical frameworks in the opening keynote to more practical, implementation-focused discussions during the panel, culminating in specific suggestions for regulatory sandboxes and industry collaboration. The atmosphere remained professional and constructive, with speakers demonstrating mutual respect for different perspectives on this complex topic.


Speakers

Speakers from the provided list:


Moderator – Role: Discussion moderator for the AI governance panel


Ajay Kumar Chaudhary – Role: Keynote speaker; appears to be a senior policy official discussing AI governance and financial services


Priyanka Jain – Role: Panel moderator and discussion facilitator; mentioned as being from 5Money and having experience with RBI sandbox programs


Sanjeev Sanyal – Role: Economic Advisor to the Prime Minister; described as a macro thinker, historian, and strategic geopolitical analyst


Praveen Kamat – Role: Official from GIFT City IFSC (International Financial Services Centre); expertise in financial regulation and innovation


Vikram Kishore Bhattacharya – Role: Cloud service provider representative; expertise in cybersecurity and cloud infrastructure


Murlidhar Manchala – Role: RBI (Reserve Bank of India) official; expertise in AI frameworks and financial regulation


Audience – Role: Audience member asking questions; identified as Aditya, founder of First Tile (customer data platform company)


Additional speakers:


None identified beyond the provided speaker names list.


Full session report

This comprehensive discussion on artificial intelligence governance in India’s financial services sector brought together senior policymakers, regulators, and industry leaders to examine how governance frameworks can be embedded within AI systems from inception rather than applied as regulatory afterthoughts. The panel explored India’s unique positioning in the global AI landscape and the critical balance between fostering innovation whilst maintaining systemic stability and public trust.


Opening Framework: Embedded Governance as Strategic Imperative

Ajay Kumar Chaudhary opened the discussion by establishing that AI governance must be embedded throughout the entire technology lifecycle rather than appended as a compliance overlay. Drawing upon Prime Minister Modi’s concept of “MANO” (humanity), Chaudhary proposed shifting from “responsible AI” to “human AI” that encompasses moral, ethical, accountable, and inclusive considerations.


Chaudhary emphasized that AI has evolved from an analytical tool to infrastructure that shapes financial outcomes, requiring treatment as systemically relevant infrastructure. He outlined key governance principles including proportionality in risk-based intensity, fairness and non-discrimination, explainability and transparency, and clearly defined accountability. The keynote highlighted India’s unique position in deploying digital public infrastructure at population scale whilst maintaining inclusion and trust, noting that AI’s adaptive characteristics require governance frameworks that evolve alongside the technology.


Challenging Conventional Regulatory Wisdom

Economic Advisor Sanjeev Sanyal fundamentally challenged prevailing approaches to AI regulation, arguing that traditional risk-based governance frameworks are inadequate for emergent technologies. Drawing historical parallels, he noted that technological dominance often goes not to inventors but to those who master and strategically deploy innovations, citing how Europeans dominated the world using the Chinese-invented printing press and gunpowder alongside Indian mathematics.


Sanyal’s critique of risk-based regulation proved particularly provocative, arguing that AI’s emergent and unpredictable nature makes ex-ante risk assessment nearly impossible. He contended that European-style risk categorisation systems would either strangulate innovation through excessive stringency or fail to control risks due to their unpredictable evolution.


Central to Sanyal’s argument was deep skepticism about interconnected “AI of everything” approaches, which he characterized as potentially disastrous. He advocated for deliberately compartmentalized AI systems that solve bounded problems—using examples like chess (bounded) versus career planning (unbounded)—more safely and efficiently. This compartmentalization strategy would function like forest fire breaks, preventing systemic failures from cascading across interconnected systems.


Sanyal proposed specific mechanisms including mandatory AI audits for systems above certain thresholds, predetermined responsibility chains that assign accountability before failures occur, and circuit breaker mechanisms. His approach emphasized ex-post punishment systems rather than attempting to predict and prevent all possible risks in advance.


Regulatory Innovation and Experimentation

Praveen Kamat from GIFT City IFSC provided insights into how new regulatory jurisdictions can serve as laboratories for AI governance innovation. He highlighted GIFT City's advantages as a jurisdiction built from scratch with a clean regulatory slate, enabling greater experimentation without legacy system constraints. Drawing from his SEBI experience (2008-2010), he noted how algorithmic trading evolved from being viewed with suspicion to becoming standard practice.


Kamat acknowledged the fundamental regulatory challenge of balancing innovation with stability, noting that over-regulation repels innovation whilst under-regulation repels serious long-term capital. He described existing interoperable sandbox mechanisms across regulators (RBI, SEBI, IRDAI, and IFSCA) that enable testing of cross-sector AI solutions, though legal frameworks present more significant challenges than technological or financial barriers.


Central Bank Perspective on AI Governance

Murlidhar Manchala from the Reserve Bank of India outlined the RBI’s principles-based approach to AI governance, emphasizing frameworks that build upon existing regulatory architecture. The RBI’s framework focuses on ensuring AI systems remain transparent “glass boxes” rather than opaque “black boxes,” particularly when affecting customer access to financial services.


Manchala highlighted the RBI’s recognition that AI technology is inherently probabilistic and may experience lapses despite robust governance frameworks. The central bank’s approach includes provisions for supervisory relief for entities that implement comprehensive controls, conduct proper root cause analysis, and maintain transparent incident reporting mechanisms. The framework also includes provisions for recognizing entities that demonstrate excellence in AI governance, suggesting a carrot-and-stick approach that incentivizes best practices.


Infrastructure and Cybersecurity Considerations

Vikram Kishore Bhattacharya addressed AI’s dual role in cybersecurity—simultaneously enhancing defensive capabilities whilst providing new tools for malicious actors. He emphasized that whilst generative AI has lowered barriers for threat actors, it hasn’t fundamentally changed the nature of attacks, meaning existing cybersecurity principles remain valid.


Bhattacharya advocated for a paradigm shift from “human-in-the-loop” to “AI-in-the-loop,” positioning AI as a tool that enhances human decision-making rather than replacing human oversight. His perspective emphasized trust-but-verify approaches with cloud service providers, validated through standards like ISO and NIST certifications.


Strategic Positioning and Sovereignty Concerns

The discussion extensively explored India’s strategic positioning in the global AI landscape, particularly regarding data sovereignty and supply chain resilience. Chaudhary highlighted concentration risks in the AI stack, noting that one firm controls over 90% of advanced chips, three dominate cloud capacity, and a handful command foundational models—creating potential vulnerabilities for financial stability and economic sovereignty.


Sanyal particularly emphasized India’s need to develop domestic AI processing capabilities, noting recent budget provisions including substantial tax holidays for data center development as strategic investments in the “oil rigs” of the data economy. He argued that India’s large population provides significant advantages for AI training data, but only if accompanied by domestic processing capabilities and clear data ownership rights.


A significant audience question from Aditya, founder of First Tile, addressed sovereign data assets and how India can leverage its data advantage. The panel emphasized that India must move beyond being merely a data source to controlling data processing and deriving value domestically.


Inclusion and Fairness Imperatives

Chaudhary outlined AI’s potential to deepen financial inclusion through granular, dynamic risk assessment that reduces reliance on collateral-heavy models. He cited specific examples from NPCI’s high-value payment environments where AI has reduced fraud by 25-30 percent. However, speakers acknowledged significant risks that AI could perpetuate existing inequalities if trained on historically skewed datasets.


The panel explored how AI governance frameworks must account for India’s linguistic diversity, demographic heterogeneity, and income variability—factors that heighten model risk if not properly addressed. This requires governance frameworks that are context-aware whilst remaining globally coherent.


Practical Implementation Challenges

The discussion revealed several practical challenges in implementing embedded AI governance, including the need for interdisciplinary capabilities and AI literacy at board and senior management levels. Speakers emphasized that leaders must understand model architecture, validation methodologies, vendor dependencies, and ethical implications.


Copyright and intellectual property frameworks emerged as critical areas requiring reform. Sanyal posed fundamental questions about ownership of AI-generated innovations and emphasized the need for new legal frameworks and potentially new judicial capabilities to handle AI-related disputes.


Global Regulatory Landscape and India’s Approach

The panel examined different global approaches to AI regulation, contrasting innovation-led American models, compliance-heavy European frameworks, and state-controlled Chinese systems. The discussion suggested India should chart a distinctive path that leverages its unique advantages whilst avoiding the pitfalls of other approaches.


Speakers emphasized that India’s approach should be “context-aware” rather than merely adopting international frameworks wholesale, maintaining the balance India has successfully achieved with previous digital infrastructure initiatives.


Unresolved Tensions and Future Directions

The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and emergent-technology governance approaches. The balance between innovation and safety emerged as an ongoing challenge, with the panel suggesting this might be achieved through regulatory sandboxes, supervisory relief for well-governed entities, and clear accountability frameworks.


Questions about the appropriate level of AI system interconnection versus compartmentalization remain unresolved, with implications for both efficiency and safety. Similarly, ongoing debates about where in the AI value chain responsibility should be assigned continue to require attention.


Conclusion

The discussion concluded with recognition that AI governance represents both a technical challenge and a strategic opportunity for India. The panel emphasized that effective AI governance requires moving beyond theoretical frameworks to practical implementation mechanisms, including developing audit systems, accountability frameworks, and continuous monitoring capabilities that can evolve alongside the technology.


The conversation framed AI governance not as a constraint on innovation but as an enabler of sustainable, trustworthy AI deployment that can serve broader goals of financial inclusion, systemic stability, and economic sovereignty. The path forward requires continued collaboration between policymakers, regulators, and industry participants to develop governance frameworks that are both robust and adaptive to the challenges of governing emergent technologies.


Session transcript

Moderator

Thank you. Very much in line with the overall theme of the summit, we are looking at the governance of AI, not as something set aside and viewed through a different lens altogether, but as an embedded layer of the governance we already apply to technologies. In the interest of the time we have with us, I will request the panelists to be seated on the dais, and I will request AK Chaudhary sir to please begin his keynote.

Ajay Kumar Chaudhary

Good afternoon to everyone. Distinguished policymakers, regulators, industry leaders, members of the FinTech community, and esteemed guests. I have been following very closely, over the last four days, what has been happening, and the enthusiasm, the excitement, and the buzz around AI at this summit were amazing. I believe that what is happening is real: possibly multiple small applications are going to come in the coming days which will solve multiple issues and problems, and we will have a real leading role to play as a country. That is the way we look at it. We will also have a great role to play on the data side, particularly when we train the models. Obviously, when we scale up the entire thing, there might be some run-throughs, some risks also. Some of those risks are known, some are unknown, and for the unknown much cannot be done except to take care of embedding the governance part.

That is the theme of today's talk: how we need to embed governance across the entire life cycle of AI, into the design of AI. That is the way we have to look at it. Yesterday I was again listening to our Honorable Prime Minister, and the beautiful way that he summarized the entire theme in one word, that is called Mano, that is, humanity. So possibly in future I am going to use that instead of "responsible AI"; possibly we can talk about "human AI", because it is going to touch upon moral and ethical systems, accountable governance, national sovereignty, and being accessible, inclusive, and valid. All the aspects we are going to touch upon, everything is covered in this one word called Mano.

Now coming back to my proposed address. It is indeed a privilege to participate in this dialogue at a defining moment in India's digital evolution. Over the past decade, India has demonstrated how population-scale digital public infrastructure can drive inclusion, efficiency, and trust. Systems built with interoperability, transparency, and scale at their core have reshaped financial participation for millions. Today we stand at the next inflection point in that journey. A new tech layer is being superimposed upon this digital foundation. AI, artificial intelligence as we know it, is not arriving in isolation. It is integrating with payment systems, credit and risk management platforms, supervisory frameworks, and cybersecurity architecture that already operate at national scale.

This convergence of scale and intelligence marks a structural shift. Unlike earlier waves of digitalization that automated existing processes, AI introduces adaptive systems, systems that learn, recalibrate, and influence outcomes dynamically. In a country as large and diverse as India, such systems do not merely improve efficiency; they shape access, opportunity, and systemic resilience. The question before us is not whether AI will transform finance. It already is. The more fundamental question is whether governance will evolve at the same pace as innovation, and whether it will be designed into systems from inception rather than appended later as a compliance afterthought. In financial services, trust is foundational. AI systems cannot function as opaque black boxes, especially when they influence access to credit or flag financial behavior.

Governance cannot be an overlay applied after innovation has already been scaled. It must be embedded by design. As Peter Drucker observed, quote, management is doing things right; leadership is doing the right things, unquote. In the context of AI in finance, governance is not merely about technical correctness. It is about doing the right things at the right time, in ways that preserve trust, resilience, and inclusion. Now, looking at AI as an infrastructure tool: it has evolved from analytical assistance to shaping financial outcomes. In credit markets, machine learning models analyze transaction histories, behavioral signals, and dynamic cash flows to generate granular borrower assessments. In fraud prevention, AI detects anomalous activities within milliseconds, processing volumes beyond earlier systems. AI-enabled detection can reduce certain categories of fraud losses by up to 25 to 30 percent at this point of time in high-value payment environments, as we are witnessing in NPCI.

Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to emerging threats in real time. The diffusion of AI across the financial value chain enhances efficiency and precision. Yet, when models operate on a systemic scale, even marginal inaccuracy can produce material consequences. In finance, where stability and trust are public goods, the tolerance for systemic error is limited. India's financial system adds its own complexities. Its scale of digital participation, linguistic diversity, demographic heterogeneity, and income variability heighten model risk. Algorithms trained on narrow, urban-centric, or historically skewed data sets may inadvertently misclassify, misprice, or exclude the very segments that digital finance is intended to integrate. It is therefore imperative that we do not view AI as a peripheral tech enhancement.

It must instead be understood as a component of financial infrastructure which is systemically relevant, and should be subject to the same standards of resilience, governance, and accountability that we expect of any critical financial utility. When we talk about embedded governance in AI: historically, regulation in financial services has often responded to innovation after risks materialized. Governance in the AI era must, however, be embedded into systems design. Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life cycle, from conceptualization and data acquisition to model development, deployment, and ongoing monitoring. It rests on several foundational pillars. I will mention four. One is proportionality, that is, governance intensity should be risk-based.

Second is fairness and non-discrimination. Third is explainability and transparency. And fourth is accountability, which must be clearly defined. While institutions may collaborate with tech providers or leverage shared infrastructure, responsibility for outcomes cannot be outsourced. Whatever AI systems shape their operations, board and senior management must understand their logic, limitations, et cetera. Further, and more importantly, in financial AI, algorithmic efficiency should not compromise equitable opportunity. Now, coming specifically to financial infrastructure and the risk-based approach to AI governance, I'll just touch upon this. A risk-based approach to AI governance acknowledges that innovation and prudence are not opposing forces. They are complementary. Financial authorities globally are converging on principles that emphasize robustness, resilience, transparency, and human oversight.

India's regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsibility. The objective is not to slow innovation, but to ensure that systemic risk does not accumulate invisibly. Several risk dimensions deserve particular attention as AI becomes integral to financial systems. They may include multiple issues; I will touch upon only four. One is model integrity. It can no longer be viewed as a one-time validation exercise. Intelligent systems must be evaluated across economic cycles and stress-tested against extreme but plausible scenarios. As data patterns evolve and models recalibrate, continuous oversight becomes inevitable to guard against drift, unintended bias, or reinforcing feedback loops. Second is operational concentration risk, which I will detail subsequently. It is an emerging systemic concern.

Diversification and resilience planning are essential to safeguard continuity. Third, data governance, through data integrity, consent management, purpose limitation, and minimization principles, is foundational. Financial data is not merely transactional. It reflects livelihoods, behavioral choices, and economic participation. And the fourth item is cybersecurity risks, which are amplified in the AI environment. As AI strengthens defense mechanisms, it can also be leveraged by adversaries. Institutions must anticipate adversarial AI and strengthen detection capabilities accordingly. A risk-based framework recognizes that governance cannot be static. Systems that learn and evolve demand oversight that is equally dynamic, as well as measured, proportionate, and forward-looking.

Now, just touching upon supervisory intelligence. As AI permeates financial institutions, supervisory frameworks are also evolving. Supervisors increasingly leverage advanced analytics to monitor systemic patterns, identify anomalies, and strengthen early warning mechanisms. This creates a reciprocal dynamic: institutions embed AI in operations while oversight bodies integrate intelligence into supervision. However, governance cannot be regulator-driven alone. Institutional capability is critical. AI literacy at the board and senior management level is no longer optional. Leaders must understand model architecture, validation methodology, vendor dependencies, and ethical implications. Effective governance requires interdisciplinary capability, bringing together tech, risk, compliance, and legal experts as well as business leaders. Institutions that integrate AI governance into their ERM framework strengthen resilience. As Christine Lagarde has noted, innovation and regulation are not adversaries; they are partners in progress. That partnership must guide the embedding of AI within finance.

Coming to the inclusion part, what our Honorable Prime Minister has mentioned about the last A in MANO, that is, access and inclusion. India's financial transformation has been anchored in inclusion. Over the past decades, tech has lowered barriers, reduced transaction costs, and brought millions into the formal financial ecosystem. AI now offers an opportunity to deepen that trajectory through granular, dynamic risk assessment. It can reduce reliance on collateral-heavy models and static credit history.

Transaction-level data, cash flow analytics, and behavioral indicators can provide more nuanced insight into repayment capacity, particularly for MSMEs who are presently outside the traditional credit framework. India is expected to account for a significant share of global digital transaction growth this decade. If harnessed responsibly, AI can convert this expanding digital footprint into broader formal access to fair financial services and adoption at scale. Yet inclusion cannot be assumed. It must be intentionally designed. Algorithms trained on historically skewed datasets risk perpetuating structural inequalities. Informal-sector income volatility and gender-based data gaps may distort credit outcomes. Without corrective safeguards, technology may reinforce rather than reduce disparities. Inclusive AI thus requires representativeness in training datasets, periodic impact audits, and community-level feedback mechanisms.

It calls for institutional mechanisms that allow individuals to seek clarification and redress where automated decisions affect their financial standing. Now coming to sovereign and resilient AI foundations. AI governance intersects not only with institutional risk, but with strategic resilience. Concentration in advanced chips and foundational AI models raises critical considerations for economic sovereignty, financial stability, and, I can further add, national security. Dependency on limited supply chains can create systemic vulnerability. If we look at the AI stack more granularly, it rests on five interdependent layers. At the base are the specialized semiconductor chips we all know. Above this sits the cloud and data center infrastructure that provides scalable processing capacity.

These systems are fueled by vast data sets drawn from public and proprietary sources. On this foundation operate large foundation models adaptable across domains, and finally, at the top, are the applications that embed AI into financial services and everyday economic life. In this context we should be conscious of the fact that one firm controls more than 90% of advanced chips, three dominate cloud capacity, and a handful command foundation models, threatening financial stability and economic sovereignty. We must therefore diversify supply chains to the extent possible, through domestic innovation and international collaboration, to secure resilient AI foundations. Further, if you look at the pathway for ecosystem scaling, possibly we have to look at consent-based data sharing; shared AI and risk infrastructure; investment in AI literacy and governance at all levels, including board and senior management; and, most importantly, encouraging home-grown tech and AI-capable entities.

It may be appreciated that an India-first approach is not inward looking. It is context aware. It ensures that governance reflects local realities while remaining globally coherent. Now coming to the operationalization of embedded governance. It may involve multiple issues, but I am touching upon five or six. One, lifecycle-based model governance: institutions should embed governance checkpoints from data acquisition to deployment and post-deployment monitoring. Obviously, a clear risk classification framework based on systemic impact; independent review and enhanced oversight; auditability, with documentation in place; a cross-functional governance committee, which will be helpful, no doubt; and a continuous monitoring and feedback loop that enables periodic recalibration by way of external audit.

Consumer-centric safeguards, obviously by way of transparent disclosure, clear appeal processes, and human intervention mechanisms, are critical to maintain public trust. These pathways ensure that governance is not episodic but woven into the operational DNA. Now, just before concluding, I will touch upon the role of India in AI, and trust as a cornerstone of financial AI. Finance rests on confidence that systems are fair, stable, and accountable. Depositors trust institutions to safeguard assets, borrowers trust systems to assess risk fairly, and markets trust transparency and stability. AI has the potential to enhance this trust by improving fraud detection, accelerating compliance, and broadening access and inclusion. But if governance is inadequate, AI can erode confidence rapidly.

Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with public interest. And trust endures when leadership anticipates risk rather than reacts to failure. India stands at a pivotal moment: working across all five layers of the AI stack, and demonstrating the ability to deploy applications at population scale, it is shaping a global agenda for inclusive AI. The convergence of digital infrastructure, regulatory foresight, and entrepreneurial innovation offers a chance to show that scale and safety can coexist, and that governance can catalyze innovation. Coming to the conclusion: artificial intelligence will shape the next chapter of financial services. But technology alone does not determine outcomes; institutional design does. Design choices, governance frameworks, and institutional culture will determine whether AI strengthens financial resilience and inclusion or not.

Embedded governance is not a regulatory burden. It is a strategic imperative. It ensures that innovation is sustainable, trust is preserved, and system stability is protected. If we embed fairness, transparency, accountability, and proportional oversight into the architecture of financial AI from inception, India can chart a distinctive path, one that aligns technological ambition with ethical responsibility. Let us approach this moment not with hesitation but with disciplined foresight. Let us ensure that as our financial systems become more intelligent, our governance becomes more robust, our oversight becomes more anticipatory, and our commitment to inclusion more resolute. In doing so, we will not only harness the power of AI, but also shape it to serve the broader goals of stability, opportunity, and shared prosperity. Thank you.

Moderator

Thank you, sir. That was very insightful and sets the context for the panel discussion to follow. We could also request you, if you would want, you could join us in the audience. That would be great. Over to you, Priyanka, for introduction to the panelists and then taking this discussion forward.

Priyanka Jain

Thank you so much. Our panelists need no introduction, so I'm going to keep this very fast, so that we can make the most of capturing their thoughts. First, I have with me Mr. Sanjeev Sanyal. Sir is the Economic Advisor to the Prime Minister; he is in the Prime Minister's Office and needs no introduction. If I go by what AI has given me as his persona, AI summarized him as a macro thinker, a historian of structural cycles, and a strategic geopolitical lens. Fortunately, today we have the OG himself in the room. And without any further ado, I want to ask him my first question. Historically, countries that have mastered general-purpose technologies, right from the steam engine to early electricity to the Internet, have gained outsized economic advantage.

Is AI that inflection point for India? And if so, does early, well-designed self-governance accelerate trust, or does it deny us competitive momentum?

Sanjeev Sanyal

Yes, it is important that you are engaging with it, but let me point out that it is not always the first movers who benefit, and it is not the case that even those who invent these technologies know where they are headed. Just to give you an example: the European Renaissance, which led ultimately to Western domination of the world for half a millennium, was based on three technologies. One was the printing press, the other was gunpowder, and the third was mathematics. The first two were invented by the Chinese and the third by the Indians, but it is the Europeans who took them, owned them, and dominated the world. So one important thing to recognize in all of this is: do not necessarily try to guess where this is headed.

But of course, we need to engage with it. We need to engage with these technologies and build on them; otherwise, somebody will take your technology and dominate you. So it is very, very important that India participates in this AI revolution. But again, in this context, let me say that does not mean we should spend time trying to work out exactly where this is headed. For example, when the social media revolution was happening 20 years ago, when Facebook and all these things came about, the marketing line at the time was: see, now everybody can talk to everybody, so we will all move to the golden mean, because we will all have similar views, because we can all talk to each other, and so on.

But in fact, the algorithms went out of their way to put us in buckets and echo chambers. So social media ended up doing exactly the opposite of what the technology experts were telling us it would do. Now, why does this apply to AI as well? Here I am going to talk about this risk-based approach that everybody is talking about. Let me tell you that you cannot actually put AI, or any type of AI, into any real risk bucket, because it is an emergent, evolving thing, even more so than social media. Consequently, if you say you are going to be risk-based, it means you have some assessment of where the thing will go, and I am telling you it is almost impossible to make that assessment. Take, for example, the European way of going about it (they are the pioneers of risk-based systems). It is pretty obvious that you don't want AI to take over our nuclear buttons, but other than that, the risk levels of most other things are utterly unknown. Something that looks totally innocuous might go and blow up the whole system, because these things are emerging, they are evolving, they are interconnecting. Therefore I do not think a system that is largely based on perceptions of risk will work, because it is not possible ex ante to work out what is dangerous, or for that matter what is beneficial. Now, what should you do if you can't tell what is going to happen? The European system will either strangulate the sector by being too stringent, or it will open things up because it wants progress, but ultimately the risk-based system will not be able to take control of it. The other model that exists is China's, where the state knows best. But we know from the experience we had with the Wuhan virus that the state can very often lose control of things that are happening, and they can spiral out.

The third model, which is mostly the American model, is to have laissez-faire and let anybody do whatever they want. Now, the dangers of that are obvious. In my view, the way they control it is through tort laws, i.e., if something goes badly wrong, you will end up with a billion-dollar fine or something like that. So in some ways it works better, because it is an ex post rather than ex ante system. It depends on those who are running the system having skin in the game: your company will go down, you will be jailed, you will have a billion-dollar fine, if things go wrong. That is how they are doing it.

It's an ex post punishment. But as you can tell, it is an ex post system, and if something really bad goes wrong, you can only punish the person after the horse has already bolted; you are locking the stable door too late. So all these systems have their downsides, but I'm telling you that whatever system we design to control this has got to be based on being agnostic about how this whole thing evolves going forward. Now, I know I'm taking up time, but give me a minute. There are other systems that we manage where we have no idea where they are going. Take, for example, the stock market.

You and I don't know where the stock market will be in a decade's time. It's a complex system, just like artificial intelligence, but we manage it. How do we do it? We do it by creating a framework that does the following things. First, it institutes audits, and enforces transparency and explainability: if you can't explain your accounts, you can't be in the stock market. Two, it has systems for shutting things down when things go wrong: every stock market will halt trading when things spiral out. Three, it deliberately creates systems of separation: there are roles the same company cannot hold simultaneously, because of conflicts of interest. In the same way, AI will need compartments. I am personally very suspicious of any idea of the internet of everything or the AI of everything; that would be a disaster. I think we need to be willing to allow compartmentalized AI. It will be more efficient anyway from an energy perspective, but it is also safer. And most importantly, you need to create skin in the game, i.e.

ex ante, tell people who will be held responsible when things go wrong. In the case of financial markets, the directors of the company, or the CEO, are the ones hauled up when things go wrong. In the case of AI, we will have situations where, when things go wrong, the person who made the algorithm will blame the data, the data provider will blame the company, the company will blame the user; all kinds of things will happen. We need to decide ex ante who in the system will be hauled up when things go wrong. That will create skin in the game. We cannot wait for something to go wrong and only then decide; we need to decide this ex ante.

So, all of these things exist in the case of financial regulation. I personally think a similar system would work for AI.
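The market-style controls Mr. Sanyal describes (continuous monitoring, shut-down triggers, compartmentalization) can be illustrated with a toy circuit breaker around a model. The thresholds and the wrapped `predict` callable here are hypothetical, chosen only to make the analogy concrete:

```python
class CircuitBreaker:
    """Halts a model when its recent failure rate breaches a threshold,
    much as an exchange halts trading when prices spiral."""

    def __init__(self, predict, max_error_rate=0.2, window=50):
        self.predict = predict          # the wrapped model (any callable)
        self.max_error_rate = max_error_rate
        self.window = window
        self.outcomes = []              # rolling record of recent failures
        self.tripped = False

    def record_outcome(self, failed: bool) -> None:
        # Keep a bounded window of outcomes and trip once enough evidence
        # of misbehavior accumulates.
        self.outcomes.append(failed)
        self.outcomes = self.outcomes[-self.window:]
        rate = sum(self.outcomes) / len(self.outcomes)
        if len(self.outcomes) >= 10 and rate > self.max_error_rate:
            self.tripped = True         # shut the compartment down

    def __call__(self, x):
        if self.tripped:
            # Fail closed: route to a human instead of serving answers
            raise RuntimeError("circuit open: human review required")
        return self.predict(x)

guarded = CircuitBreaker(lambda x: x * 2)
print(guarded(21))                      # 42: normal operation
for _ in range(10):
    guarded.record_outcome(True)        # a burst of bad outcomes
try:
    guarded(21)
except RuntimeError as e:
    print(e)                            # circuit open: human review required
```

Because each model sits in its own wrapper, tripping one breaker isolates one compartment without touching the rest of the system, which is the point of the firewall analogy.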

Priyanka Jain

Rightly put: technology moves fast, but trust takes time to build, and compartmentalization is a great way to de-risk in some form, and to look at it with a focused agenda and attention. With that, we can bring in Mr. Kamath. Mr. Kamath is from the GIFT City IFSC, in a way a compartmentalized global financial hub that India has created, and we are very fortunate to have you here, sir. GIFT City operates at a unique intersection of innovation and global credibility; it competes with the likes of Singapore, Dubai, and London. Can GIFT City become a lab for AI governance? We wanted to know your view, sir, and this is a great segue from Sanjeev sir on how we can look at it differently, in a compartmentalized manner.

Praveen Kamat

See, if you look at GIFT IFSC as a jurisdiction, it was set up in 2015, so it's just 11 years old. We are building it up from scratch. Now, when you build something from scratch, and you have a brand-new regulator, like IFSCA, which was created in 2020, you start with a clean slate. That means you have more legroom and more space to experiment; we don't carry the baggage of legacy systems. If you see the way we have evolved over the last six years, the way regulations have evolved, we have all the verticals across finance: capital markets, banking, insurance, pensions. And we have introduced new verticals: shipbuilding, ship leasing, aircraft leasing, ancillary services, and so on.

You know, in line with all of the global financial centers. Now, with respect to experimentation: when you use the word lab, you imply experimentation. The appetite for experimentation, and the appetite for taking risks, is much higher than for, say, domestic regulators or regulators overseas, because of the absence of retail investors. So yes, GIFT City has an immense ability to come across as a lab for AI governance. However, building a financial center is like a 45-kilometer marathon; it's not an 8-kilometer dream run. It will take its time. We are on a growth trajectory, an upward trajectory, and there is a certain gestation period for every financial center that cannot be skipped. We are in that gestation period. Once we reach critical mass, we are going to see a lot of things happening and coming out of GIFT IFSC.

Priyanka Jain

Thank you. Actually, I will go to Murli sir. The RBI FREE-AI report, the framework on the responsible and ethical enablement of AI, is very forward-looking; it builds on existing regulatory controls and architecture to bring in a principle-based AI ecosystem. So my question to you is: if a company has embedded robust controls, model inventories, bias testing, continuous monitoring, should regulators reward such companies with calibrated supervisory relief? In other words, is there a safe harbor for somebody who has put in risk-based controls but is a first-time defaulter?

Murlidhar Manchala

Yeah. In fact, in the same report it was suggested that entities which put in place all the guardrails, and which, in case of any lapses, do the root-cause analysis and try to address the problem, should get a lenient supervisory approach from the regulator. A first-time lapse should not be seen as a systemic or overarching risk area; that is something we recognize. So, on both fronts: one, we understand that the technology is probabilistic and can have lapses. But in terms of governance, if you put in the guardrails, the processes, the mechanisms across the lifecycle to see that the customer doesn't face the risk, that is the main focus: the customer. It should be transparent to the customer; it should not be a black box, rather it can be a glass box, understandable to the customers. So once all the measures are taken into consideration by the entity, in terms of governance as well as processes, then, because of the nature of the technology, we understand it can presently lead to some aberrations. But as long as it is handled through the right process, with incident-reporting mechanisms, with manual override and so on, supervision should not treat it as a systemic or greater risk; rather, you should allow a first-time lapse. And in terms of rewarding, we also suggested that there would be an award for AI in finance, particularly where specific work is done in terms of…

Priyanka Jain

Thank you. Vikram, I think your vantage point here matters because you are a global infrastructure player. You are seeing regulatory trends across the US, UK, Singapore, and many other markets. You have heard how the panel has been shaping up, right from the policymakers to the international financial center and the RBI. As an infrastructure provider, how are you looking at cybersecurity and its evolution in the age of generative AI?

Vikram Kishore Bhattacharya

Thanks so much, Priyanka. I would just make one correction: we are a cloud service provider, not merely an infrastructure provider. I think one of the things is, for good or for worse, we've seen the benefits of generative AI, but we're also seeing bad actors use generative AI for phishing attacks, for credential attacks, for malicious code. So with the good come the challenges. But one of the more important elements is that while it's serving as an accelerant to existing methods, I don't think it's foundationally changing the nature of the attacks. And, in fact, there was a report that came out in 2025.

It talks about how generative AI has lowered the barriers for a lot of these threat actors. But I go back to what I said: because the nature of attacks has not foundationally changed, the same principles and the same foundations of cybersecurity that held true before gen AI still hold true. So: multi-factor authentication, strong passwords, regular updates, scanning your systems. And I think this is imperative for organizations, especially in financial services, which are always being attacked. India is a country where, beyond the banks, we have a huge citizenry with different levels of financial literacy, so the question is how you use these tools to safeguard the financial system. In that respect, a lot of kudos to the RBI for thinking about it along these principle-based lines, but also to the banks for actually leveraging these technologies. One of the things you always need to do is trust service providers like us, but banks should also verify, and that is done through standards like ISO or NIST, and through independent third-party reports that validate the various controls that are there. And, as I was saying a little earlier, you have to become an active participant in cybersecurity; you can no longer be a passive passenger, because the landscape is changing, and as more and more people digitize, so do the people who are willing and looking to attack any vulnerability.

So GenAI does provide you with the tools, because, again, I'm a believer not just in human in the loop, but in having AI in the loop. How do you use these technologies to have faster responses? How do you automate scanning? How do you automate reporting, so you can make those value judgments at the right time? That requires skilling. It requires awareness, not just about something like AWS or the cloud; the work that banks, regulators, and cloud service providers are doing through awareness programs ensures that the more people understand the technology, the better the framework and the groundwork will be for them to adopt it.

Thank you.

Priyanka Jain

I will also refer to our earlier discussion this afternoon, wherein, rather than thinking only about a human in the loop, humans can think of AI as being in the loop to move forward; I think that was a great paradigm shift we can look at. Sanjeev sir, I am going to come back to you, but I also want to give a backdrop to this question. India has never simply adopted technology: we have created it, we have adapted it, we have scaled it, and we have governed it in our own way. We did it with identity, we did it with payments, and we did it with digital public infrastructure. Governance frameworks around AI are beginning to emerge, and they are diverging globally: the US being innovation-led, the EU compliance-led, China state-led. On which axis is India going to strategically position itself, and how are you looking at it from your lens?

Sanjeev Sanyal

So I think I will continue from what I was saying earlier. Now, we need to be very, very careful that we don't end up with a bureaucratic risk-based system. This is an emergent technology. It will evolve in all different ways, and we'll have to be very, very creative about this. Now, there is a difference between AI and systems that are pure architecture. AI is an emerging thing. It's not just infrastructure in the sense that, say, UPI is infrastructure, or digital identity is infrastructure; those don't in themselves have emergent behaviors. AI has emergent behaviors, i.e., it evolves and interacts with other forms of AI, which is why I said you need to be fundamentally suspicious of anybody who says they have a very clear idea where this whole thing is going.

We don't at all have a clear idea. Nobody on the planet has a clear idea where it's going. So we do need some regulation. We need to be very, very careful about having humans in the loop. As I said right at the beginning, you need to build switch-off buttons into your systems. You need to create what in finance are called Chinese walls, which separate different tracks. As I said earlier, I am not a huge fan of the AI of everything; I think that's dangerous and will lead to bad outcomes. However, AI can be run in compartments rather well, so why don't we use that? In any case it uses less energy, and in any case it is better at solving bounded problems. When you give AI an unbounded problem, it tends to hallucinate, because unfortunately it has learnt another human trait: it doesn't like to tell you "I don't know"; it would rather make up stuff. Consequently, I think it is better that we give it bounded problems, let it solve those bounded problems and get back to us. Going for this AI of everything or internet of everything, where everything is interconnected, sounds very good. But just last July, or the July before that, we saw what happened when one very small piece of code in a Microsoft program, which by the way was static, not even a fluid one, went wrong: it caused havoc in airports, ATMs, all kinds of things around the world. Now imagine the same thing happening in a system with emergent characteristics: by the time you fix one bit of it, the problem has flowed into some other part of the system. So I personally think we need to create firewalls. A forest fire is also an emergent thing, and the way we control it is not by predicting where the fire will start and where it will go; we simply build firebreaks from time to time. We do that in finance all the time: we don't try to work out what the conflict of interest is, we simply ban situations where conflicts of interest will emerge. And the same is true of skin in the game. I think we need to work out ex ante where in the chain the responsibility lies. I personally think it should sit at the level where the algorithm is made public for use: whoever is making it, even if the data is wrong, you cannot blame the data; you are responsible. Somebody else may disagree; whatever the answer, the point is that we need very clear points of punishment when things go wrong. And we need audit systems for explainability. There is nothing very deep about this; after all, every company listed in the stock market gets itself audited several times a year. Why can't we ask major AI companies to be audited?

If you cannot explain why your results are turning out the way they are, too bad: you shut it down. We do that even with relatively small companies; they have to go to a chartered accountant several times a year, and the chartered accountant has to sign things off. Maybe we have a chartered AI audit for anything that goes beyond some threshold. And given how potentially dangerous this is, and how lucrative as well, I don't think we should see that as a problem. Many others understand it is dangerous and say: why don't we have risk-based regulation? But ex ante, you cannot work the risks out. All you will end up with is regulations that become too stringent and kill the sector.

Rather, along the way, you have a system of explainability audits. With that, let me hand it back.

Priyanka Jain

Mr. Kamath, I'm going to come to you. Economists worry about both under-regulation, which creates instability, and over-regulation, which kills dynamism. Where do you see GIFT City? Again, it sits at an intersection of the local and the global, and I want to hear your views on it.

Praveen Kamat

That's the problem facing all regulators worldwide, across the financial sector. Over-regulation repels innovation; under-regulation repels serious long-term capital. So where do you draw the balancing equilibrium point? Let me explain with a simple example. I joined SEBI, the Securities and Exchange Board of India, in 2008 and was posted to the surveillance department. In 2008 itself, the financial crisis was in full flow. In our surveillance systems, which are very, very powerful, we noticed 1,000 orders being entered in a span of a couple of microseconds. We were wondering how this was possible; how can a human enter so many orders? Then we came to know that algorithmic trading terminals had been deployed by certain entities in the stock market.

When we dug deeper, we came to know that it was initially deployed in 2004 by one entity, and then slowly the volumes kept increasing. They didn't reach a critical point immediately, but they were growing. In 2010 the inflection point came, when it reached critical mass, and SEBI came up with guidelines to safeguard retail investors and preserve financial stability. So here is a perfect example where an innovation in the capital market, algorithmic trading, was deployed by entities for a good six years. It was not regulated, it was being used, and the regulator didn't do anything to stop it. But when the regulator issued the guidelines, the necessary safeguards were put in place.
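As an aside, the surveillance pattern described above, flagging order rates no human could produce, reduces to a sliding-window rate check over order-entry timestamps. The thresholds in this sketch are invented purely for illustration:

```python
def flag_algorithmic_bursts(timestamps_us, max_orders=100, window_us=1_000_000):
    """Return index ranges where more than `max_orders` orders arrive
    within `window_us` microseconds, far beyond human entry speed.
    `timestamps_us` must be sorted order-entry times in microseconds."""
    flagged = []
    start = 0
    for end in range(len(timestamps_us)):
        # Slide the window's left edge forward until it spans <= window_us
        while timestamps_us[end] - timestamps_us[start] > window_us:
            start += 1
        if end - start + 1 > max_orders:
            flagged.append((start, end))
    return flagged

# 1,000 orders spaced 2 microseconds apart: clearly non-human
burst = [i * 2 for i in range(1000)]
print(bool(flag_algorithmic_bursts(burst)))    # True

# A human pace: one order per second
human = [i * 1_000_000 for i in range(20)]
print(bool(flag_algorithmic_bursts(human)))    # False
```

The same shape of check, applied to model outputs rather than orders, is one way a sandbox could "cap the risk" by throughput before anyone understands the internal mechanics.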

However, at the same time, no brakes were applied on the rollout of the innovation. Algorithmic trading, even after the guidelines, grew exponentially in the Indian capital market, to where it is today. In the same manner, we hope to facilitate innovation in GIFT IFSC. We have sandboxes in place for startups as well as established entities; they can roll out their AI pilots in the sandbox. The goal is to cap the risk. As sir said, it's very difficult to identify all the risks, but whatever risks can be identified, let's cap them, without going into the internal technical mechanics, and then see how it plays out. Based on the data received from the experimentation, the regulations can be tailored accordingly.

Thank you.

Priyanka Jain

I know we are at time, but because I have such a prestigious panel, I'm going to still extend by another few minutes. Could I come back to you with a quick rapid-fire? Could you tell us one risk that we are underestimating when it comes to AI?

Murlidhar Manchala

No, in general we would not like to talk about risk; that is our approach. Our keynote speaker, Ajay Choudhury, was also at the helm when the department was formed. So the risk is maybe underestimating the risk itself. That is what I can say. It can be addressed only through governance, particularly in the present emergent stage of the technology.

Priyanka Jain

Actually, I like what Sanjeev sir was telling us. It's never going to be risk-free, but we'll have to move forward; we'll have to figure it out, and do it in as compartmentalized a manner as possible. So, any risk that we are overestimating? Anybody from the panel who wants to talk about a risk we are overestimating? Let's give Vikram a chance.

Vikram Kishore Bhattacharya

I mean, I think the fundamental point would be that there is no zero risk; it's how you equip yourself to handle risks. A point that Mr. Chaudhary and Mr. Sanyal also made is: as a regulator, or in a regulated environment, how do you create the tools to be nimble, to adapt as the technology adapts? That is the important element. Right now the tools are there; there is so much we can do that maybe we're not doing as well as we could. So maybe we can focus on the here and now, and equip ourselves to be nimble enough to deal with anything that comes, because anybody who tells you what's coming with a certain amount of certainty, I take that with a pinch of salt.

I think the future is a little unknowable at this point in time, but there is so much that is known, and we should be able to tackle that right now.

Priyanka Jain

I think that's great. Sanyal sir, I'm going to come to you again. One reform that India must prioritize: what is your view on it?

Sanjeev Sanyal

Copyright law. Who is the owner of a particular innovation? At which point do you call it an innovation? And is that innovation owned by the person who put the prompt in? Is it owned by the person on whose data it got trained? Or does it belong to the algorithm that created it? For all of these, I would say we need to begin to think of a judicial system that can deal with these kinds of problems. We already have a clogged judicial system. But do remember that these very different kinds of problems, and I would almost call them philosophical problems, are going to turn up at our doorstep very, very quickly. And we need to be thinking about them.

Priyanka Jain

Thank you. When UPI came in, I think about a decade ago, and we have the benefit of having the NPCI chairman himself in the room, it was more than payment; it was trust in an invisible system. Today AI is becoming that invisible system, sitting quietly in our credit-underwriting decisions, our onboarding flows, our grievance-redressal systems, even regulatory reporting. It was a great discussion about how we embed trust in an AI system that is fast evolving, because at the end of the day we are thinking about the theme of the summit, which is people, planet, and progress, all in the same breath. People: how do we protect them from opaque systems or bias?

Planet: how do we scale sustainably and responsibly? And progress: because it doesn't have to be only fast innovation, it has to be fair innovation. So, a lot of great thoughts came out in the panel discussion today, and I'm extremely grateful to everybody who made time for it. Sanjeev sir, could we have some closing thoughts from your side?

Sanjeev Sanyal

Well, you mentioned trust. Let me say that while it is fair to trust UPI, as I said, it is, relatively speaking, not an emergent system. Deliberately so, in fact: you don't want UPI to be innovating on the interface. It can innovate at the back end however much you want, but you don't want any surprises.

If I send somebody 100 rupees, he cannot get 120 rupees, or 80 rupees, or 100 rupees on average. That can't be the basis of UPI. So in that sense, the UPI-based system is backbone infrastructure; it is deliberately not emergent. But AI systems are emergent. They can give you different answers at different points in time, depending on what they are trained on, what the context is, what inputs you give; and in fact that is the innovation. If you fix it in a box to start with, you won't get the innovation. But on the other hand, if you give it something open-ended, yes, presumably it will improve, but sometimes it may deteriorate, and sometimes it may lie to you. So what I am trying to say is that, in the case of artificial intelligence, we should use it, but we certainly should not trust it. In fact, its future rests on a certain healthy skepticism that we must have about its capabilities. It will do amazing things, but in my view we should be clear that it is probably much, much better at solving bounded problems. It can play chess, for example, very, very well, but I doubt it can plan your career; that is an unbounded problem. If that is how you think about AI, then what you need to do is, as I said, think this through in terms of how you apply it in particular boxes, where there is a clear set of things you are trying to do.

So as I said, bounded problems and even there, verify.

Priyanka Jain

With that, we have audience questions. We have one question from Aditya, the founder of First Tile.

Audience

Thank you. Good evening. That was an incredible set of points. I actually made some really interesting notes about the capital markets parallel that you drew, Sanjeev. I thought that was a really interesting way of looking at AI, and we’ve been to so many summits; I think this is a very, very interesting way that you’ve put it, about risk and ex-ante versus ex-post. I had one question for you, and I had two suggestions or requests for Praveen and Davis. From an AI stack perspective, every summit or every conversation across different countries is looking at all the different components of the stack. And there are two things that come up in most of these conversations, which are around sovereign data assets and the leverage that comes out of them in terms of tools and models and so on.

Where does India stand in all of this, in terms of sovereign data asset utilization and model leverage? And I think different countries are looking at their stack as their stack, to which they will give you access, and so on and so forth. So that is something on which it would be great to get your perspective.

Sanjeev Sanyal

So obviously, India, with its very large population, has stacks of information on all kinds of things, from health to consumer behavior, et cetera. So in some ways, this is a good place, with a huge amount of data for experimentation on human behavior and so on. But of course, if data is the new thing, the new oil, the new… we need to be clear that we own the rights to it if it’s our data. I’m not even getting into the privacy issue; I’m assuming here that all of that has been taken care of, so we are using anonymized data. But even then, we should at least have the rights to that data, and also to some part of the processing of it. There’s no point in saying that we have the data but we neither have the rights to it nor the oil rigs to pump it out nor the refineries to process this new oil. This is the context in which, as you may have seen, in the latest budget we announced an almost quarter-of-a-century tax holiday for putting up data centers in this country. That’s not a trivial thing to do. Why are we doing it? Well, basically because, as I said, data centers are the oil rigs of this new kind of oil.

And then, of course, we need new companies that will process this oil. Those are the new… We have created one, EI-LLM, but frankly, everybody gets very excited about LLMs. The LLM is, in my view, only a very limited and not even the most interesting usage of artificial intelligence. It just happens to be linguistically talented, and consequently we use it for that. But there are many, many more interesting uses of AI. And, as I keep coming back to and stating, we need to create an ecosystem. And for that ecosystem, we all say, oh, you need half a trillion dollars of investment. Actually, no. Much of where you will end up with the use of these refineries, so to speak, will be quite bounded problems in certain spaces.

So there is more than enough space for startups with much more modest budgets to do interesting things in AI. And I’m not just talking about people building use cases on top of other people’s models; I’m talking about literally bottom-up uses of AI. So I think there’s a lot to be done here. It’s an open space. This is basically like discovering the Americas. And yes, Spain did have an initial starting advantage, but the greatest empire in the world was actually built by Britain, which was a late starter. So there are many, many countries in the world whom you do not think of today as particular players in this game who will also turn up here.

And one of them could do much, much better than the players you think are at the cutting edge today. So this is an emergent situation; all kinds of unintended consequences and uses, positive and negative, will come out of all of this. I think the key here is to be nimble, keep your eyes open, including on the regulatory front, and not have set ideas about where this whole thing is headed, because, frankly, we don’t know.

Audience

No, thanks for that. I’m the founder of First Eye, which is a customer data platform. We work with a large number of enterprises on data, all with consent, and so we get a ringside view of the application of all of what you’re saying. And this leads me to my suggestion. As a supplement, we have AIKosh, which is a growing repository of datasets. And for the financial sector as well, we are looking to aggregate, starting with synthetic data, and then maybe take correlated data from the regulated entities with their consent, so that it would come into use. Okay, awesome. That actually goes towards my suggestion for the two of you. Praveen, when you spoke about the sandbox from an IFSCA perspective, I think the ability to extend that beyond just IFSCA to the other regulators as well is something that will be very, very interesting, at least for folks like us, because we work with a number of entities which cut across different regulators. An associated point is that today there are so many regulations that come in, and I think there are two opportunities that exist.

One is that there are different interpretations of the regulations by different entities. And second, as a large data processor, not a data owner but a data processor, we are one of the stakeholders in that whole process, and today we may not have adequate access or a seat at the table from a regulatory interpretation standpoint. And there, I think, there is an opportunity for us to define something like, for example, what a consent-backed API for data consumption is, and to have a regulatory definition of that with participation from a data processor like us. We’d love to see if there are processes that allow somebody like us to engage with the regulators.

Praveen Kamat

We are open to that idea, but you have to remember one thing. IFSCA is a separate jurisdiction, you know; it has its own set of rules, which are different from domestic India’s. So there is an interoperable sandbox mechanism in place between IFSCA, RBI, SEBI, and IRDAI, and a solution that spans the four regulators can be tested within the sandbox. But the issue is not technological. It’s not fiscal or financial. It’s legal. For example, in India, INR transactions are the norm, right? Within IFSCA’s jurisdiction, INR transactions are not permitted; you have 16 foreign currencies that are enabled, and you have to transact in those 16 currencies. So if your solution is not compatible across these areas, just to give you an example, then the sandbox experimentation will not go through.

So there are a lot more nuances like this which affect the rollout of pilots within the interoperable sandbox, just to give you an example. With respect to the movement and processing of data, I will not comment at the moment, because there are certain things in the works at IFSCA. So I leave that to my RBI colleague.

Murlidhar Manchala

So, as my colleague said, we already have an interoperable sandbox across regulators, and it is on tap. Earlier it was theme-based, but now it’s on tap, and any type of product can be tested in the sandbox. But just to clarify on the sandbox: it is only for when the regulated entity feels that an existing product or service would violate one of the regulations. So there are very few entities which come to the sandbox, because in general they are not required to come to it; if they feel they are compliant with the regulations, there is no need to come to the sandbox. But we are also thinking of another sandbox where, beyond monitoring regulatory compliance, we can support innovation in terms of, say, compute, data or tools. So that is also in the thought process.

Priyanka Jain

We have been one of the beneficiaries of the sandbox and the hackathon at 5Money, and the process has been phenomenal in the way the RBI fintech teams engaged, so maybe, Aditya, I can share some notes with you offline. But thank you; this has been a phenomenal panel and a great discussion on embedded governance. When AI is making space in all things financial services, how do we make space for governance in AI? That was the theme of the discussion, and I am very pleased to hear the views of this panel and grateful to everyone for making time. Thank you, everyone.

[Applause]

I am actually not going to say anything more, apart from thank you, and we will have a quick presentation of the mementos from the India AI mission. My colleague Kriti will do that.

[Applause]

Thank you.

Ajay Kumar Chaudhary

Speech speed

136 words per minute

Speech length

2451 words

Speech time

1075 seconds

Lifecycle‑wide embedded governance is essential for AI systems

Explanation

Ajay stresses that AI governance must be built into every phase of the AI lifecycle, from data acquisition to post‑deployment monitoring, making it a strategic imperative rather than a regulatory afterthought.


Evidence

“Governance in the AI era must however be embedded into systems design” [1]. “Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life cycle” [2]. “The life cycle based model governance institutions should embed governance checkpoints from data acquisition to deployment and post deployment monitoring” [6].


Major discussion point

Embedded AI Governance in Financial Services


Topics

Artificial intelligence | The enabling environment for digital development


Governance must be proportional, fair, transparent and accountable, not a post‑hoc overlay

Explanation

He argues that AI oversight should be risk‑based, proportional, and embedded from the start, ensuring fairness and clear accountability rather than being added after systems are deployed.


Evidence

“Governance cannot be an overlay applied after innovation has already been scaled” [16]. “One is proportionality, that is the governance intensity should be risk-based” [18]. “Fairness and non-discrimination” [20]. “And fourth is accountability, which must be clearly defined” [23].


Major discussion point

Embedded AI Governance in Financial Services


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


AI can broaden financial inclusion but must be deliberately designed to avoid discrimination and bias

Explanation

Ajay highlights AI’s potential to expand access to formal finance while warning that without inclusive design, it could reinforce inequities.


Evidence

“Yet, inclusion cannot be assumed” [73]. “If harnessed responsibly, AI can convert this expanding digital footprint into broader formal access to fair financial services and adoption at scale” [77]. “Design choices, governance framework and institutional culture will determine whether AI strengthens financial resilience and inclusion or not” [78].


Major discussion point

Inclusion, Fairness & Bias Mitigation


Topics

Human rights and the ethical dimensions of the information society | Closing all digital divides


Diversify semiconductor, cloud and foundation‑model supply chains for economic sovereignty

Explanation

He warns that concentration in chips and foundation models threatens financial stability and national security, urging domestic innovation and diversified supply chains.


Evidence

“Three dominate cloud capacity and a handful command foundation models threatening financial stability and economic sovereignty” [102]. “Concentration in advanced chips and foundational AI models raise critical consideration for economic sovereignty, financial stability, and I can further add, the national security” [103]. “Dependency on limited supply chains can create systemic vulnerability” [104]. “At the base are specialized semiconductor chips we all know” [106].


Major discussion point

Sovereign Data & AI Infrastructure


Topics

The enabling environment for digital development | Artificial intelligence


Sanjeev Sanyal

Speech speed

156 words per minute

Speech length

3299 words

Speech time

1266 seconds

Ex‑ante accountability and mandatory AI audits are required

Explanation

Sanjeev calls for clear pre‑emptive rules that name who is responsible for AI outcomes and for chartered audits when thresholds are crossed.


Evidence

“ex ante tell people who will be held responsible when things go wrong” [28]. “Maybe we have a chartered AI audit for anything that goes beyond some threshold” [29]. “We need to ex ante decide who in the system will be hauled up when things go wrong” [30].


Major discussion point

Embedded AI Governance in Financial Services


Topics

Artificial intelligence | The enabling environment for digital development


Traditional risk‑based regulation is ineffective for emergent AI

Explanation

He argues that AI’s rapid evolution makes conventional risk‑based frameworks unsuitable because risks cannot be accurately predicted in advance.


Evidence

“But AI systems are emergent” [14]. “AI is an emerging thing” [48]. “you cannot actually put AI or any types of AI into any real risk bucket because this is an emergent evolving thing” [49]. “This is an emergent technology” [51].


Major discussion point

Regulatory Approaches & Risk Management


Topics

Artificial intelligence | The enabling environment for digital development


Ex‑post liability is insufficient; clear ex‑ante liability rules are needed

Explanation

He points out that punishing after a breach is too late and advocates for pre‑defined liability structures.


Evidence

“It’s an ex post punishment” [35]. “So in some ways it works better because it’s ex post rather than ex ante system” [70]. “but as you can tell, that is some ways, is an ex post system and if something really bad goes wrong, you know, you will only find, you can punish the person after the horse has already bolted” [71].


Major discussion point

Regulatory Approaches & Risk Management


Topics

Artificial intelligence | The enabling environment for digital development


AI should be deployed in bounded, compartmentalized environments

Explanation

He recommends limiting AI to well‑defined, bounded problems and using firewalls or “Chinese walls” to prevent systemic spill‑over and hallucinations.


Evidence

“I think it’s better that we give it bounded problems, let it solve those bounded problems and get back to us” [83]. “AI will need to create compartments … I think we need to be willing to allow compartmentalized AI” [112].


Major discussion point

Compartmentalization, Sandbox & Experimental Labs


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Existing copyright law does not cover AI‑generated outputs; new framework needed

Explanation

He notes the legal vacuum around ownership of AI‑created content and calls for urgent reform.


Evidence

“Copyright law” [145]. “Who is the owner of a particular innovation?” [147].


Major discussion point

Intellectual Property & Copyright Reform


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Murlidhar Manchala

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Regulators should grant supervisory relief to firms with robust AI control frameworks

Explanation

He suggests that entities that implement comprehensive guardrails and conduct root‑cause analyses deserve a lenient supervisory approach.


Evidence

“In fact, in the same report, it was suggested that the entities which put in place all the guardrails and then, in case of any lapses, if they are doing the root cause analysis, trying to address the problem, the regulator should have a lenient supervisory approach” [41].


Major discussion point

Embedded AI Governance in Financial Services


Topics

The enabling environment for digital development | Financial mechanisms


Ongoing impact audits, representative data and glass‑box models protect vulnerable groups

Explanation

He emphasizes transparent, non‑black‑box AI with regular impact audits and representative training data to ensure fairness.


Evidence

“it should be transparent to the customer, it should not be a black box, rather it can be a glass box” [46]. “Inclusive AI thus requires representativeness in training datasets, periodic impact audits, and community-level feedback mechanism” [33].


Major discussion point

Inclusion, Fairness & Bias Mitigation


Topics

Human rights and the ethical dimensions of the information society | Data governance


Interoperable sandbox enables regulated AI innovation

Explanation

Manchala highlights that an interoperable sandbox across regulators is already in place and can be used to test any AI‑related product, allowing firms to experiment while staying within compliance boundaries.


Evidence

“So just like my colleague said, we already have an interoperable sandbox across regulators and it is on tap.” [11]. “Any type of product can be tested in the sandbox.” [13]. “We are also thinking of another sandbox where we can provide monitoring and support for innovation, such as compute, data or tools.” [15].


Major discussion point

Compartmentalization, Sandbox & Experimental Labs


Topics

The enabling environment for digital development | Artificial intelligence


AI governance as an instrument for emerging tech risk

Explanation

He argues that AI should be viewed as an instrument that requires dedicated governance mechanisms, especially given the rapid emergence of the technology.


Evidence

“And it should be seen as a, it should be seen as an instrument.” [6]. “That can be addressed only through the governance, particularly in the present emergence of technology.” [7].


Major discussion point

Embedded AI Governance in Financial Services


Topics

Artificial intelligence | The enabling environment for digital development


Proactive risk assessment to avoid under‑estimation

Explanation

Manchala warns that the risk posed by AI is often underestimated and calls for forward‑looking governance, guardrails and transparent processes to protect customers.


Evidence

“So risk is maybe underestimating the risk.” [8]. “…once you put in the guardrails, processes, mechanisms across the lifecycle…the customer doesn’t face the risk…it should be transparent to the customer, not a black box but a glass box…” [9].


Major discussion point

Embedded AI Governance in Financial Services


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Sandbox activation only when a regulatory breach is suspected

Explanation

Manchala stresses that entities should approach the sandbox regime only if they believe a product or service may be violating existing regulations, ensuring the sandbox is used as a remedial rather than a routine testing tool.


Evidence

“But just to clarify on the sandbox, it is only when the regulated entity feels that the existing products or services is violating one of the regulations” [14].


Major discussion point

Compartmentalization, Sandbox & Experimental Labs


Topics

The enabling environment for digital development | Artificial intelligence


Next‑generation sandbox to provide compute, data and tool support for AI innovation

Explanation

He proposes a new sandbox layer that goes beyond compliance checks, offering regulated entities access to shared compute resources, curated datasets and development tools, thereby fostering responsible AI experimentation.


Evidence

“We are also thinking of another sandbox where we can provide monitoring and support for innovation, such as compute, data or tools.” [15].


Major discussion point

Compartmentalization, Sandbox & Experimental Labs


Topics

The enabling environment for digital development | Artificial intelligence


Governance focus over risk‑centred debate

Explanation

Manchala indicates a preference to centre discussions on governance mechanisms rather than quantifying risk, arguing that an over‑emphasis on risk can distract from building robust AI oversight.


Evidence

“general we would not like to talk about risk.” [12].


Major discussion point

Embedded AI Governance in Financial Services


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Sandbox model has evolved from theme‑based to on‑tap, increasing agility

Explanation

Manchala notes that the sandbox framework, originally run through thematic cohorts, has shifted to an on‑tap model, allowing regulators and innovators to access sandbox capabilities more quickly and flexibly.


Evidence

“So earlier it was theme-based, but then now it’s on tap.” [10].


Major discussion point

Compartmentalization, Sandbox & Experimental Labs


Topics

The enabling environment for digital development | Artificial intelligence


Leadership is key to building AI governance institutions

Explanation

He points out that the presence of a strong leader (Ajay Choudhary) was instrumental in establishing the department responsible for AI governance, underscoring the role of top‑level commitment in creating effective oversight structures.


Evidence

“Our keynote speaker, Ajay Choudhury, was also at the helm and the department was formed.” [1].


Major discussion point

Embedded AI Governance in Financial Services


Topics

Artificial intelligence | The enabling environment for digital development


Clear articulation of AI governance approach

Explanation

Manchala stresses the importance of explicitly stating the AI governance strategy, signalling that a concise and transparent communication of the approach is a key element of effective oversight.


Evidence

“So that is our approach.” [4].


Major discussion point

Embedded AI Governance in Financial Services


Topics

Artificial intelligence | The enabling environment for digital development


Incident reporting mechanisms are essential for AI oversight

Explanation

Manchala stresses that a formal incident reporting system must be embedded in AI governance to detect and address lapses promptly, ensuring that customers are protected from adverse outcomes.


Evidence

“you will have the incident reporting mechanisms” [9].


Major discussion point

Embedded AI Governance in Financial Services


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Recognition awards can incentivise robust AI governance

Explanation

He proposes that regulators and industry bodies introduce awards for exemplary AI governance practices, creating positive incentives for firms that embed strong controls and ethical safeguards.


Evidence

“we also suggested that there would be an award for AI in finance” [9].


Major discussion point

Embedded AI Governance in Financial Services


Topics

Artificial intelligence | The enabling environment for digital development


Priyanka Jain

Speech speed

110 words per minute

Speech length

1025 words

Speech time

555 seconds

Regulators should reward firms that embed robust AI controls with calibrated supervisory relief

Explanation

She asks whether companies that demonstrate strong governance, bias testing and continuous monitoring should receive regulatory incentives.


Evidence

“my question to you is If a company has embedded robust controls, model inventories, bias testing, continuous monitoring, should regulators reward and discipline such companies with calibrated supervisory relief?” [39].


Major discussion point

Embedded AI Governance in Financial Services


Topics

The enabling environment for digital development | Financial mechanisms


Balancing trust‑building with rapid innovation through compartmentalized AI

Explanation

She notes that compartmentalization is a practical way to de‑risk AI deployments while maintaining trust.


Evidence

“Rightly put technology moves fast but trust takes time to build and compartmentalization is a great way to de-risk in some form and also look at it with a focused agenda and attention” [130].


Major discussion point

Compartmentalization, Sandbox & Experimental Labs


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Praveen Kamat

Speech speed

184 words per minute

Speech length

874 words

Speech time

283 seconds

Sandbox‑based, risk‑capped experimentation enables innovation while limiting systemic exposure

Explanation

He describes sandbox mechanisms that allow AI pilots across regulators, with a clear risk cap and the ability to tailor regulations based on experimental data.


Evidence

“They can roll out their AI pilots in the sandbox” [58]. “So a solution that spans across the four regulators can be tested within the sandbox” [59]. “The goal is to cap the risk” [64]. “So just like my colleague said, we already have an interoperable sandbox across regulators and it is on tap” [65].


Major discussion point

Regulatory Approaches & Risk Management


Topics

Artificial intelligence | The enabling environment for digital development


GIFT City can serve as a living lab for AI governance with flexible sandbox provisions

Explanation

He proposes using the GIFT City jurisdiction as a testbed for AI governance, leveraging its interoperable sandbox across multiple regulators.


Evidence

“Can GIFT City become a lab for AI governance? We wanted to know your view, sir, and especially, a great segue from Sanjeev sir, on how we can look at it differently in a compartmentalized manner” [116]. “so yes GIFT City has an immense ability to come across as a lab for AI governance” [117]. “So there is an interoperable sandbox mechanism in place between IFSCA, RBI, SEBI, and IRDAI” [119].


Major discussion point

Compartmentalization, Sandbox & Experimental Labs


Topics

Artificial intelligence | The enabling environment for digital development


Inter‑regulatory sandbox collaboration is possible but faces legal and currency constraints

Explanation

He notes that while sandbox collaboration across regulators exists, differences in legal frameworks and currency regimes can limit its reach.


Evidence

“We have sandboxes in place for startups as well as established entities” [60]. “If your solution is not compatible across these areas, then the sandbox experimentation will not go through” [61]. “Based on the data that you receive in the experimentation, accordingly the regulations can be tailored” [66].


Major discussion point

Compartmentalization, Sandbox & Experimental Labs


Topics

The enabling environment for digital development | Artificial intelligence


Vikram Kishore Bhattacharya

Speech speed

175 words per minute

Speech length

694 words

Speech time

236 seconds

Core security controls remain vital; generative AI mainly lowers attackers’ entry barriers

Explanation

He asserts that MFA, patching and standard cybersecurity practices still protect systems, while generative AI simply makes attacks easier to launch.


Evidence

“It talks about how generative AI has lowered the barriers for a lot of these threat actors” [135]. “But one of the more important elements is that while it’s serving as an accelerant to existing methods, I don’t think it’s foundationally changing the nature of the attacks” [136]. “The same principles and the same foundations of cybersecurity that held true before gen AI still hold true” [138]. “It’s because it’s not foundationally changed” [140].


Major discussion point

Cybersecurity in the Age of Generative AI


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


AI should be integrated into security operations for faster detection and automated response

Explanation

He highlights the need to embed AI into SOCs for rapid anomaly detection, automated scanning and real‑time remediation.


Evidence

“So how do you use these technologies to have faster responses?” [142]. “How do you automate scanning?” [144].


Major discussion point

Cybersecurity in the Age of Generative AI


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Audience

Speech speed

187 words per minute

Speech length

555 words

Speech time

177 seconds

India must assert ownership over its data assets and develop domestic AI processing capabilities

Explanation

A participant asks where India’s perspective lies regarding sovereign data utilization and AI model leverage.


Evidence

“Where is India’s perspective in all of this from a sovereign data asset utilization, the model leverage?” [93].


Major discussion point

Sovereign Data & AI Infrastructure


Topics

Data governance | Artificial intelligence


Define consent‑backed data APIs and enable stakeholder participation in sandbox design

Explanation

The audience proposes a regulatory definition for consent‑backed APIs and a pathway for data processors to engage with regulators.


Evidence

“what is a consent-backed API for data consumption, for example, and having a regulatory definition of that with participation from a data processor like us” [124].


Major discussion point

Compartmentalization, Sandbox & Experimental Labs


Topics

Data governance | The enabling environment for digital development


Moderator

Speech speed

16 words per minute

Speech length

145 words

Speech time

531 seconds

AI governance should be an embedded layer, not a separate silo

Explanation

The moderator frames AI governance as an integral part of existing technology oversight, emphasizing its continuous, embedded nature.


Evidence

“We are looking at the overall aspect of governance of AI, but not as something that will be set aside and looked at through a different lens altogether, but something that can be looked in as an embedded layer of governance that we already govern technologies with” [5].


Major discussion point

Embedded AI Governance in Financial Services


Topics

Artificial intelligence | The enabling environment for digital development


Agreements

Agreement points

AI governance must be embedded throughout the system lifecycle rather than applied as an afterthought

Speakers

– Ajay Kumar Chaudhary
– Moderator

Arguments

Governance must be embedded by design throughout AI lifecycle rather than applied as compliance afterthought


AI governance should be embedded as a layer within existing technology governance frameworks rather than treated as separate


Summary

Both speakers emphasize that AI governance should be integrated into systems from the beginning rather than added later as a compliance measure, building on existing governance frameworks


Topics

Artificial intelligence | The enabling environment for digital development


AI systems require transparency and explainability rather than operating as black boxes

Speakers

– Ajay Kumar Chaudhary
– Murlidhar Manchala
– Sanjeev Sanyal

Arguments

Trust in AI requires predictable, explainable, and accountable systems that align with public interest


AI systems should be ‘glass boxes’ rather than ‘black boxes’ with transparent processes for customers


Financial regulation model with audits, transparency, shutdown mechanisms, and separation of functions should apply to AI


Summary

All three speakers agree that AI systems must be transparent and explainable to users, with Chaudhary emphasizing trust-building, Manchala advocating for ‘glass boxes’, and Sanyal proposing audit mechanisms similar to financial regulation


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Regulatory sandboxes and interoperability across regulators are valuable for AI innovation

Speakers

– Praveen Kamat
– Murlidhar Manchala
– Audience

Arguments

Interoperable sandbox mechanisms across regulators enable cross-sector AI solution testing



Regulatory sandboxes should be extended beyond individual regulators to enable cross-regulatory AI experimentation


Summary

There is consensus that regulatory sandboxes should work across multiple regulatory bodies to enable comprehensive testing of AI solutions that span different domains


Topics

Artificial intelligence | The enabling environment for digital development


India has strategic advantages in AI development through its digital infrastructure and data assets

Speakers

– Ajay Kumar Chaudhary
– Sanjeev Sanyal
– Priyanka Jain

Arguments

India can demonstrate that scale and safety can coexist through convergence of digital infrastructure, regulatory foresight, and innovation


India’s large population provides valuable data for AI training, but rights and processing capabilities must be domestically controlled


India has historically created, adapted, scaled and governed technology uniquely, particularly with digital public infrastructure


Summary

All speakers recognize India’s unique position with its large population, digital infrastructure, and track record of technology adaptation as providing strategic advantages for AI development


Topics

Artificial intelligence | Information and communication technologies for development


Cybersecurity risks are amplified in AI environments requiring enhanced defensive measures

Speakers

– Ajay Kumar Chaudhary
– Vikram Kishore Bhattacharya

Arguments

AI amplifies both defensive capabilities and adversarial threats, requiring strengthened detection and response systems


Generative AI lowers barriers for threat actors but doesn’t fundamentally change attack nature, so existing security principles still apply


Summary

Both speakers acknowledge that AI creates new cybersecurity challenges by empowering both defenders and attackers, requiring enhanced but fundamentally similar security approaches


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Similar viewpoints

Both RBI representatives agree that organizations implementing proper AI governance frameworks should receive lenient treatment for initial failures, recognizing the probabilistic nature of AI technology

Speakers

– Ajay Kumar Chaudhary
– Murlidhar Manchala

Arguments

Entities implementing robust controls and governance frameworks should receive supervisory relief for first-time lapses



Topics

Artificial intelligence | The enabling environment for digital development


Both speakers advocate for cautious, adaptive approaches to AI rather than blind trust, emphasizing the need to focus on known capabilities while building flexibility for unknown future developments

Speakers

– Sanjeev Sanyal
– Vikram Kishore Bhattacharya

Arguments

Unlike UPI, which is deliberately non-emergent, AI systems require healthy skepticism rather than blind trust


Focus should be on equipping systems to handle known risks while building nimble adaptation capabilities


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Both speakers emphasize that AI development must prioritize fairness and inclusion, ensuring that technological progress doesn’t perpetuate or create new forms of discrimination

Speakers

– Ajay Kumar Chaudhary
– Priyanka Jain

Arguments

Inclusive AI requires representative training data, impact audits, and community feedback mechanisms


Innovation must balance speed with fairness, protecting people from bias while enabling sustainable progress


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Unexpected consensus

Rejection of comprehensive risk-based governance approaches

Speakers

– Sanjeev Sanyal
– Murlidhar Manchala

Arguments

Risk-based governance approach is flawed because AI is emergent and unpredictable, making ex-ante risk assessment impossible


Regulators should avoid discussing specific risks and focus on governance frameworks to address risk underestimation


Explanation

It’s unexpected that both a policy advisor and an RBI representative would be skeptical of detailed risk-based approaches, with Sanyal explicitly criticizing European-style risk assessment and Manchala preferring not to discuss specific risks, instead focusing on general governance frameworks


Topics

Artificial intelligence | The enabling environment for digital development


Preference for compartmentalized rather than interconnected AI systems

Speakers

– Sanjeev Sanyal
– Vikram Kishore Bhattacharya

Arguments

Compartmentalized AI systems are safer and more efficient than interconnected ‘AI of everything’ approaches


Human-AI collaboration should position AI as a tool in the human loop rather than keeping humans in the AI loop


Explanation

Both speakers, despite coming from different backgrounds (policy and technology), converge on preferring bounded, compartmentalized AI applications rather than comprehensive interconnected systems, which goes against much of the industry rhetoric about AI integration


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Overall assessment

Summary

The speakers demonstrated strong consensus on fundamental principles of AI governance including the need for embedded governance, transparency, regulatory sandboxes, and India’s strategic positioning. There was also unexpected agreement on skepticism toward comprehensive risk-based approaches and preference for compartmentalized AI systems.


Consensus level

High level of consensus on core governance principles with some surprising alignment on more nuanced technical and regulatory approaches. This suggests a mature understanding of AI challenges across different stakeholder groups and could facilitate more coordinated policy development in India’s AI ecosystem.


Differences

Different viewpoints

Risk-based governance approach for AI regulation

Speakers

– Ajay Kumar Chaudhary
– Sanjeev Sanyal

Arguments

Governance intensity should be risk-based, calibrated to the level of risk each AI application poses


Risk-based governance approach is flawed because AI is emergent and unpredictable, making ex-ante risk assessment impossible


Summary

Chaudhary advocates for risk-based governance intensity as one of four foundational pillars, while Sanyal fundamentally rejects risk-based systems for AI, arguing they are impossible to implement effectively because AI’s emergent nature makes ex-ante risk assessment impossible


Topics

Artificial intelligence | The enabling environment for digital development


Trust in AI systems

Speakers

– Ajay Kumar Chaudhary
– Sanjeev Sanyal

Arguments

Trust in AI requires predictable, explainable, and accountable systems that align with public interest


Unlike UPI, which is deliberately non-emergent, AI systems require healthy skepticism rather than blind trust


Summary

Chaudhary emphasizes building trust in AI systems through predictability and explainability, while Sanyal argues that trust in AI is inappropriate and that healthy skepticism should be maintained due to AI’s emergent and unpredictable nature


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


AI as infrastructure classification

Speakers

– Ajay Kumar Chaudhary
– Sanjeev Sanyal

Arguments

AI has evolved from analytical tool to infrastructure that shapes financial outcomes and must be treated as systemically relevant


AI systems have emergent behaviors, unlike static infrastructure such as UPI, and therefore require different regulatory approaches


Summary

Chaudhary treats AI as infrastructure requiring the same standards as critical financial utilities, while Sanyal distinguishes AI from infrastructure like UPI, emphasizing that AI’s emergent behaviors make it fundamentally different from predictable infrastructure


Topics

Artificial intelligence | The digital economy


Human-AI interaction paradigm

Speakers

– Ajay Kumar Chaudhary
– Vikram Kishore Bhattacharya

Arguments

Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life cycle


Human-AI collaboration should position AI as a tool in the human loop rather than keeping humans in the AI loop


Summary

Chaudhary focuses on embedding human oversight and governance throughout AI lifecycle, while Bhattacharya advocates for a paradigm shift where AI is positioned as a tool in the human loop rather than humans being in the AI loop


Topics

Artificial intelligence | Capacity development


Unexpected differences

Fundamental nature of AI governance philosophy

Speakers

– Ajay Kumar Chaudhary
– Sanjeev Sanyal

Arguments

Governance must be embedded by design throughout AI lifecycle rather than applied as compliance afterthought


Risk-based governance approach is flawed because AI is emergent and unpredictable, making ex-ante risk assessment impossible


Explanation

Despite both being senior government officials focused on AI governance, they have fundamentally different philosophical approaches – Chaudhary advocates for comprehensive embedded governance while Sanyal rejects the entire premise of risk-based regulation for emergent technologies


Topics

Artificial intelligence | The enabling environment for digital development


Role of prediction in AI regulation

Speakers

– Ajay Kumar Chaudhary
– Sanjeev Sanyal

Arguments

A risk-based approach to AI governance acknowledges that innovation and prudence are not opposing forces


Do not try and necessarily guess where this is headed. But of course, we need to engage in it


Explanation

Unexpectedly, the policy maker (Chaudhary) advocates for predictive risk assessment while the economic advisor (Sanyal) strongly warns against trying to predict AI’s direction, representing a reversal of typical cautious vs. progressive stances


Topics

Artificial intelligence | The enabling environment for digital development


Overall assessment

Summary

The discussion revealed significant philosophical disagreements about AI governance approaches, particularly between embedded risk-based governance versus financial market-style regulation, and whether AI should be trusted or approached with skepticism


Disagreement level

Moderate to high disagreement on fundamental approaches, but strong consensus on the importance of governance, transparency, and India’s strategic positioning. The disagreements reflect different schools of thought on regulating emergent technologies and could lead to conflicting policy directions if not reconciled


Partial agreements


Both agree that governance must be built into AI systems from the beginning rather than added later, but disagree on the approach – Chaudhary favors embedded risk-based governance while Sanyal prefers financial market-style regulation with audits and accountability

Speakers

– Ajay Kumar Chaudhary
– Sanjeev Sanyal

Arguments

Governance cannot be an overlay applied after innovation has already been scaled. It must be embedded by design.


Financial regulation model with audits, transparency, shutdown mechanisms, and separation of functions should apply to AI


Topics

Artificial intelligence | The enabling environment for digital development


Both emphasize the importance of transparency and explainability in AI systems, but propose different mechanisms – Chaudhary through embedded governance pillars and Sanyal through mandatory audits similar to financial market regulation

Speakers

– Ajay Kumar Chaudhary
– Sanjeev Sanyal

Arguments

Explainability and transparency


Financial regulation model with audits, transparency, shutdown mechanisms, and separation of functions should apply to AI


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Both acknowledge existing interoperable sandbox mechanisms across regulators, but Kamat emphasizes legal constraints limiting effectiveness while Manchala notes limited usage due to compliance confidence

Speakers

– Praveen Kamat
– Murlidhar Manchala

Arguments

Interoperable sandbox mechanisms across regulators enable cross-sector AI solution testing



Topics

Artificial intelligence | The enabling environment for digital development


Takeaways

Key takeaways

AI governance must be embedded by design throughout the entire AI lifecycle rather than applied as a compliance afterthought, requiring integration of accountability, transparency, and risk management from conceptualization to deployment


Traditional risk-based governance approaches are inadequate for AI because of its emergent and unpredictable nature, making ex-ante risk assessment nearly impossible – governance should focus on ex-post accountability with clear responsibility chains


AI should be treated as systemically relevant financial infrastructure subject to the same standards of resilience and accountability as critical financial utilities, but managed through compartmentalized systems rather than interconnected ‘AI of everything’ approaches


India can leverage its large population data assets and clean-slate regulatory environments like Gift City to become a global leader in AI governance while maintaining sovereign control over data rights and processing capabilities


Financial services regulation models with audits, transparency requirements, shutdown mechanisms, and separation of functions provide a viable framework for AI governance that balances innovation with prudential oversight


Trust in AI systems requires predictable, explainable, and accountable operations with transparent processes for customers, positioning AI as a tool in the human loop rather than requiring humans in the AI loop


AI implementation must prioritize inclusion and fairness through representative training data, impact audits, and community feedback mechanisms to avoid perpetuating historical inequalities in financial services access


Resolutions and action items

Establish clear ex-ante responsibility chains defining who will be held accountable when AI systems fail, similar to how company directors are held responsible in financial markets


Implement mandatory audit systems for AI explainability, potentially creating ‘chartered AI audits’ for systems above certain thresholds


Create compartmentalized AI systems with deliberate separation and firewalls rather than pursuing interconnected AI architectures


Develop interoperable sandbox mechanisms across regulators (RBI, SEBI, IRDAI, IFSCA) to enable cross-sector AI solution testing


Provide supervisory relief and lenient regulatory treatment for entities that implement robust AI governance frameworks and conduct proper root cause analysis after lapses


Reform copyright and intellectual property laws to address AI-generated innovations and clarify data ownership rights


Establish domestic data center infrastructure and AI processing capabilities to maintain sovereign control over India’s data assets


Unresolved issues

How to effectively regulate AI systems that have emergent behaviors and unpredictable evolution paths without stifling innovation


Where exactly in the AI value chain responsibility should be assigned – whether with algorithm creators, data providers, or system deployers


How to balance the need for AI transparency and explainability with the competitive advantages that come from proprietary AI systems


What specific mechanisms should be used to ensure AI systems remain fair and inclusive as they scale, particularly for underserved populations


How to manage the concentration risk in AI supply chains, particularly regarding advanced chips and foundational models controlled by few global players


What constitutes appropriate ‘bounded problems’ for AI applications versus dangerous ‘unbounded’ use cases


How to create effective international coordination on AI governance while maintaining national sovereignty over critical AI infrastructure


Suggested compromises

Implement a balanced regulatory approach that encourages experimentation through sandboxes while maintaining institutional responsibility for outcomes


Allow first-time regulatory lapses for entities with robust governance frameworks while maintaining strict accountability for repeated failures


Focus on ex-post punishment systems with clear skin-in-the-game mechanisms rather than trying to predict and prevent all possible AI risks ex-ante


Enable compartmentalized AI development that solves bounded problems effectively while avoiding system-wide interconnection risks


Create regulatory frameworks that reward entities implementing strong AI governance with calibrated supervisory relief


Develop AI governance that is ‘context-aware’ for local realities while remaining globally coherent


Balance innovation promotion with prudential oversight by treating AI governance as a strategic imperative rather than regulatory burden


Thought provoking comments

It’s not always the first movers who benefit from it and it’s not the case that even those who invent these technologies know where they’re headed. The European Renaissance…was based on three technologies. One was the printing press, the other was gunpowder, and the third was mathematics. The first two were invented by the Chinese and the third was invented by the Indians, but it is the Europeans that took it, owned it and dominated the world.

Speaker

Sanjeev Sanyal


Reason

This historical analogy fundamentally reframes the AI race narrative, challenging the assumption that first-mover advantage or invention guarantees dominance. It provides crucial perspective that technological mastery and strategic application matter more than being first to market.


Impact

This comment set the tone for the entire discussion by establishing that India doesn’t need to lead in AI invention but can still achieve dominance through strategic implementation. It shifted the conversation from anxiety about being behind to confidence about India’s potential for AI leadership.


You cannot actually put AI or any types of AI into any real risk bucket because this is an emergent, evolving thing…if you are saying I am going to do risk-based, it means that you have some assessment of where that thing will go, and I am telling you that it is almost impossible to do this

Speaker

Sanjeev Sanyal


Reason

This directly challenges the dominant regulatory paradigm of risk-based governance that most frameworks (including EU’s) rely on. It’s intellectually honest about the fundamental unpredictability of AI systems and questions whether traditional regulatory approaches can work.


Impact

This comment fundamentally shifted the discussion away from conventional risk-based regulation toward alternative governance models. It forced other panelists to defend or reconsider their approaches, leading to deeper exploration of ex-post vs ex-ante regulatory frameworks.


There are other systems that we manage where we have no idea where they are going. Take for example the stock market…we manage it by creating a framework which…has audits and enforces transparency and explainability…systems of shutting things down when things go wrong…deliberately creates systems of separation…and creates skin in the game

Speaker

Sanjeev Sanyal


Reason

This provides a practical alternative to risk-based regulation by drawing parallels to financial market regulation. It offers concrete, implementable solutions (audits, circuit breakers, compartmentalization, accountability) rather than theoretical frameworks.


Impact

This shifted the conversation from abstract governance principles to concrete regulatory mechanisms. It provided a roadmap that other panelists could build upon and influenced subsequent discussions about compartmentalization and accountability.


I am personally very suspicious of any idea of the internet of everything and the AI of everything. That would be a disaster. I think we need to be willing to allow compartmentalized AI

Speaker

Sanjeev Sanyal


Reason

This challenges the prevailing tech industry narrative of interconnected AI systems and proposes deliberate fragmentation as a safety measure. It’s counterintuitive to typical efficiency arguments and prioritizes safety over optimization.


Impact

This introduced the concept of deliberate compartmentalization as a governance strategy, which became a recurring theme. Other panelists, including Priyanka Jain, picked up on this concept and it influenced discussions about how Gift City could serve as a compartmentalized testing ground.


Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life cycle, from conceptualization and data acquisition to model development, deployment and ongoing monitoring…governance cannot be an overlay applied after innovation has already been scaled. It must be embedded by design.

Speaker

Ajay Kumar Chaudhary


Reason

This articulates a fundamental shift from reactive to proactive governance, emphasizing that governance must be built into AI systems from the ground up rather than added later. It provides a comprehensive framework for thinking about AI governance across the entire development lifecycle.


Impact

This keynote comment established the central theme and framework for the entire panel discussion. All subsequent conversations referenced back to this concept of ’embedded governance,’ and panelists used it as a foundation to build their arguments about regulation, implementation, and oversight.


When you build something from scratch and when you have a brand new regulator, like IFSC which was created in 2020, you start with a clean slate. So that means you have more leg room and you have more space to experiment. So we don’t have baggage of the legacy systems.

Speaker

Praveen Kamat


Reason

This highlights a unique advantage that new regulatory jurisdictions have over established ones – the ability to design governance frameworks without being constrained by legacy systems. It suggests that innovation in governance itself can be a competitive advantage.


Impact

This comment introduced the concept of regulatory innovation and positioned Gift City as a potential laboratory for AI governance. It complemented Sanyal’s compartmentalization argument by providing a concrete example of how separated regulatory environments could foster innovation.


Rather than AI thinking about a human in the loop, humans think AI as a loop to move forward

Speaker

Priyanka Jain (referencing earlier discussion)


Reason

This represents a paradigm shift from the traditional ‘human-in-the-loop’ concept to ‘AI-in-the-loop,’ suggesting that humans should remain the primary decision-makers while using AI as a tool rather than ceding control to AI systems.


Impact

This reframing influenced how the panel discussed the relationship between human oversight and AI automation, particularly in Vikram’s response about cybersecurity and the need for humans to become ‘active participants’ rather than ‘passive passengers.’


Copyright law. Who is the owner of a particular innovation? At which point do you call it an innovation? And is that innovation owned by the person who put the prompt in? Is it owned by the person on whose data it got trained? Or it belongs to the algorithm that created that innovation?

Speaker

Sanjeev Sanyal


Reason

This identifies a fundamental legal and philosophical challenge that current legal frameworks are unprepared to handle. It highlights how AI challenges basic concepts of ownership, creativity, and intellectual property that underpin modern economic systems.


Impact

This comment broadened the discussion beyond technical governance to fundamental legal and philosophical questions. It demonstrated that AI governance isn’t just about managing technology but about rethinking basic legal and economic concepts.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional wisdom about AI governance and regulation. Sanyal’s historical perspective and critique of risk-based regulation established an intellectual framework that moved the conversation away from standard regulatory approaches toward more innovative, pragmatic solutions. His emphasis on compartmentalization and ex-post accountability mechanisms provided concrete alternatives that other panelists could build upon.

Chaudhary’s concept of ’embedded governance’ provided the thematic foundation, while Kamat’s insights about regulatory innovation and clean-slate advantages offered practical pathways for implementation.

Together, these comments created a discussion that was both philosophically grounded and practically oriented, moving beyond theoretical frameworks to actionable governance strategies. The conversation evolved from abstract policy discussions to concrete implementation mechanisms, with each key insight building upon previous ones to create a comprehensive approach to AI governance that balances innovation with responsibility.


Follow-up questions

How do we determine ownership and copyright in AI-generated innovations – does it belong to the person who created the prompt, the data owner, or the algorithm creator?

Speaker

Sanjeev Sanyal


Explanation

This is a fundamental legal and philosophical question that will have significant practical implications as AI becomes more prevalent in creating content and innovations


How can we develop a judicial system capable of handling complex AI-related disputes and copyright issues?

Speaker

Sanjeev Sanyal


Explanation

Current judicial systems may not be equipped to handle the unique challenges posed by AI-generated content and related disputes


What specific mechanisms should be established for AI auditing, similar to chartered accountant audits for companies?

Speaker

Sanjeev Sanyal


Explanation

There’s a need to develop standardized auditing processes for AI systems to ensure explainability and accountability


How can we establish clear ex-ante responsibility chains for AI systems to ensure accountability when things go wrong?

Speaker

Sanjeev Sanyal


Explanation

Currently, when AI systems fail, responsibility can be diffused among algorithm creators, data providers, and users – clear accountability frameworks are needed


How can Gift City develop and implement AI governance frameworks while balancing innovation with regulatory compliance?

Speaker

Praveen Kamat


Explanation

Gift City has potential to serve as a testing ground for AI governance but needs to navigate the balance between experimentation and regulation


What are the specific technical requirements and processes for creating consent-backed APIs for data consumption in regulated environments?

Speaker

Audience member (Aditya)


Explanation

Data processors need clearer regulatory definitions and frameworks for handling consent-based data sharing across different regulatory jurisdictions


How can cross-regulatory sandbox mechanisms be improved to better accommodate solutions that span multiple regulators?

Speaker

Audience member (Aditya)


Explanation

Current interoperable sandboxes face legal and jurisdictional challenges that limit their effectiveness for comprehensive AI solutions


How can India leverage its sovereign data assets to build competitive advantages in AI while maintaining data rights and processing capabilities?

Speaker

Audience member (Aditya)


Explanation

India needs to develop strategies to monetize its large data resources while maintaining sovereignty and building domestic AI capabilities


What specific governance frameworks are needed for AI systems that operate across different economic cycles and stress scenarios?

Speaker

Ajay Kumar Chaudhary


Explanation

AI systems in finance need to be tested and validated across various economic conditions, requiring new governance approaches


How can we develop effective mechanisms for continuous monitoring and recalibration of AI systems to prevent model drift and bias?

Speaker

Ajay Kumar Chaudhary


Explanation

AI systems evolve over time and need ongoing oversight to maintain their effectiveness and fairness


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.