Secure Finance Risk-Based AI Policy for the Banking Sector
20 Feb 2026 17:00h - 18:00h
Summary
The summit opened with a focus on embedding AI governance within existing technology oversight rather than treating it as a separate domain [2]. Ajay Kumar Chaudhary argued that AI must be governed throughout its life-cycle, from design to deployment, and that this governance should be built into the system rather than added later [7-8][24-28]. He outlined four pillars (proportionality, fairness, explainability and accountability) and advocated a risk-based approach that addresses model integrity, concentration risk, data stewardship and cybersecurity [50-54][66-78]. Chaudhary also stressed inclusion, data sovereignty and supply-chain resilience, proposing concrete checkpoints across the AI pipeline and positioning trust as the strategic outcome of embedded governance [85-92][94-106][112-115][118-124][129-131].
Economic advisor Sanjeev Sanyal cautioned that AI, like past general-purpose technologies, does not guarantee first-mover advantage and that risk-based regulation may be either too restrictive or too lax, urging instead ex-ante accountability and compartmentalized “firewalls” for AI systems [149-162][170-182]. He warned against treating AI as a monolithic “internet of everything,” recommending bounded problem scopes, auditability and clear liability for algorithm creators, and later highlighted emerging legal questions around data ownership and copyright in AI-generated outputs [238-250][317-324]. Praveen Kamat described the GIFT City IFSC as a “clean-slate” jurisdiction where a sandbox environment can experiment with AI governance, noting that such pilots must respect cross-regulatory legal constraints while allowing risk-capped innovation [192-201][263-291]. Murlidhar Manchala added that regulators should grant supervisory relief to firms that implement robust guardrails, turning AI systems into “glass boxes” with transparent incident reporting rather than opaque black boxes [204-206][296-301].
Vikram Kishore Bhattacharya emphasized that generative AI lowers attack barriers but does not fundamentally alter cybersecurity fundamentals, urging organizations to adopt multi-factor authentication and standards like ISO/NIST, and to use AI for faster threat detection and automated reporting [215-224][226-233]. The panel collectively agreed that over-regulation can stifle innovation while under-regulation leaves systemic risk unchecked, proposing coordinated sandbox frameworks across RBI, SEBI, IRDAI and IFSC to balance experimentation with oversight [263-291][392-397]. Throughout, participants highlighted the need for continuous monitoring, explainability audits and “skin-in-the-game” accountability to ensure AI enhances financial inclusion without reinforcing bias or concentration [85-92][238-250][311-313].
The discussion concluded that AI’s emergent nature demands a skeptical yet proactive governance model that focuses on bounded applications, transparent oversight and resilient infrastructure to preserve trust in the financial system [326-344]. Overall, the summit underscored that embedding governance into AI from inception is essential for India to harness AI’s benefits while safeguarding stability, inclusion and sovereignty [129-131][317-324].
Keypoints
Major discussion points
– Embedding AI governance throughout the AI life-cycle – The keynote stresses that AI must not be an afterthought compliance layer but built in from design through deployment and monitoring. Ajay Kumar Chaudhary defines “embedded governance” as integrating accountability, transparency and risk-management into every stage of the AI life-cycle and lists its four pillars: proportionality, fairness, explainability and accountability [45-48][50-54].
– India’s unique digital foundation and the need for sovereign AI infrastructure – India’s population-scale digital public infrastructure (UPI, digital identity, etc.) provides a platform for AI to become a core financial utility. The speaker warns that AI is now a systemic component that must be governed like any critical utility and highlights the five-layer AI stack (chips, cloud, data, foundation models, applications) and the strategic risk of dependence on foreign chip and model suppliers [14-21][44-45][99-106].
– Different regulatory philosophies and the limits of risk-based approaches – Sanjeev Sanyal contrasts the European risk-based model, the Chinese state-led model and the US ex-post tort-law model, arguing that AI’s emergent nature makes precise risk-bucketing impossible and that any framework must be “agnostic” and focus on ex-ante accountability, compartmentalisation and “skin-in-the-game” [149-166][170-176].
– Experimentation zones and sandboxing (GIFT City) as a way to balance innovation and oversight – Representatives from GIFT City explain that, as a newly created IFSC, it can act as a “lab” for AI governance because it starts with a clean-slate regulator, can run sandboxes for pilots, and must respect a gestation period before scaling [192-200][263-286].
– Cyber-security challenges in the age of generative AI – The cloud-service perspective stresses that generative AI lowers attack barriers but does not fundamentally change security fundamentals; organisations must adopt multi-factor authentication, standards (ISO/NIST), active threat-hunting and AI-in-the-loop automation to stay resilient [215-224][225-232].
Overall purpose / goal of the discussion
The panel was convened to explore how India can embed robust, risk-based governance into AI systems that are becoming integral to the nation’s financial infrastructure, while leveraging its digital public-goods foundation to drive inclusion, economic sovereignty, and sustainable innovation. Speakers repeatedly linked governance to trust, resilience, and the broader summit theme of “people, planet and progress.”
Overall tone and its evolution
– The session opens with an optimistic, forward-looking tone, celebrating India’s digital achievements and the transformative potential of AI [14-21].
– As the conversation moves to governance, the tone becomes cautiously analytical, highlighting unknown risks, the need for embedded safeguards, and the shortcomings of existing risk-based models [45-48][149-166].
– When discussing labs, sandboxes, and cybersecurity, the tone shifts to pragmatic collaboration, offering concrete mechanisms and emphasizing partnership between regulators, industry, and technology providers [192-200][215-232].
– The closing remarks return to a hopeful, constructive tone, reaffirming confidence that with disciplined foresight India can align AI innovation with ethical responsibility and trust [126-132][317-324].
Overall, the discussion moves from enthusiasm about AI’s promise, through a sober assessment of governance challenges, to a collaborative roadmap for responsible implementation.
Speakers
– Moderator – Session moderator who opened the panel and introduced the keynote speaker. [S9]
– Ajay Kumar Chaudhary – Keynote speaker delivering the opening address on AI governance in finance. [S2]
– Priyanka Jain – Panel moderator and discussion facilitator; associated with 5Money and experienced with RBI sandbox programmes. [S6]
– Sanjeev Sanyal – Economic Advisor to the Prime Minister of India; described as a macro-thinker, historian and strategic geopolitical analyst. [S4][S5]
– Praveen Kamat – Official from GIFT City International Financial Services Centre (IFSC); expertise in financial regulation, innovation and sandbox experimentation. [S3]
– Murlidhar Manchala – RBI official involved in the AI framework and supervisory guidance; contributes to discussions on risk-based controls and safe-harbor regimes. [S8]
– Vikram Kishore Bhattacharya – Cloud service-provider representative; specialist in cybersecurity, cloud infrastructure and the impact of generative AI on threat vectors. [S1]
– Audience – General audience members participating in the Q&A, e.g., Aditya, founder of First Tile, and other attendees. [S12]
Additional speakers:
– Aditya – Founder of First Tile (a customer-data platform); asked a question during the audience segment about sovereign data assets and AI stack utilization. (No external source citation available)
Opening & moderator remarks
The session began with the moderator reminding participants that the summit’s overarching aim was to treat AI governance not as a separate silo but as an embedded layer within the existing technology-oversight framework that already regulates other digital tools [2].
Ajay Kumar Chaudhary – keynote
Optimism tempered by caution & “Mano” proposal – Chaudhary opened with optimism about the four-day summit and warned that rapid AI scaling will bring both known and unknown risks that must be managed through embedded governance [4-11][13-21]. He cited the Prime Minister’s one-word summary “Mano” (humanity) and proposed that it could replace the term “responsible AI”, evolving into a “human AI” framing that captures moral, ethical, sovereign, inclusive and accountable dimensions [9-12][15].
* India’s digital public-infrastructure – He highlighted India’s population-scale public digital infrastructure such as UPI and other platforms, showing how interoperability, transparency and scale have reshaped financial participation [14-16].
* AI’s structural shift – AI is now being super-imposed on this foundation, integrating with payment systems, credit-risk platforms, supervisory frameworks and cybersecurity architectures that already operate at national scale [17-20]. This marks a structural shift: unlike earlier automation, AI introduces adaptive, learning systems that can dynamically influence outcomes [21-23].
Core question – The question is no longer whether AI will transform finance (it already is) but whether governance can keep pace and be designed into the system from inception rather than added later as a compliance overlay [24-28][30-33].
Quotes – He invoked Peter Drucker: governance in AI-enabled finance must be about “doing the right things at the right time” to preserve trust, resilience and inclusion [29-32]; and quoted Christine Lagarde: “Innovation and regulations are not adversity, they are partners in progress.” [34-36].
* Embedded governance pillars – Chaudhary defined embedded governance as integrating accountability, transparency and risk-management into every stage of the AI life-cycle – from conceptualisation and data acquisition to model development, deployment and continuous monitoring [45-52]. He distilled this into four pillars:
1. Proportionality – governance intensity should be risk-based [50-51];
2. Fairness & non-discrimination [52];
3. Explainability & transparency [53];
4. Accountability (clearly defined) [54-55].
These pillars must be embedded by design, not retro-fitted, because AI systems affecting credit access or financial behaviour cannot remain opaque black boxes [26-28][31-33].
* Risk-based governance & concrete benefits – He advocated a proportional, risk-based approach that treats AI as a systemic financial utility [45-52]. Key risk dimensions he highlighted were:
– Model integrity – ongoing validation and stress-testing across extreme but plausible scenarios [66-70];
– Operational concentration risk – the systemic danger of a few providers dominating AI infrastructure [71-75];
– Data governance – ensuring data integrity, consent, purpose limitation and minimisation [75-78];
– Cybersecurity – AI can amplify attack vectors (adversarial AI) and therefore requires anticipatory safeguards [78-81].
He illustrated the quantitative impact of AI-enabled detection, noting that in high-value payment environments (NPCI) fraud-loss reductions of 25-30% are already being realised [66-70]. He also stressed that AI accelerates compliance and broadens access and inclusion by automating routine checks and expanding service reach [67-69]. Finally, he highlighted that regulators are leveraging advanced analytics to monitor systemic patterns, identify anomalies and strengthen early-warning mechanisms [70-73].
* Inclusion, bias mitigation & “glass-box” model – AI can expand financial inclusion by providing granular, dynamic risk assessments that reduce reliance on heavy collateral and static credit histories [82-84]. However, without intentional design, AI could perpetuate structural inequalities (e.g., gender-biased data distorting credit outcomes) [87-90]. Chaudhary called for representative training datasets, periodic impact audits, community-level feedback and transparent redress pathways that turn opaque systems into “glass-boxes” for customers [91-93][204-207].
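The periodic impact audits Chaudhary calls for can be made concrete with a simple disparate-impact screen. The sketch below is illustrative only (the session prescribed no specific metric or threshold): it computes approval rates per group from decision logs and flags any group whose rate falls below four-fifths of the best-served group's, a common screening heuristic.

```python
# Illustrative bias-audit sketch: compare approval rates across groups
# and flag any group below 80% of the highest rate (the "four-fifths"
# screening heuristic). Metric and threshold are assumptions for
# illustration, not anything prescribed in the session.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> {group: rate}."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions, threshold=0.8):
    """Return groups whose approval rate trails the best group."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 5 + [("B", False)] * 5
print(flag_disparate_impact(decisions))  # -> ['B'] (0.5 < 0.8 * 0.8)
```

In practice such a check would run over real decision logs as one of the periodic impact audits, with group definitions and thresholds set by the institution's governance function.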
* Five-layer AI stack & sovereign infrastructure – He described a five-layer stack: (1) specialised semiconductor chips, (2) cloud & data-centric infrastructure, (3) large data sets that fuel the system, (4) foundation models, and (5) application-level services [99-104]. Over-reliance on foreign chips (over 90% controlled by a single firm) and a handful of cloud and model providers threatens economic sovereignty, financial stability and national security [104-106]. Chaudhary urged diversification through domestic innovation, international collaboration, consent-based data sharing and the promotion of home-grown AI entities [107-110].
* Operationalising embedded governance – He outlined concrete governance checkpoints across the AI pipeline: risk-based classification of systemic impact, independent review, auditable documentation, cross-functional governance committees, continuous monitoring with feedback loops, and consumer-centric safeguards such as transparent disclosures, clear appeal processes and human-in-the-loop interventions [112-115][124-131]. He framed trust as the strategic outcome of these measures, asserting that finance rests on confidence that systems are fair, stable and accountable [118-124][126-132].
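One way to picture these pipeline checkpoints is as a deployment gate: each model is first classified by systemic impact, and higher tiers require progressively stronger controls (independent review, human-in-the-loop sign-off) before release. The tier boundaries and control names below are hypothetical illustrations, not anything the keynote specified.

```python
# Hypothetical sketch of life-cycle governance gates: classify a model's
# systemic impact, then require progressively stronger checkpoints before
# deployment. Tier rules and required controls are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelProfile:
    name: str
    affects_credit_access: bool   # influences access to credit?
    users_reached: int            # scale of exposure
    controls: set = field(default_factory=set)

def risk_tier(m: ModelProfile) -> str:
    """Risk-based classification of systemic impact (illustrative rules)."""
    if m.affects_credit_access or m.users_reached > 1_000_000:
        return "high"
    return "low" if m.users_reached < 10_000 else "medium"

REQUIRED = {
    "low":    {"documentation"},
    "medium": {"documentation", "independent_review"},
    "high":   {"documentation", "independent_review", "human_in_loop"},
}

def deployment_gate(m: ModelProfile):
    """Return (may_deploy, missing_controls) for this model."""
    missing = REQUIRED[risk_tier(m)] - m.controls
    return (not missing, sorted(missing))

scorer = ModelProfile("credit-scorer", True, 50_000, {"documentation"})
print(deployment_gate(scorer))  # -> (False, ['human_in_loop', 'independent_review'])
```

The point of the sketch is the shape, not the rules: proportionality means the gate tightens with systemic impact, and the missing-controls list gives the auditable documentation trail the keynote asks for.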
Panelist perspectives
* Sanjeev Sanyal – Drew historical analogies, warning that first-mover advantage is not guaranteed for general-purpose technologies and that European-style risk-based regulation may become either over-restrictive or under-protective because AI’s emergent nature defies ex-ante risk-bucketing [149-166][238-250]. He advocated ex-ante accountability (“skin-in-the-game”), clear liability for algorithm creators, and compartmentalised “firewalls” to prevent systemic spill-over [170-182][242-250]. He also raised novel IP questions about ownership of AI-generated outputs, calling for a judicial framework [317-324].
* Praveen Kamat – Presented GIFT City IFSC as a “clean-slate” jurisdiction (established 2015, regulator 2020) offering regulatory “legroom” for sandbox pilots that cap risk while allowing iterative learning [192-200][263-286]. He highlighted an inter-operable sandbox linking RBI, SEBI, IRDAI and the IFSC, while noting legal constraints such as currency incompatibility (INR not permitted in the IFSC) that must be resolved [392-401][404-410].
* Murlidhar Manchala – Echoed the AI-mission report’s suggestion that firms implementing comprehensive guardrails (model inventories, bias testing, continuous monitoring) should receive supervisory relief (“safe-harbour”) [204-207][296-301]. He stressed senior-management accountability and that incident-reporting mechanisms should turn black-box systems into transparent “glass-boxes” [204-207][296-301].
* Vikram Kishore Bhattacharya – As a cloud-service scientist, he acknowledged that generative AI lowers barriers for phishing, credential theft and malicious code, but maintained that core cybersecurity principles (MFA, strong passwords, regular patching) remain unchanged [215-224]. He urged the adoption of AI-in-the-loop tools for faster threat detection, automated scanning and real-time reporting, alongside skill-building and standards compliance (ISO, NIST, third-party audits) [225-233].
Agreement & disagreement matrix
Common ground – All speakers agreed that trust, inclusion and resilience are essential for AI-enabled finance and that embedded governance is preferable to retro-fitted compliance [45-52][170-182][204-207][215-224].
Points of disagreement –
1. Risk-based regulation – Chaudhary champions a proportional, risk-based framework [50-54]; Sanyal argues that AI’s unknown risks make any ex-ante risk-bucket ineffective and potentially stifling [160-166][238-250].
2. Sandbox purpose – Kamat views the IFSC sandbox as a proactive experimental space for AI pilots [263-291]; Manchala sees the current sandbox as a remedial tool triggered by compliance breaches, though he supports expanding it to include monitoring and tooling [407-411].
3. AI as systemic infrastructure vs emergent technology – Chaudhary treats AI as a core financial utility subject to resilience standards [44]; Sanyal stresses AI’s emergent behaviours that resist traditional infrastructure regulation [242-250].
4. Cybersecurity impact – Bhattacharya maintains that AI does not fundamentally alter security fundamentals [215-224]; Chaudhary warns that AI amplifies cyber-risk, creating new adversarial threats that need anticipatory safeguards [78-80].
5. Purpose of the sandbox (expanded) – The panel differed on whether the sandbox should primarily enable innovation experimentation (Kamat) or serve compliance remediation with supervisory relief (Manchala).
Audience question & response
An audience member (Aditya, founder of First Tile) asked how India’s sovereign data assets could be leveraged for AI model development while respecting privacy and ownership [359-380]. Sanyal responded that India’s massive data pool is “new oil” and that rights to the data and the ability to process it (through domestic data centres and AI refineries) are essential for strategic autonomy, noting the recent tax holiday for data-centre investment as a policy lever [359-362].
Aditya also proposed a consent-backed API standard for data sharing and a regulatory seat for data processors. Kamat acknowledged the idea but highlighted legal incompatibilities between the IFSC’s foreign-currency regime and domestic regulations that must be resolved before a cross-jurisdictional sandbox can operate [392-401][404-410]. Manchala added that an inter-operable sandbox already exists for compliance issues, and a broader sandbox offering compute, data and tooling support is under consideration [407-411].
When asked to name an under-estimated risk, Manchala replied that risk itself is being underestimated, underscoring the need for robust governance to surface hidden vulnerabilities [296-301].
Closing remarks
Priyanka Jain concluded the session by emphasizing that AI should initially be applied to bounded problems (e.g., chess) and that the community must maintain a healthy skepticism about AI’s promises, ensuring that optimism is always tempered by rigorous scrutiny [331-336].
Forward-looking roadmap – The panel distilled the discussion into actionable recommendations:
* Adopt a proportional, risk-based governance model flexible enough for AI’s emergent behaviours [50-54][59-63];
* Provide supervisory relief (“safe-harbour”) for firms that demonstrably implement robust guardrails, model inventories and transparent incident reporting [204-207][296-301];
* Expand the IFSC sandbox into an interoperable platform for cross-regulatory AI pilots, while addressing legal constraints such as currency compatibility [263-291][392-401][404-410];
* Develop a consent-backed API framework giving data processors a voice in rule-making and ensuring privacy-by-design [381-391];
* Invest in sovereign AI infrastructure – domestic semiconductor capability, cloud capacity, data-centre incentives and home-grown foundation models – to reduce dependence on a few foreign suppliers [99-106][107-110];
* Mandate continuous monitoring, explainability audits and periodic impact assessments to detect model drift, bias and concentration risk [68-71][112-115];
* Strengthen cybersecurity by combining traditional controls (MFA, patching) with AI-in-the-loop detection, automated reporting and upskilling programmes [215-224][225-233];
* Clarify intellectual-property rules for AI-generated outputs, establishing ownership among prompt authors, data owners and model creators [317-324].
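The continuous-monitoring recommendation above can be sketched with a population stability index (PSI) check, a standard screen for input or score drift in credit models. The 0.2 alert threshold is a common industry rule of thumb assumed here for illustration, not something the panel specified.

```python
# Illustrative model-drift monitor using the population stability index
# (PSI) over pre-binned score distributions. The 0.2 alert threshold is
# a common industry rule of thumb, assumed here for illustration.
import math

def psi(expected, actual, eps=1e-6):
    """PSI between two binned distributions (lists of proportions)."""
    assert len(expected) == len(actual)
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)   # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # score bins at deployment
current  = [0.10, 0.20, 0.30, 0.40]   # same bins, this reporting period

value = psi(baseline, current)
print(round(value, 3), "ALERT" if value > 0.2 else "ok")
```

Run periodically against production score distributions, a breach of the threshold would trigger the recalibration and external-audit loop described in the roadmap.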
Unresolved challenges remain: operationalising a risk-based framework when many AI risks are unknown, assigning ex-ante liability across the AI supply chain, and harmonising cross-jurisdictional data handling between the IFSC and domestic regulators. The collective optimism, tempered by a realistic appraisal of systemic vulnerabilities, underscores a strategic imperative for India to lead a balanced, home-grown AI governance model that draws lessons from the US, EU and China while remaining uniquely suited to its digital public-goods ecosystem [126-132][317-324].
Thank you. Very much in line with the overall theme of the summit. We are looking at the overall aspect of governance of AI, but not as something that will be set aside and looked at through a different lens altogether, rather as something that can be looked at as an embedded layer of the governance we already apply to technologies. In the interest of the time that we have with us, I will request the panelists to be seated on the dais, and I will request AK Chaudhary sir to please begin his keynote.
Good afternoon to everyone. Distinguished policy makers, regulators, industry leaders, members of the FinTech community, and esteemed guests. I have been very closely following the last four days, how and what things are happening, and it was amazing: the type of enthusiasm, the type of excitement and the type of buzz around AI and this summit. I believe that whatever is there is a real thing which is happening. Possibly multiple small applications are going to come in the coming days which will solve multiple issues and problems, and we will have a real leading role to play as a country; that is the way we look at it. We will also have a great role to play on the data side, particularly when we are going to train the models. Obviously, when we are going to scale up the entire thing, then possibly there might be some risks also, and of those risks something is known, something is unknown; for the unknown, much cannot be done except that we need to take care of embedding the governance part.
That is the theme of today’s talk: how we need to embed governance across the entire life cycle of AI, into the very design of AI. That is the way we have to look at it. Yesterday I was again listening to our Honourable Prime Minister and the beautiful way he summarized the entire theme in one word called Mano, that is, humanity. So possibly in future I am going to use that instead of "responsible AI"; possibly we can talk about human AI, because it is going to touch upon moral and ethical systems, accountable governance, national sovereignty, and being accessible and inclusive. All the aspects we are going to touch upon, everything is covered in this one word called Mano.
Now coming back to my prepared address. It is indeed a privilege to participate in this dialogue at a defining moment in India’s digital evolution. Over the past decade, India has demonstrated how population-scale digital public infrastructure can drive inclusion, efficiency and trust. Systems built with interoperability, transparency and scale at their core have reshaped financial participation for millions. Today we stand at the next inflection point in that journey. A new tech layer is being superimposed upon this digital foundation. AI, artificial intelligence as we know it, is not arriving in isolation. It is integrating with payment systems, credit and risk management platforms, supervisory frameworks, and cybersecurity architecture that already operate at national scale.
This convergence of scale and intelligence marks a structural shift. Unlike earlier waves of digitalization that automated existing processes, AI introduces adaptive systems, systems that learn, recalibrate, and influence outcomes dynamically. In a country as large and diverse as India, such systems do not merely improve efficiency; they reshape access, opportunity, and systemic resilience. The question before us is not whether AI will transform finance. It already is. The more fundamental question is whether governance will evolve at the same pace as innovation and whether it will be designed into the system from inception rather than appended later as a compliance afterthought. In financial services, trust is foundational. AI systems cannot function as opaque black boxes, especially when they influence access to credit or flag financial behavior.
Governance cannot be an overlay applied after innovation has already been scaled. It must be embedded by design. As Peter Drucker observed, quote, management is doing things right, leadership is doing the right things, unquote. In the context of AI in finance, governance is not merely about technical correctness. It is about doing the right things at the right time, in ways that preserve trust, resilience, and inclusion. Now, looking at AI as an infrastructure tool, it has evolved from analytical assistance to shaping financial outcomes. In credit markets, machine learning models analyze transaction histories, behavioral signals, and dynamic cash flows to generate granular borrower assessments. In fraud prevention, AI detects anomalous activities within milliseconds, processing volumes beyond earlier systems. AI-enabled detection can reduce certain categories of fraud losses by up to 25 to 30 percent at this point of time in high-value payment environments, as we are witnessing in NPCI.
Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to emerging threats in real time. The diffusion of AI across the financial value chain enhances efficiency and precision. Yet, when models operate at a systemic scale, even marginal inaccuracies can produce material consequences. In finance, where stability and trust are public goods, the tolerance for systemic error is limited. India’s financial system adds its own complexities: its scale of digital participation, linguistic diversity, demographic heterogeneity, and income variability all heighten model risk. Algorithms trained on narrow, urban-centric or historically skewed data sets may inadvertently misclassify, misprice or exclude the very segments that digital finance is intended to integrate. It is therefore imperative that we do not view AI as a peripheral tech enhancement.
It must instead be understood as a component of financial infrastructure which is systemically relevant and should be subject to the same standards of resilience, governance and accountability that we expect of any critical financial utility. When we talk about embedded governance in AI: historically, regulation in financial services has often responded to innovation after risks have materialized. Governance in the AI era must however be embedded into system design. Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life cycle, from conceptualization and data acquisition to model development, deployment and ongoing monitoring. It rests on several foundational pillars. I will mention four. One is proportionality, that is, the governance intensity should be risk-based.
It should be risk-based intensity. Second is fairness and non-discrimination. Third is explainability and transparency. And fourth is accountability, which must be clearly defined. While institutions may collaborate with tech providers or leverage shared infrastructure, responsibility for outcomes cannot be outsourced. Given the potential vulnerability of the AI systems that shape their operations, board and senior management must understand their logic, limitations, et cetera. Further, and more importantly, in financial AI, algo efficiency should not compromise equitable opportunity. Now, coming specifically to financial infrastructure and the risk-based approach to AI governance, I will just touch upon this. A risk-based approach to AI governance acknowledges that innovation and prudence are not opposing forces. They are complementary. Financial authorities globally are converging on principles that emphasize robustness, resilience, transparency, and human oversight.
India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsibility. The objective is not to slow innovation, but to ensure that systemic risk does not accumulate invisibly. Several risk dimensions deserve particular attention as AI becomes integral to financial systems. There may be multiple issues; I will touch upon only four. One is model integrity. It can no longer be viewed as a one-time validation exercise. Intelligent systems must be evaluated across economic cycles and stressed against extreme but plausible scenarios. As data patterns evolve and models recalibrate, continuous oversight becomes inevitable to guard against drift, unintended bias, or reinforcing feedback loops. Second is operational concentration risk, which I will detail subsequently. It is an emerging systemic concern.
Diversification and resilience planning are essential to safeguard continuity. Third, data governance, through data integrity, consent management, purpose limitation, and minimization principles, is foundational. Financial data is not merely transactional. It reflects livelihoods, behavioral choices, and economic participation. And the fourth item is cybersecurity risks, which are amplified in the AI environment. As AI strengthens defense mechanisms, it can also be leveraged by adversaries. Institutions must anticipate adversarial AI and strengthen defense and detection capabilities accordingly. A risk-based framework recognizes that governance cannot be static: systems that learn and evolve demand oversight that is equally dynamic, as well as measured, proportionate and forward-looking. Now, just touching upon supervisory intelligence: as AI permeates financial institutions, supervisory frameworks are also evolving. Supervisors increasingly leverage advanced analytics to monitor systemic patterns, identify anomalies and strengthen early-warning mechanisms. This creates a reciprocal dynamic: institutions embed AI in operations, while oversight bodies integrate intelligence into supervision. However, governance cannot be regulator-driven alone; institutional capability is critical. AI literacy at the board and senior management level is no longer optional. Leaders must understand model architecture, validation methodology, vendor dependency and ethical implications. Effective governance requires interdisciplinary capability, bringing together tech, risk, compliance and legal experts as well as business leaders. Institutions that integrate AI governance into their ERM framework strengthen resilience. Christine Lagarde has noted, innovation and regulations are not adversity, they are partners in progress. That partnership must guide the embedding of AI within finance. Coming to the inclusion part, which our Honourable Prime Minister has mentioned as the last A in MANO, that is access and inclusion: India’s financial transformation has been anchored in inclusion. Over the past decades, tech has lowered barriers, reduced transaction costs and brought millions into the formal financial ecosystem. AI now offers an opportunity to deepen that trajectory through granular, dynamic risk assessment. It can reduce reliance on collateral-heavy models and static credit histories.
Transaction-level data, cash-flow analytics and behaviour indicators can provide more nuanced insight into repayment capacity, particularly for MSMEs who are presently outside the traditional credit framework. India is expected to account for a significant share of global digital transaction growth this decade. If harnessed responsibly, AI can convert this expanding digital footprint into broader formal access to fair financial services and adoption at scale. Yet inclusion cannot be assumed. It must be intentionally designed. Algorithms trained on historically skewed datasets risk perpetuating structural inequalities. Informal-sector income volatility and gender-based data gaps may distort credit outcomes. Without corrective safeguards, technology may reinforce rather than reduce disparities. Inclusive AI thus requires representativeness in training datasets, periodic impact audits, and community-level feedback mechanisms.
It calls for institutional mechanisms that allow individuals to seek clarification and redress where automated decisions affect their financial standing. Now coming to sovereign and resilient AI foundations. AI governance intersects not only with institutional risk, but with strategic resilience. Concentration in advanced chips and foundational AI models raises critical considerations for economic sovereignty, financial stability and, I can further add, national security. Dependency on limited supply chains can create systemic vulnerability. If we look at the AI stack more granularly, it rests on five interdependent layers. At the base are specialized semiconductor chips, as we all know. Above this sits the cloud and data-centre infrastructure that provides scalable processing capacity.
And these systems are fueled by vast datasets drawn from public and proprietary sources. On this foundation operate large foundation models, adaptable across domains; and finally, at the top, are applications that embed AI into financial services and everyday economic life. In this context we should be conscious of the fact that one firm controls more than 90% of advanced chips, three dominate cloud capacity, and a handful command foundation models, threatening financial stability and economic sovereignty. We must therefore diversify supply chains to the extent possible, through domestic innovation and international collaboration, to secure resilient AI foundations. Further, if you look at the pathway for ecosystem scaling, possibly we have to look at consent-based data sharing, shared AI and risk infrastructure, investment in AI literacy and governance at all levels including board and senior management, and most importantly, encouraging homegrown tech and AI-capable entities.
It may be appreciated that an India-first approach is not inward-looking. It is context-aware. It ensures that governance reflects local realities while remaining globally coherent. Now coming to the operationalization of embedded governance: it may involve multiple issues, but I am touching upon five or six. One, lifecycle-based model governance: institutions should embed governance checkpoints from data acquisition to deployment and post-deployment monitoring. Obviously, a clear risk-classification framework based on systemic impact, with independent review and enhanced oversight where warranted. It should be auditable, and documentation should be there. A cross-functional governance committee will be helpful, no doubt, as will continuous monitoring and a feedback loop that helps in periodic recalibration by way of external audit.
Consumer-centric safeguards, obviously by way of transparent disclosure, clear appeal processes and human-intervention mechanisms, are critical to maintaining public trust. These pathways ensure that governance is not episodic but embedded within operational DNA. Now, just before concluding, I will touch upon the role of India in AI and trust as a cornerstone of financial AI. Finance rests on confidence that systems are fair, stable and accountable. Depositors trust institutions to safeguard assets, borrowers trust systems to assess risk fairly, and markets trust transparency and stability. AI has the potential to enhance this trust by improving fraud detection, accelerating compliance and broadening access and inclusion. But if governance is inadequate, AI can erode confidence rapidly.
Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with public interest. And trust endures when leadership anticipates risk rather than reacts to failure. India stands at a pivotal moment, working across all five layers of the AI stack and demonstrating the ability to deploy applications at population scale. It is shaping a global agenda for inclusive AI. The convergence of digital infrastructure, regulatory foresight, and entrepreneurial innovation offers a chance to show that scale and safety can coexist and governance can catalyze innovation. Coming to the conclusion: artificial intelligence will shape the next chapter of financial services. But tech alone does not determine outcomes. Institutional design does. Design choices, governance frameworks and institutional culture will determine whether AI strengthens financial resilience and inclusion or not.
Embedded governance is not a regulatory burden. It is a strategic imperative. It ensures that innovation is sustainable, trust is preserved and system stability is protected. If we embed fairness, transparency, accountability and proportional oversight into the architecture of financial AI from inception, India can chart a distinctive path, one that aligns tech ambition with ethical responsibility. Let us approach this moment not with hesitation but with disciplined foresight. Let us ensure that as our financial systems become more intelligent, our governance becomes more robust, our oversight becomes more anticipatory, and our commitment to inclusion more resolute. In doing so, we will not only harness the power of AI, but also shape it to serve the broader goals of stability, opportunity and shared prosperity. Thank you.
Thank you, sir. That was very insightful and sets the context for the panel discussion to follow. We could also request you, if you would want, you could join us in the audience. That would be great. Over to you, Priyanka, for introduction to the panelists and then taking this discussion forward.
Thank you so much. Thank you. Our panelists need no introduction. I'm going to keep it very fast so that we can make the most of capturing their thoughts. First, I have with me Mr. Sanjeev Sanyal. Sir is the economic advisor to the Prime Minister; he's in the Prime Minister's office and he needs no introduction. If I actually go by what AI has given me as his persona, AI summarized it as a macro thinker, a historian of structural cycles, and a strategic geopolitical lens. Fortunately, today we have the OG himself in the room. And without any further ado, I want to ask him my first question. Historically, countries that have mastered general-purpose technologies, right from the steam engine to early electricity to the Internet, have gained outsized economic advantage.
Is AI that inflection point for India? And if so, does early, well-designed self-governance accelerate trust, or does it deny us competitive momentum?
Yes, it is important that you are engaging in it, but let me point out that it's not always the first movers who benefit from it, and it's not the case that even those who invent these technologies know where they're headed. Just to give you an example, the European Renaissance, which led ultimately to the Western domination of the world for half a millennium, was based on three technologies. One was the printing press, the other was gunpowder, and the third was mathematics. The first two were invented by the Chinese and the third was invented by the Indians, but it is the Europeans that took them, owned them and dominated the world. So, one important thing to recognize in all of this is: do not necessarily try to guess where this is headed.
But of course, we need to engage in it. We need to engage in these technologies and build on them. Otherwise, you know, somebody will take your technology and dominate you. So it is very, very important that India does participate in this AI revolution. But again, in this context, let me say, that does not mean that we should spend time trying to work out exactly where this is headed. For example, when the social media revolution was happening 20 years ago, when Facebook and all these things came about, the marketing tool of the people at that time was, see, now everybody can talk to everybody, we will all move to the golden mean, because we will all have similar views, because we can all talk to each other, and so on.
But in fact, the algorithms went out of their way to put us in buckets and echo chambers. So social media ended up doing exactly the opposite of what the technology experts were telling us it would do. Now, why does this apply to AI as well? Here I am going to talk about this risk-based thing that everybody is talking about. Let me tell you that you cannot actually put AI, or any type of AI, into any real risk bucket, because this is an emergent, evolving thing, even more so than social media. Consequently, if you are saying "I am going to do risk-based", it means that you have some assessment of where that thing will go, and I am telling you that it is almost impossible to do this. So, for example, in my view, consider the European way in which they are going about this; they are the pioneers of risk-based systems. I understand it is pretty obvious that you don't want AI to take over our nuclear buttons, but other than that, the risk levels of most other things are utterly unknown. This is a bad thing, because something totally innocuous might go and blow up the whole system. These things are emerging, they are evolving, they are interconnecting. Therefore I actually do not think a system that is largely based on perceptions of risk will work, because it is not possible ex-ante to work out what is dangerous, or for that matter what is beneficial. Now, what should you do if you can't tell what is going to happen? I am telling you the European system is either going to strangulate the sector by being too stringent, or it will open things up because it wants progress, but ultimately the risk-based system will not be able to take control of it. The other model that is there is that of China, which is "the state knows best". But we know from the experience we had with the Wuhan virus that the state can very often lose control of things
that are happening and it can spiral out.
The third model, which is mostly the American model, is to have laissez-faire and let anybody do whatever they want. Now, the dangers of that are obvious. In my view, the way they control it is through tort laws, i.e., if something goes badly wrong, you will then end up with a billion-dollar fine or something like that. So in some ways it works better, because it's an ex-post rather than an ex-ante system. It depends on those who are running the system having skin in the game, i.e., your company will go down, you will be jailed, and you will have a billion-dollar fine on it if things go wrong. That is how they are doing it.
It's an ex-post punishment. But as you can tell, that is in some ways an ex-post system, and if something really bad goes wrong, you can only punish the person after the horse has already bolted. So all these systems have their downsides, but I'm just telling you that whatever system we design in order to control this has got to be based on being agnostic to how this whole thing works going forward. Now, I know I'm taking up time, but give me a minute. There are other systems that we manage where we have no idea where they are going. Take, for example, the stock market.
You and I don't know where the stock market will be in a decade's time. It's a complex system, just like artificial intelligence, but we manage it. How do we do it? Well, we do it by creating a framework which does the following things. First, it institutes audits and enforces transparency and explainability: if you can't explain your accounts, you can't be in the stock market. Two, it has systems of shutting things down when things go wrong; every stock market will shut down when things spiral out. Three, it deliberately creates systems of separation: for example, the same company cannot be a bank as well as play certain other roles, because there are conflicts of interest. In the same way, AI will need to create compartments. I am personally very suspicious of any idea of the internet of everything and the AI of everything; that would be a disaster. I think we need to be willing to allow compartmentalized AI. It will be more efficient anyway from an energy perspective, but I think it's also safer. And most importantly, you need to create skin in the game, i.e.
ex-ante tell people who will be held responsible when things go wrong. So, in the case of financial markets, the directors of the company, or the CEO, are the ones hauled up when things go wrong. In the case of AI, we will have situations where, when things go wrong, the person who made the algorithm will blame the data, the data guy will blame the company, the company will blame the user; all kinds of things will happen. We need to decide ex-ante who in the system will be hauled up when things go wrong. That will create skin in the game. We cannot wait for something to go wrong and then decide; we need to decide this ex-ante.
So, all of these things already exist in the case of financial regulation. I personally think a similar system can work here.
Rightly put: technology moves fast, but trust takes time to build, and compartmentalization is a great way to de-risk in some form, and also to look at it with a focused agenda and attention. With that, we can bring in Mr. Kamath. Mr. Kamath is from the GIFT City IFSC, a compartmentalized global financial hub, in a way, that India has created, and we are very fortunate to have you here, sir. GIFT City actually operates at a unique intersection of innovation and global credibility. It competes with the likes of Singapore, Dubai, London. Can GIFT City become a lab for AI governance? We wanted to know your view, sir. And it is a great segue from Sanjeev sir on how we can look at it differently, in a compartmentalized manner.
See, if you see GIFT IFSC as a jurisdiction, it was set up in 2015, so it's just 11 years old. We are building it up from scratch. Now, when you build something from scratch and when you have a brand-new regulator, like the IFSCA which was created in 2020, you start with a clean slate. That means you have more legroom and more space to experiment. We don't have the baggage of legacy systems. If you see the way we have evolved over the last six years, the way regulations have evolved, we have all the verticals across finance: capital markets, banking, insurance, pensions. And we have introduced new verticals: ship leasing, aircraft leasing, ancillary services and so on.
You know, in line with all the global financial centers. Now, with respect to experimentation: when you use the word "lab", you imply experimentation. The appetite for experimentation and for taking risks is much higher here than for, say, domestic regulators or regulators overseas, because of the absence of retail investors. So yes, GIFT City has an immense ability to come across as a lab for AI governance. However, building a financial center is like a 45-kilometer marathon; it's not an 8-kilometer dream run. It will take its time. We are on a growth trajectory, an upward trajectory, and there is a certain gestation period for every financial center. That gestation period cannot be skipped, and we are in it. Once we reach critical mass, we are going to see a lot of things happening and coming out of GIFT IFSC.
Thank you. Actually, I will go to Murli sir. The RBI FREE-AI report, the framework for the responsible and ethical enablement of AI, I think is very forward-looking. It actually builds on existing regulatory controls and architecture to bring in a principle-based AI ecosystem. So my question to you is: if a company has embedded robust controls, model inventories, bias testing, continuous monitoring, should regulators reward such companies with calibrated supervisory relief? In other words, is there a safe harbor for somebody who has put in risk-based controls but has been a first-time defaulter?
Yeah. In fact, in the same report it was suggested that entities which put in place all the guardrails, and then, in case of any lapses, do the root-cause analysis and try to address the problem, should get a lenient supervisory approach from the regulator, rather than the lapse being seen as an overarching risk. That is something which we recognize. So on both fronts: one, we understand the technology is probabilistic and can have lapses. But in terms of governance, if you put in the guardrails, the processes, the mechanisms across the lifecycle to see that the customer doesn't face the risk, that is the main focus: the customer. It should be transparent to the customer; it should not be a black box, rather it can be a glass box, and it should be understandable to the customers. So once all these measures are taken into consideration by the entity, in terms of governance as well as processes, we understand that because of the nature of the technology it can presently lead to some aberrations. But as long as it is handled through the right process, you have these incident-reporting mechanisms, you have the manual override. Once you have these controls and the right approach, supervision should not treat it as a systemic or greater risk; rather, you should allow a first-time lapse. And in terms of rewarding it, we also suggested that there would be an award for AI in finance, particularly where specific work is done in terms of
Thank you. I think, Vikram, your vantage point here matters because you are a global infrastructure player. You are seeing regulatory trends across the US, UK, Singapore and many other markets. You heard how the panel has been shaping, right from the policymakers to the international financial center and also the RBI. We want to know, as an infrastructure provider, how are you looking at cybersecurity and its evolution in the age of generative AI?
Thanks so much, Priyanka. I would just make one correction: we are a cloud service provider and not merely an infrastructure provider. I think one of the things is, for good or for worse, we've seen the benefits of generative AI, but we're also seeing bad actors use generative AI for phishing attacks, for credential attacks, for malicious code. So with the good come the challenges. But one of the more important elements is that while it's serving as an accelerant to existing methods, I don't think it's foundationally changing the nature of the attacks. And, in fact, there was a report that came out in 2025.
It talks about how generative AI has lowered the barriers for a lot of these threat actors. But I go back to what I said: because it's not foundationally changed, the same principles and the same foundations of cybersecurity that held true before GenAI still hold true. So, you know, multi-factor authentication, strong passwords, regular updates, scanning your systems. And I think it is imperative for organizations, especially in financial services, who are always being attacked. India is a country where not only the banks but a huge citizenry with different levels of financial literacy are exposed, so how do you use these tools to actually safeguard the financial system? In that respect, a lot of kudos to the RBI for thinking about it on these principled lines, but also to the banks for actually leveraging these technologies. One of the things you must always do is trust service providers like us, but banks should also verify, and that is done through standards like ISO or NIST, and through independent third-party reports that validate the various controls that are there. And, a point I was making a little earlier, you have to become an active participant in cybersecurity; no longer can you be a passive passenger in it, because the landscape is changing, and as more and more people are digitizing, so are the people who are willing, and looking, to attack any vulnerability.
So GenAI does provide you with the tools, because, again, I'm a believer in not just human in the loop, but having AI in the loop. How do you use these technologies to have faster responses? How do you automate scanning? How do you automate getting reports, to be able to make those value judgments at the right time? That requires skilling. Again, that requires awareness, not just about something like AWS or the cloud; the work that regulators as well as cloud service providers are doing through awareness programs ensures that the more people understand the technology, the better the framework and the groundwork will be for them to adopt.
Thank you.
I would also refer to our earlier discussion this afternoon, wherein rather than AI thinking about a human in the loop, humans think of AI as being in the loop to move forward, and I think that was a great paradigm shift that we can look at. Sanjeev sir, I am going to come back to you, but I also want to give a backdrop to this question. India has never simply adopted technology: we have created it, we have adapted it, we have scaled it and we have governed it in our own way. We did it with identity, we did it with payments, and we did it with digital public infrastructure. The governance frameworks around AI are beginning to emerge, and they are also diverging globally, with the US being innovation-led, the EU compliance-led, and China state-led. Where is the axis on which India is going to strategically position itself, and how are you looking at it from your lens?
So I
think I will continue from what I was saying earlier. Now, we need to be very, very careful that we don't end up with a bureaucratic risk-based system. This is an emergent technology. It will evolve in all different ways, and we'll have to be very, very creative about this. Now, there is a difference between, say, systems as architecture and AI as an emerging thing. It's not just infrastructure in the sense that, say, you can think of UPI as infrastructure, or digital identity as infrastructure; that doesn't in itself have emergent behaviors. AI has emergent behaviors, i.e., it evolves and interacts with other forms of AI, which is why I said you need to be fundamentally suspicious of anybody who says they have a very clear idea where this whole thing is going.
We don't at all have a clear idea. Nobody on the planet has a clear idea where it's going. So we do need some regulation. We need to be very, very careful about having humans in the loop. As I said right in the beginning, you need to have switch-off buttons; you need to create what are called, in finance, Chinese walls, which separate different tracks. As I said earlier, I am not a huge fan of the AI of everything. I think that's dangerous and will lead to bad outcomes. However, AI can be run in compartments rather well, and why don't we use that? In any case it uses less energy, and in any case it is better at solving bounded problems. When you give AI an unbounded problem it tends to hallucinate, because unfortunately it has learnt another human trait: it doesn't like to tell you "I don't know"; it would rather make up stuff. Consequently, I think it is better that we give it bounded problems, let it solve those bounded problems and get back to us. Going for this AI or internet of everything, where everything is interconnected, sounds very good, but just last July, or the July before that, we saw what happened when one very small piece of code in a Microsoft-related program, which was, by the way, static, not even a fluid one, went wrong, and you ended up with havoc in airports, ATMs, all kinds of things around the world. Now imagine the same thing happening in a system that has emergent characteristics: by the time you fix one bit of it, it has flowed into some other part of the system. So I personally think we need to create firewalls. You know, a forest fire is also an emergent thing, and the way we control it is not by predicting where the fire is coming from and where it will go; we just have these firewalls. We do that in finance all the time: we don't try to work out what the conflict of interest is, we simply ban situations where conflicts of interest will emerge. And the same thing is true of skin in the game. I think we need to ex-ante work out where in the
chain the responsibility lies. I personally think it should be done at the level where the algorithm is made public for use: whoever is making it is responsible, and even if their data is wrong, they cannot blame the data. Somebody else may disagree; whatever the case, the point of the matter is that we need to have very clear points of punishment when things go wrong, and we need to have audit systems for explainability. There is nothing very deep about this; after all, every company listed in the stock market has to get itself audited several times a year. Why can't we ask major AI companies to be audited?
If you cannot explain why your results are turning out the way they are, too bad, you shut it down. We do that even with relatively small companies: they have to go to a chartered accountant several times, and the chartered accountant has to sign off. Maybe we have a chartered AI audit for anything that goes beyond some threshold. And given how potentially dangerous it is, and lucrative as well, I don't think we should see this as a problem. That is rather than doing what I think many others say: they understand it's dangerous, but they say, why don't we have risk-based regulation? Now, ex-ante, you cannot work it out. All you will do is end up with regulations that become just too stringent and will kill the sector.
Rather, along the way, you have a system of explainability audits. With that, let me hand it over.
Mr. Kamath, I'm going to come to you. Economists worry about both under-regulation that creates instability and over-regulation that kills dynamism. Where do you see GIFT City? Because, again, it's at an intersection of the local and the global, and I want to hear your views on it.
That's the problem facing all regulators worldwide across the financial sector. Over-regulation repels innovation; under-regulation repels serious long-term capital. So where do you draw the balancing equilibrium point? Let me explain it with a simple example. I joined SEBI, the Securities and Exchange Board of India, in 2008. I was posted to the surveillance department. In 2008 itself, the financial crisis was in full flow. In our surveillance systems, which are very, very powerful systems, we noticed 1,000 orders being entered in a span of a couple of microseconds. We were wondering how this was possible: how can a human enter so many orders? Then we came to know that algorithmic trading terminals had been deployed by certain entities in the stock market.
When we dug deeper, we came to know that it was initially deployed in 2004 by one entity, and then slowly the volumes were increasing. I mean, it didn't reach a critical point, but they were slowly increasing. In 2010 the inflection point came, when it reached critical mass. SEBI came up with guidelines to safeguard retail investors and to preserve financial stability. So here is a perfect example where an innovation in the capital market, algorithmic trading, was deployed by entities for a good six years. It was not regulated, it was being used, and the regulator didn't do anything to stop it. But when the regulator issued the guidelines, the necessary safeguards were put in place.
However, at the same time, there were no brakes applied on the rollout of the innovation. So algorithmic trading, even after the guidelines, grew exponentially in the Indian capital market to where it is today. In the same manner, we hope to facilitate innovation in GIFT IFSC. We have sandboxes in place for startups as well as established entities. They can roll out their AI pilots in the sandbox. The goal is to cap the risk. Like sir said, it's very difficult to identify all the risks. But whatever possible risks can be identified, let's cap those, without going into the technical mechanics, the internal mechanics, and then see how it flows. Based on the data that you receive in the experimentation, the regulations can be tailored accordingly.
Thank you.
I know we are at time, but because I have such a prestigious panel I'm going to still extend by another few minutes. Can I come back to you with a quick rapid fire? If you could tell us one risk that we are underestimating when it comes to AI. No, in
general we would not like to talk about risk; that is our approach. Our keynote speaker, Ajay Chaudhary, was also at the helm when the department was formed. So the risk is maybe underestimating risk itself; that is what I can say. That can be addressed only through governance, particularly in the present emergence of the technology. Actually, I
like what Sanjeev sir was telling us: it's never going to be risk-free, but we'll have to move forward. We'll have to figure it out, and do it in as compartmentalized a manner as possible. So, any risk that we are overestimating? Anybody from the panel who wants to talk about a risk that we are overestimating? Let's give Vikram a chance. I
mean, I think the fundamental nature would be that there is no zero risk; it's how you equip yourself to handle risks. Because a point that Mr. Chaudhary and Mr. Sanyal also made is: as a regulator or a regulated environment, how do you create the tools to be nimble, to adapt as the technology adapts? That is the important element. Right now the tools are there; there is so much we can do that maybe we're not doing as well, so maybe we can focus very well on the here and now, and equip ourselves to be nimble enough to deal with anything that comes. Because anybody who's telling you what's coming with a certain amount of certainty, I take that with a pinch of salt.
I think the future is a little unknowable at this point of time, but there is so much that is known, and we should be able to tackle that right now. I
think that's great. Sanyal sir, I'm going to come to you again. One reform that India must prioritize: what is your view on it? That's
Copyright law. Who is the owner of a particular innovation? At which point do you call it an innovation? And is that innovation owned by the person who put the prompt in? Is it owned by the person on whose data it got trained? Or it belongs to the algorithm that created that innovation? So all of these I would say that we need to begin to think of a judicial system that can deal with these kinds of problems. We already have a cloud judicial system. But do remember that these very different kinds of, and I would almost call them philosophical problems, are going to turn up at our doorstep very, very quickly. And we need to be thinking about them.
Thank you. When UPI came in, about a decade ago, and we have the benefit of having the NPCI chairman himself in the room, I think it was more than payment; it was trust in an invisible system. Today AI is becoming that invisible system, sitting quietly in our credit-underwriting decisions, our onboarding flows, grievance redressal systems, even regulatory reporting. So it was a great discussion on how we embed trust in an AI system that is fast evolving, because at the end of the day we are thinking about the theme of the summit, which is people, planet and progress, all in the same breath. People: how do we protect them from opaque systems or bias?
Planet: how do we scale sustainably and responsibly? And progress: because it does not have to be only fast innovation, it has to be fair innovation. A lot of great thoughts came out of the panel discussion today, and I am extremely grateful to everybody who made time for it. Sanjeev sir, could we have some closing thoughts from your side?
Well, you mentioned trust. Let me say that while it is fair to trust UPI, as I said, it is, relatively speaking, not an emergent system. Deliberately so, in fact: you do not want UPI to be innovating on the interface. It can innovate at the back end however much you want, but you do not want any surprises.
I send somebody 100 rupees and he gets 120 rupees or 80 rupees, or on average he gets 100 rupees? That cannot be the basis of UPI. In that sense, the UPI-based system is backbone infrastructure; it is deliberately not emergent. But AI systems are emergent. They can give you different answers at different points in time depending on what they were trained for, what the context is, and what inputs you have, and in fact that is the innovation. If you fix it in a box to start with, you will not get the innovation. But on the other hand, if you give it something open-ended, yes, presumably it will improve, but sometimes it may deteriorate, and sometimes it may lie to you. So what I am trying to say is that in the case of artificial intelligence, we should use it, but we certainly should not trust it. In fact, its future is based on a certain level of skepticism, healthy skepticism, that we must have about its capabilities. It will do amazing things, but in my view we should be clear that it is probably much, much better at solving bounded problems. It can play chess, for example, very, very well, but I doubt it can plan your career; that is an unbounded problem. If that is how you think about AI, then what you need to do is, as I said, begin to think this through in terms of how you apply it in particular boxes, where there is a clear set of things you are trying to do.
So as I said, bounded problems and even there, verify.
With that, we have audience questions. We have one question from Aditya, the founder of FirstHive.
Thank you. Good evening. That was an incredible set of points that came up. I actually made some really interesting notes about the capital-markets parallel that you drew, Sanjeev. I thought that was a really interesting way of looking at AI, and having been to so many summits, I think this is a very interesting way you have put it, about risk and ex-ante versus ex-post. I had one question for you, and two suggestions or requests for Praveen and Davis. From an AI-stack perspective, every summit, every conversation across different countries, is looking at all the different components of the stack. And there are two things that come up in most of these conversations, which are around sovereign data assets and the leverage that comes out of them in terms of tools, models, and so on.
Where is India's perspective in all of this, from sovereign data-asset utilization to model leverage? Different countries are looking at their stack as their stack, to which they will give you access, and so forth. So that is something on which it would be great to get your perspective.
So obviously, India, with its very large population, has stacks of information on all kinds of things, from health to consumer behavior, et cetera. In some ways, this is a good place, with a huge amount of data, for experimentation on human behavior and so on. But of course, if data is the new thing, the new oil, we need to be clear that we own the rights to it, if it is our data. I am not even getting into the privacy issue; I am assuming here that all of that has been taken care of and we are using anonymized data. But even then, we should at least have the rights to that data, and also to some part of the processing of it. There is no point in saying that we have the data but we neither have the rights to it nor the oil rigs to pump out, or the refineries to process, this new oil. This is the context in which, as you may have seen in the latest budget, we announced almost a quarter of a century of tax holiday for putting up data centers in this country. That is not a trivial thing to do. Why are we doing it? Basically because, as I said, data centers are the oil rigs of this new kind of oil.
And then, of course, we need new companies that will process this oil. We have created one, an LLM, but frankly, everybody gets very excited about LLMs. The LLM is only a very limited and, in my view, not even the most interesting usage of artificial intelligence. It just happens to be linguistically talented, and consequently we use it for that. But there are many, many more interesting uses of AI. And as I keep stating, we need to create an ecosystem, and we all say, oh, you need half a trillion dollars of investment to create it. Actually, no. Much of where you will end up with the use of these refineries, so to speak, will be quite bounded problems in certain spaces.
So there is more than enough space for startups with much more modest budgets to do interesting things in AI. And I am not just talking about people building use cases on top of other people's models; I am talking about literally bottom-up uses of AI. So I think there is a lot to be done here. It is an open space. This is basically like discovering the Americas. Yes, Spain did have an initial starting advantage, but the greatest empire in the world was actually built by Britain, which was a late starter. So there are many, many countries in the world who you would not think of today as particular players in this game, who will also turn up here.
And one of them could do much, much better than the guys you think are at the cutting edge today. So this is an emergent situation, and all kinds of unintended consequences and uses, positive and negative, will come out of all of this. I think the key here is to be nimble, keep your eyes open, including on the regulatory front, and not have set ideas about where this whole thing is headed, because, frankly, we do not know.
No, thanks for that. You know, I am the founder of FirstHive, which is a customer data platform. We work with a large number of enterprises on data, all consented, so we get a ringside view of the application of everything you are describing. And this leads me to the suggestion.
As a supplement, we have AIKosh, which is a growing repository of datasets. And for the financial sector as well, we are looking to aggregate, starting with synthetic data, and then perhaps take correlated data from the regulated entities, with their consent, so that it can come into use.
Okay, awesome. That actually goes towards my suggestion for the two of you. Praveen, when you spoke about the sandbox from an IFSCA perspective, I think the ability to extend that beyond IFSCA to the other regulators would be very interesting, at least for folks like us, because we work with a number of entities that cut across different regulators. An associated point is that so many regulations come in today, and I see two opportunities.
One is that different entities interpret the regulations differently. Second, as a large data processor, not a data owner but a data processor, we are one of the stakeholders in that whole process, and today we may not have adequate access or a seat at the table from a regulatory-interpretation standpoint. There, I think, is an opportunity to define something like a consent-backed API for data consumption, for example, with a regulatory definition of it shaped with participation from a data processor like us. We would love to see whether there are processes that allow somebody like us to engage with the regulators.
We are open to that idea, but you have to remember one thing. IFSCA is a separate jurisdiction; it has its own set of rules, which are different from domestic India's. There is an interoperable sandbox mechanism in place between IFSCA, RBI, SEBI, and IRDAI, so a solution that spans the four regulators can be tested within the sandbox. But the issue is not technological, and it is not fiscal or financial; it is legal. For example, in India, INR transactions are the norm, right? In IFSCA, INR transactions are not permitted; 16 foreign currencies are enabled, and you have to transact in those 16 currencies. So if your solution is not compatible across these areas, just to give you an example, then the sandbox experimentation will not go through.
There are many more nuances like this which affect the rollout of pilots within the interoperable sandbox; that is just one example. With respect to the movement and processing of data, I will not comment at the moment, because there are certain things in the works at IFSCA, so I will leave that to my RBI colleague.
So, as my colleague said, we already have an interoperable sandbox across regulators, and it is on tap. Earlier it was theme-based, but now it is on tap, and any type of product can be tested in the sandbox. But just to clarify: an entity comes to the sandbox only when it feels that an existing product or service may be violating one of the regulations. So very few entities come to the sandbox, because in general, if they feel they are compliant with the regulations, there is no need to come. But we are also thinking of another sandbox where we provide more than monitoring of the regulation: we can support the innovation in terms of, say, compute, data, or tools. That is also in the thought process.
We have been one of the beneficiaries of the sandbox and the hackathon at 5Money, and the process has been phenomenal, the way the RBI fintech teams engaged, so maybe, Aditya, I can share some notes with you offline. But thank you; this has been a phenomenal panel and a great discussion on embedded governance. As AI makes space for itself in all things financial services, how do we make space for governance in AI? That was the theme of the discussion, and I am very pleased to have heard the views of this panel and grateful to everyone for making time. Thank you, everyone.
[Applause]
I am actually not going to say anything more, apart from thank you, and we will have a quick presentation of the mementos from the IndiaAI Mission, which my colleague Kriti will do.
[Applause]
Thank you.
Event“India’s population‑scale public digital infrastructure such as UPI and other platforms, showing how interoperability, transparency and scale have reshaped financial participation.”
The knowledge base notes that India’s digital public infrastructure emphasizes trusted, interoperable, and scalable systems that transform financial inclusion, confirming the report’s description of UPI-style infrastructure [S29] and the broader DPI discussion [S42].
“AI is now being super‑imposed on this foundation, integrating with payment systems, credit‑risk platforms, supervisory frameworks and cybersecurity architectures that already operate at national scale.”
Sources describe AI being applied to payment networks (e.g., MasterCard’s AI use in payments) and AI’s role in digital public infrastructure, providing context that AI is being layered onto existing financial and cybersecurity systems [S106] and within India’s DPI ecosystem [S42].
“Risk‑based governance treats AI as a systemic financial utility.”
A dedicated discussion on a risk-based AI policy for the banking sector confirms that a risk-based, systemic approach to AI governance is being advocated for finance [S1].
“Embedded governance pillars: proportionality, fairness & non‑discrimination, explainability & transparency, accountability.”
The knowledge base lists fairness, non-discrimination, and the need for governance frameworks that include accountability and transparency as core AI governance concerns, aligning with the reported pillars [S103] and the broader ethical AI discussion [S102].
“The moderator reminded participants that the summit’s overarching aim was to treat AI governance as an embedded layer within the existing technology‑oversight framework.”
Opening remarks from the AI Policy Summit emphasize shaping governance to be inclusive and integrated across the technology landscape, providing contextual support for the claim of an “embedded layer” approach [S98].
The panel shows strong consensus on the need for trustworthy AI, inclusion, and interdisciplinary governance, but diverges on the suitability of risk‑based regulation, the classification of AI as infrastructure, the design of sandbox mechanisms, and the extent to which AI reshapes cybersecurity. These disagreements reflect differing views on regulatory flexibility versus predictability and on the balance between innovation and systemic risk.
Moderate disagreement: while participants align on overarching goals, they propose contrasting regulatory tools and conceptual framings, indicating that achieving a unified AI governance model will require negotiation between risk‑based, ex‑ante, and experimental approaches.
The discussion was driven forward by a handful of pivotal insights that reframed AI governance from a technical checklist into a human-centric, sovereign, and legally nuanced endeavor. Ajay Kumar Chaudhary's 'Mano' framing and call for embedded governance set the conceptual foundation. Sanjeev Sanyal's historical analogy and sharp critique of risk-based regulation introduced strategic urgency and challenged conventional regulatory thinking, prompting the panel to explore compartmentalisation, clear liability, and the need for ex-post accountability. Praveen Kamat's proposal of the GIFT City IFSC as a sandbox provided a tangible experimental venue, while Murlidhar Manchala's safe-harbour suggestion offered a pragmatic incentive model. Vikram Kishore's security perspective broadened the scope to operational resilience, and Sanyal's remarks on copyright and data sovereignty opened new legal and economic dimensions. Collectively, these comments redirected the conversation from abstract policy to concrete mechanisms: sandboxing, audits, liability frameworks, and infrastructure investment, thereby deepening the analysis and shaping a multidimensional roadmap for AI governance in India's financial sector.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.