Secure Finance Risk-Based AI Policy for the Banking Sector

20 Feb 2026 17:00h - 18:00h

Session at a glance

Summary, keypoints, and speakers overview

Summary

The summit opened with a focus on embedding AI governance within existing technology oversight rather than treating it as a separate domain [2]. Ajay Kumar Chaudhary argued that AI must be governed throughout its life-cycle, from design to deployment, and that this governance should be built into the system rather than added later [7-8][24-28]. He outlined four pillars (proportionality, fairness, explainability and accountability) and advocated a risk-based approach that addresses model integrity, concentration risk, data stewardship and cybersecurity [50-54][66-78]. Chaudhary also stressed inclusion, data sovereignty and supply-chain resilience, proposing concrete checkpoints across the AI pipeline and positioning trust as the strategic outcome of embedded governance [85-92][94-106][112-115][118-124][129-131].


Economic advisor Sanjeev Sanyal cautioned that AI, like past general-purpose technologies, does not guarantee first-mover advantage and that risk-based regulation may be either too restrictive or too lax, urging instead ex-ante accountability and compartmentalized “firewalls” for AI systems [149-162][170-182]. He warned against treating AI as a monolithic “internet of everything,” recommending bounded problem scopes, auditability and clear liability for algorithm creators, and later highlighted emerging legal questions around data ownership and copyright in AI-generated outputs [238-250][317-324]. Praveen Kamat described the GIFT City IFSC as a “clean-slate” jurisdiction where a sandbox environment can experiment with AI governance, noting that such pilots must respect cross-regulatory legal constraints while allowing risk-capped innovation [192-201][263-291]. Murlidhar Manchala added that regulators should grant supervisory relief to firms that implement robust guardrails, turning AI systems into “glass boxes” with transparent incident reporting rather than opaque black boxes [204-206][296-301].


Vikram Kishore emphasized that generative AI lowers attack barriers but leaves the fundamentals of cybersecurity unchanged, urging organizations to adopt multi-factor authentication, follow standards such as ISO/NIST, and use AI for faster threat detection and automated reporting [215-224][226-233]. The panel collectively agreed that over-regulation can stifle innovation while under-regulation leaves systemic risk unchecked, proposing coordinated sandbox frameworks across RBI, SEBI, IRDAI and IFSC to balance experimentation with oversight [263-291][392-397]. Throughout, participants highlighted the need for continuous monitoring, explainability audits and “skin-in-the-game” accountability to ensure AI enhances financial inclusion without reinforcing bias or concentration [85-92][238-250][311-313].


The discussion concluded that AI’s emergent nature demands a skeptical yet proactive governance model that focuses on bounded applications, transparent oversight and resilient infrastructure to preserve trust in the financial system [326-344]. Overall, the summit underscored that embedding governance into AI from inception is essential for India to harness AI’s benefits while safeguarding stability, inclusion and sovereignty [129-131][317-324].


Keypoints


Major discussion points


Embedding AI governance throughout the AI life-cycle – The keynote stresses that AI governance must not be an afterthought compliance layer but built in from design to deployment and monitoring. Ajay Kumar Chaudhary defines “embedded governance” as integrating accountability, transparency and risk-management into every stage of the AI life-cycle and lists its four pillars: proportionality, fairness, explainability and accountability [45-48][50-54].


India’s unique digital foundation and the need for sovereign AI infrastructure – India’s population-scale digital public infrastructure (UPI, digital identity, etc.) provides a platform for AI to become a core financial utility. The speaker warns that AI is now a systemic component that must be governed like any critical utility and highlights the five-layer AI stack (chips, cloud, data, foundation models, applications) and the strategic risk of dependence on foreign chip and model suppliers [14-21][44-45][99-106].


Different regulatory philosophies and the limits of risk-based approaches – Sanjeev Sanyal contrasts the European risk-based model, the Chinese state-led model and the US ex-post tort-law model, arguing that AI’s emergent nature makes precise risk-bucketing impossible and that any framework must be “agnostic” and focus on ex-ante accountability, compartmentalisation and “skin-in-the-game” [149-166][170-176].


Experimentation zones and sandboxing (GIFT City) as a way to balance innovation and oversight – Representatives from GIFT City explain that, as a newly created IFSC, it can act as a “lab” for AI governance because it starts with a clean-slate regulator, can run sandboxes for pilots, and must respect a gestation period before scaling [192-200][263-286].


Cyber-security challenges in the age of generative AI – The cloud-service perspective stresses that generative AI lowers attack barriers but does not fundamentally change security fundamentals; organisations must adopt multi-factor authentication, standards (ISO/NIST), active threat-hunting and AI-in-the-loop automation to stay resilient [215-224][225-232].


Overall purpose / goal of the discussion


The panel was convened to explore how India can embed robust, risk-based governance into AI systems that are becoming integral to the nation’s financial infrastructure, while leveraging its digital public-goods foundation to drive inclusion, economic sovereignty, and sustainable innovation. Speakers repeatedly linked governance to trust, resilience, and the broader summit theme of “people, planet and progress.”


Overall tone and its evolution


– The session opens with an optimistic, forward-looking tone, celebrating India’s digital achievements and the transformative potential of AI [14-21].


– As the conversation moves to governance, the tone becomes cautiously analytical, highlighting unknown risks, the need for embedded safeguards, and the shortcomings of existing risk-based models [45-48][149-166].


– When discussing labs, sandboxes, and cybersecurity, the tone shifts to pragmatic collaboration, offering concrete mechanisms and emphasizing partnership between regulators, industry, and technology providers [192-200][215-232].


– The closing remarks return to a hopeful, constructive tone, reaffirming confidence that with disciplined foresight India can align AI innovation with ethical responsibility and trust [126-132][317-324].


Overall, the discussion moves from enthusiasm about AI’s promise, through a sober assessment of governance challenges, to a collaborative roadmap for responsible implementation.


Speakers

Moderator – Session moderator who opened the panel and introduced the keynote speaker. [S9]


Ajay Kumar Chaudhary – Keynote speaker delivering the opening address on AI governance in finance. [S2]


Priyanka Jain – Panel moderator and discussion facilitator; associated with 5Money and experienced with RBI sandbox programmes. [S6]


Sanjeev Sanyal – Economic Advisor to the Prime Minister of India; described as a macro-thinker, historian and strategic geopolitical analyst. [S4][S5]


Praveen Kamat – Official from GIFT City International Financial Services Centre (IFSC); expertise in financial regulation, innovation and sandbox experimentation. [S3]


Murlidhar Manchala – RBI official involved in the AI framework and supervisory guidance; contributes to discussions on risk-based controls and safe-harbour regimes. [S8]


Vikram Kishore Bhattacharya – Cloud service-provider representative; specialist in cybersecurity, cloud infrastructure and the impact of generative AI on threat vectors. [S1]


Audience – General audience members participating in the Q&A, e.g., Aditya, founder of First Tile, and other attendees. [S12]


Additional speakers:


Aditya – Founder of First Tile (a customer-data platform); asked a question during the audience segment about sovereign data assets and AI stack utilization. (No external source citation available)


Full session report

Comprehensive analysis and detailed insights

Opening & moderator remarks


The session began with the moderator reminding participants that the summit’s overarching aim was to treat AI governance not as a separate silo but as an embedded layer within the existing technology-oversight framework that already regulates other digital tools [2].


Ajay Kumar Chaudhary – keynote


* Optimism tempered by caution & the “Mano” proposal – Chaudhary opened with optimism about the four-day summit and warned that rapid AI scaling will bring both known and unknown risks that must be managed through embedded governance [4-11][13-21]. He cited the Prime Minister’s one-word summary “Mano” (humanity) and proposed that it could replace the term “responsible AI”, evolving into a “human AI” framing that captures moral, ethical, sovereign, inclusive and accountable dimensions [9-12][15].


* India’s digital public infrastructure – He highlighted India’s population-scale public digital infrastructure such as UPI and other platforms, showing how interoperability, transparency and scale have reshaped financial participation [14-16].


* AI’s structural shift – AI is now being superimposed on this foundation, integrating with payment systems, credit-risk platforms, supervisory frameworks and cybersecurity architectures that already operate at national scale [17-20]. This marks a structural shift: unlike earlier automation, AI introduces adaptive, learning systems that can dynamically influence outcomes [21-23].


* Core question – The question is no longer whether AI will transform finance (it already is) but whether governance can keep pace and be designed into the system from inception rather than added later as a compliance overlay [24-28][30-33].


* Quotes – He invoked Peter Drucker: governance in AI-enabled finance must be about “doing the right things at the right time” to preserve trust, resilience and inclusion [29-32]; and quoted Christine Lagarde: “Innovation and regulations are not adversaries, they are partners in progress.” [34-36].


* Embedded governance pillars – Chaudhary defined embedded governance as integrating accountability, transparency and risk-management into every stage of the AI life-cycle – from conceptualisation and data acquisition to model development, deployment and continuous monitoring [45-52]. He distilled this into four pillars:


1. Proportionality – governance intensity should be risk-based [50-51];


2. Fairness & non-discrimination [52];


3. Explainability & transparency [53];


4. Accountability (clearly defined) [54-55].


These pillars must be embedded by design, not retro-fitted, because AI systems affecting credit access or financial behaviour cannot remain opaque black boxes [26-28][31-33].


* Risk-based governance & concrete benefits – He advocated a proportional, risk-based approach that treats AI as a systemic financial utility [45-52]. Key risk dimensions he highlighted were:


Model integrity – ongoing validation and stress-testing across extreme but plausible scenarios [66-70];


Operational concentration risk – the systemic danger of a few providers dominating AI infrastructure [71-75];


Data governance – ensuring data integrity, consent, purpose limitation and minimisation [75-78];


Cybersecurity – AI can amplify attack vectors (adversarial AI) and therefore requires anticipatory safeguards [78-81].


He illustrated the quantitative impact of AI-enabled detection, noting that in high-value payment environments (NPCI) fraud-loss reductions of 25-30% are already being realised [66-70]. He also stressed that AI accelerates compliance and broadens access and inclusion by automating routine checks and expanding service reach [67-69]. Finally, he highlighted that regulators are leveraging advanced analytics to monitor systemic patterns, identify anomalies and strengthen early-warning mechanisms [70-73].
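The continuous-validation and drift-monitoring idea above is often operationalised in banks with a distribution-drift statistic. As an illustrative sketch only (not something presented in the session), a minimal Python implementation of the Population Stability Index, a common measure for detecting when a model's live score distribution has drifted from its training-time baseline, might look like this:

```python
import numpy as np

def _bin_fractions(values, edges):
    """Fraction of values falling into each bin defined by `edges`."""
    # searchsorted assigns each value to a bin; clamp to the outer bins
    # so live scores outside the training range are still counted.
    idx = np.clip(np.searchsorted(edges, values, side="right") - 1,
                  0, len(edges) - 2)
    counts = np.bincount(idx, minlength=len(edges) - 1)
    return counts / len(values)

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time score sample (`expected`) and a live
    sample (`actual`). A PSI above ~0.25 is a common rule of thumb for
    material drift warranting model review."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e = _bin_fractions(np.asarray(expected), edges)
    a = _bin_fractions(np.asarray(actual), edges)
    eps = 1e-6  # avoid log(0) for empty bins
    return float(np.sum((a - e) * np.log((a + eps) / (e + eps))))

# Hypothetical data: a credit-score distribution at training time vs. a
# shifted live distribution (both synthetic, for illustration only).
rng = np.random.default_rng(0)
train_scores = rng.beta(2, 5, 10_000)
live_scores = rng.beta(2.6, 4, 10_000)
psi = population_stability_index(train_scores, live_scores)
print(f"PSI = {psi:.3f}")
```

A monitoring pipeline would compute this periodically and route breaches of an agreed threshold into the review and escalation checkpoints described above.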


* Inclusion, bias mitigation & “glass-box” model – AI can expand financial inclusion by providing granular, dynamic risk assessments that reduce reliance on heavy collateral and static credit histories [82-84]. However, without intentional design, AI could perpetuate structural inequalities (e.g., gender-biased data distorting credit outcomes) [87-90]. Chaudhary called for representative training datasets, periodic impact audits, community-level feedback and transparent redress pathways that turn opaque systems into “glass-boxes” for customers [91-93][204-207].


* Five-layer AI stack & sovereign infrastructure – He described a five-layer stack: (1) specialised semiconductor chips, (2) cloud & data-centric infrastructure, (3) large data sets that fuel the system, (4) foundation models, and (5) application-level services [99-104]. Over-reliance on foreign chips (over 90% controlled by a single firm) and a handful of cloud and model providers threatens economic sovereignty, financial stability and national security [104-106]. Chaudhary urged diversification through domestic innovation, international collaboration, consent-based data sharing and the promotion of home-grown AI entities [107-110].


* Operationalising embedded governance – He outlined concrete governance checkpoints across the AI pipeline: risk-based classification of systemic impact, independent review, auditable documentation, cross-functional governance committees, continuous monitoring with feedback loops, and consumer-centric safeguards such as transparent disclosures, clear appeal processes and human-in-the-loop interventions [112-115][124-131]. He framed trust as the strategic outcome of these measures, asserting that finance rests on confidence that systems are fair, stable and accountable [118-124][126-132].
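The proportionality pillar, governance intensity scaling with systemic impact, can be made concrete in code. The sketch below is purely illustrative: the tier names, classification thresholds and checkpoint lists are assumptions for demonstration, not anything the speakers specified.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical checkpoint lists per tier: higher tiers add review gates,
# in the spirit of risk-based (proportional) governance intensity.
CHECKPOINTS = {
    RiskTier.LOW: ["model documentation", "pre-deployment testing"],
    RiskTier.MEDIUM: ["model documentation", "pre-deployment testing",
                      "independent validation", "bias audit"],
    RiskTier.HIGH: ["model documentation", "pre-deployment testing",
                    "independent validation", "bias audit",
                    "governance-committee sign-off",
                    "continuous monitoring", "human-in-the-loop override"],
}

@dataclass
class AISystem:
    name: str
    affects_credit_decisions: bool
    customer_facing: bool
    monthly_decisions: int

def classify(system: AISystem) -> RiskTier:
    """Toy proportionality rule: systemic impact drives governance intensity.
    The thresholds here are illustrative assumptions."""
    if system.affects_credit_decisions or system.monthly_decisions > 1_000_000:
        return RiskTier.HIGH
    if system.customer_facing:
        return RiskTier.MEDIUM
    return RiskTier.LOW

chatbot = AISystem("support-chatbot", False, True, 50_000)
scorer = AISystem("credit-scorer", True, True, 2_000_000)
print(classify(chatbot).name)  # MEDIUM
print(classify(scorer).name)   # HIGH
```

In practice the classification inputs would come from a maintained model inventory, and each checkpoint would map to an auditable artefact reviewed by the cross-functional committee.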


Panelist perspectives


* Sanjeev Sanyal – Drew historical analogies, warning that first-mover advantage is not guaranteed for general-purpose technologies and that European-style risk-based regulation may become either over-restrictive or under-protective because AI’s emergent nature defies ex-ante risk-bucketing [149-166][238-250]. He advocated ex-ante accountability (“skin-in-the-game”), clear liability for algorithm creators, and compartmentalised “firewalls” to prevent systemic spill-over [170-182][242-250]. He also raised novel IP questions about ownership of AI-generated outputs, calling for a judicial framework [317-324].


* Praveen Kamat – Presented GIFT City IFSC as a “clean-slate” jurisdiction (established 2015, regulator 2020) offering regulatory “legroom” for sandbox pilots that cap risk while allowing iterative learning [192-200][263-286]. He highlighted an inter-operable sandbox linking RBI, SEBI, IRDAI and the IFSC, while noting legal constraints such as currency incompatibility (INR not permitted in the IFSC) that must be resolved [392-401][404-410].


* Murlidhar Manchala – Echoed the AI-mission report’s suggestion that firms implementing comprehensive guardrails (model inventories, bias testing, continuous monitoring) should receive supervisory relief (“safe-harbour”) [204-207][296-301]. He stressed senior-management accountability and that incident-reporting mechanisms should turn black-box systems into transparent “glass-boxes” [204-207][296-301].


* Vikram Kishore Bhattacharya – Speaking from a cloud-service provider’s perspective, he acknowledged that generative AI lowers barriers for phishing, credential theft and malicious code, but maintained that core cybersecurity principles (MFA, strong passwords, regular patching) remain unchanged [215-224]. He urged the adoption of AI-in-the-loop tools for faster threat detection, automated scanning and real-time reporting, alongside skill-building and standards compliance (ISO, NIST, third-party audits) [225-233].


Agreement & disagreement matrix


Common ground – All speakers agreed that trust, inclusion and resilience are essential for AI-enabled finance and that embedded governance is preferable to retro-fitted compliance [45-52][170-182][204-207][215-224].


Points of disagreement


1. Risk-based regulation – Chaudhary champions a proportional, risk-based framework [50-54]; Sanyal argues that AI’s unknown risks make any ex-ante risk-bucketing ineffective and potentially stifling [160-166][238-250].


2. Sandbox purpose – Kamat views the IFSC sandbox as a proactive experimental space for AI pilots [263-291]; Manchala sees the current sandbox as a remedial tool triggered by compliance breaches, though he supports expanding it to include monitoring and tooling [407-411].


3. AI as systemic infrastructure vs emergent technology – Chaudhary treats AI as a core financial utility subject to resilience standards [44]; Sanyal stresses AI’s emergent behaviours that resist traditional infrastructure regulation [242-250].


4. Cybersecurity impact – Bhattacharya maintains that AI does not fundamentally alter security fundamentals [215-224]; Chaudhary warns that AI amplifies cyber-risk, creating new adversarial threats that need anticipatory safeguards [78-80].


5. Purpose of the sandbox (expanded) – The panel differed on whether the sandbox should primarily enable innovation experimentation (Kamat) or serve compliance remediation with supervisory relief (Manchala).


Audience question & response


An audience member (Aditya, founder of First Tile) asked how India’s sovereign data assets could be leveraged for AI model development while respecting privacy and ownership [359-380]. Sanyal responded that India’s massive data pool is “new oil” and that rights to the data and the ability to process it (through domestic data centres and AI refineries) are essential for strategic autonomy, noting the recent tax holiday for data-centre investment as a policy lever [359-362][361].


Aditya also proposed a consent-backed API standard for data sharing and a regulatory seat for data processors. Kamat acknowledged the idea but highlighted legal incompatibilities between the IFSC’s foreign-currency regime and domestic regulations that must be resolved before a cross-jurisdictional sandbox can operate [392-401][404-410]. Manchala added that an inter-operable sandbox already exists for compliance issues, and a broader sandbox offering compute, data and tooling support is under consideration [407-411].
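To make the consent-backed data-sharing idea concrete, here is a minimal, hypothetical Python sketch. Everything below (the grant fields, the hashing scheme, the names) is an illustrative assumption, not a description of any standard proposed in the session; a production design would use signed, revocable tokens issued by a consent manager.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ConsentGrant:
    """Minimal consent artifact a data processor would verify before use."""
    subject_id: str     # the data owner granting consent
    processor_id: str   # who may process the data
    purpose: str        # purpose limitation: one declared use
    expires_at: float   # consent is time-bounded
    fields: tuple       # data minimisation: only the listed fields

def grant_token(grant: ConsentGrant) -> str:
    """Content hash of the grant; a real system would sign this."""
    payload = json.dumps(asdict(grant), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def authorize(grant, token, processor_id, purpose, requested_fields, now):
    """Allow access only if the token matches and every constraint holds."""
    return (grant_token(grant) == token
            and grant.processor_id == processor_id
            and grant.purpose == purpose
            and now < grant.expires_at
            and set(requested_fields) <= set(grant.fields))

grant = ConsentGrant("user-42", "lender-A", "credit-assessment",
                     expires_at=time.time() + 86_400,
                     fields=("cash_flow", "repayment_history"))
token = grant_token(grant)
print(authorize(grant, token, "lender-A", "credit-assessment",
                {"cash_flow"}, time.time()))  # True
print(authorize(grant, token, "lender-B", "credit-assessment",
                {"cash_flow"}, time.time()))  # False
```

The design choice worth noting is that every check (processor, purpose, expiry, field set) must pass together, which is how purpose limitation and data minimisation become enforceable rather than declarative.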


When asked to name an under-estimated risk, Manchala replied that risk itself is being underestimated, underscoring the need for robust governance to surface hidden vulnerabilities [296-301].


Closing remarks


Priyanka concluded the session by emphasizing that AI should initially be applied to bounded problems (e.g., chess) and that the community must maintain a healthy skepticism about AI’s promises, ensuring that optimism is always tempered by rigorous scrutiny [331-336].


Forward-looking roadmap – The panel distilled the discussion into actionable recommendations:


* Adopt a proportional, risk-based governance model flexible enough for AI’s emergent behaviours [50-54][59-63];


* Provide supervisory relief (“safe-harbour”) for firms that demonstrably implement robust guardrails, model inventories and transparent incident reporting [204-207][296-301];


* Expand the IFSC sandbox into an interoperable platform for cross-regulatory AI pilots, while addressing legal constraints such as currency compatibility [263-291][392-401][404-410];


* Develop a consent-backed API framework giving data processors a voice in rule-making and ensuring privacy-by-design [381-391];


* Invest in sovereign AI infrastructure – domestic semiconductor capability, cloud capacity, data-centre incentives and home-grown foundation models – to reduce dependence on a few foreign suppliers [99-106][107-110];


* Mandate continuous monitoring, explainability audits and periodic impact assessments to detect model drift, bias and concentration risk [68-71][112-115];


* Strengthen cybersecurity by combining traditional controls (MFA, patching) with AI-in-the-loop detection, automated reporting and upskilling programmes [215-224][225-233];


* Clarify intellectual-property rules for AI-generated outputs, establishing ownership among prompt authors, data owners and model creators [317-324].


Unresolved challenges remain: operationalising a risk-based framework when many AI risks are unknown, assigning ex-ante liability across the AI supply chain, and harmonising cross-jurisdictional data handling between the IFSC and domestic regulators. The collective optimism, tempered by a realistic appraisal of systemic vulnerabilities, underscores a strategic imperative for India to lead a balanced, home-grown AI governance model that draws lessons from the US, EU and China while remaining uniquely suited to its digital public-goods ecosystem [126-132][317-324].


Session transcript

Complete transcript of the session
Moderator

Thank you. Very much in line with the overall theme of the summit, we are looking at the overall aspect of governance of AI, but not as something that will be set aside and looked at through a different lens altogether, but as something that can be looked at as an embedded layer of governance that we already govern technologies with. In the interest of the time that we have with us, I will request the panelists to be seated on the dais, and I will request AK Chaudhary sir to please begin his keynote.

Ajay Kumar Chaudhary

Good afternoon to everyone. Distinguished policy makers, regulators, industry leaders, members of the FinTech community, and esteemed guests. I have been very closely following, over the last four days, how and what things are happening, and the type of enthusiasm, the type of excitement and the type of buzz around AI and this summit was amazing. I believe that whatever is there is a real thing which is happening; possibly multiple small applications are going to come in the coming days which will solve multiple issues and problems, and we will have a real leading role to play as a country. That is the way we look at it. We also will have a great role to play on the data side, particularly when we are going to train the models. Obviously, when we are going to scale up the entire thing, then possibly there might be some run-throughs and some risks also. Of those risks, something is known and something is unknown, and for the unknown, much cannot be done except that we need to take care of embedding the governance part.

That is the theme of today’s talk: how we need to embed governance across the entire life cycle of the AI, the design of the AI. That is the way we have to look at it. Yesterday I was again listening to our Honorable Prime Minister and the beautiful way that he summarized the entire theme in one word, that is called Mano, that is called humanity. So possibly in future I am going to use that instead of responsible AI; possibly we can talk about human AI, because it is going to touch upon moral and ethical systems, accountable governance, national sovereignty, and being accessible and inclusive. All the aspects that we are going to touch upon, everything is covered in this one word that is called Mano.

Now coming back to my address, my proposed address. It’s indeed a privilege to participate in this dialogue at a defining moment in India’s digital evolution. Over the past decade, India has demonstrated how population-scale digital public infrastructure can drive inclusion, efficiency and trust. Systems built with interoperability, transparency and scale at their core have reshaped financial participation for millions. Today we stand at the next inflection point in that journey. A new tech layer is being superimposed upon this digital foundation. AI, artificial intelligence as we know it, is not arriving in isolation. It is integrating with payment systems, credit and risk management platforms, supervisory frameworks, and cybersecurity architecture that already operate at national scale.

This convergence of scale and intelligence marks a structural shift. Unlike earlier waves of digitalization that automated existing processes, AI introduces adaptive systems, systems that learn, recalibrate, and influence outcomes dynamically. In a country as large and diverse as India, such systems do not merely improve efficiency; they shape access, opportunity, and systemic resilience. The question before us is not whether AI will transform finance. It already is. The more fundamental question is whether governance will evolve at the same pace as innovation and whether it will be designed into the system from inception rather than appended later as a compliance afterthought. In financial services, trust is foundational. AI systems cannot function as opaque black boxes, especially when they influence access to credit or flag financial behavior.

Governance cannot be an overlay applied after innovation has already been scaled. It must be embedded by design. As Peter Drucker observed, quote, management is doing things right, leadership is doing right things, unquote. In the context of AI in finance, governance is not merely about technical correctness. It is about doing the right things at the right time in ways that preserve trust, resilience, and inclusion. Now, looking at AI as an infrastructure tool, it has evolved from analytical assistance to shaping financial outcomes. In credit markets, machine learning models analyze transaction histories, behavioral signals, and dynamic cash flows to generate granular borrower assessments. In fraud prevention, AI detects anomalous activities within milliseconds, processing volumes beyond earlier systems. AI-enabled detection can reduce certain categories of fraud losses by up to 25 to 30 percent at this point of time in high-value payment environments, which is what we are witnessing in NPCI.

Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to emerging threats in real time. The diffusion of AI across the financial value chain enhances efficiency and precision. Yet, when models operate on a systemic scale, even marginal inaccuracies can produce material consequences. In finance, where stability and trust are public goods, the tolerance for systemic error is limited. India’s financial system adds its own complexities. Its scale of digital participation, linguistic diversity, demographic heterogeneity, and income variability also heighten model risk. Algorithms trained on narrow, urban-centric or historically skewed data sets may inadvertently misclassify, misprice or exclude segments that digital finance is intended to integrate. It is therefore imperative that we do not view AI as a peripheral tech enhancement.

It must instead be understood as a component of financial infrastructure which is systemically relevant and should be subject to the same standards of resilience, governance and accountability that we expect of any critical financial utility. When we talk about embedded governance in AI: historically, regulation in financial services has often responded to innovation after risks have materialized. Governance in the AI era must, however, be embedded into system design. Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life cycle, from conceptualization and data acquisition to model development, deployment and ongoing monitoring. It rests on several foundational pillars. I will mention four. One is proportionality, that is, the governance intensity should be risk-based.

It should be risk-based intensity. Second, fairness and non-discrimination. Third is explainability and transparency. And fourth is accountability, which must be clearly defined. While institutions may collaborate with tech providers or leverage shared infrastructure, responsibility for outcomes cannot be outsourced. Given the potential vulnerability of AI systems that shape their operations, board and senior management must understand their logic, limitations, et cetera. Further, and more importantly, in financial AI, algorithmic efficiency should not compromise equitable opportunity. Now, coming specifically to financial infrastructure and a risk-based approach to AI governance, I’ll just touch upon this. A risk-based approach to AI governance acknowledges that innovation and prudence are not opposing forces. They are complementary. Financial authorities globally are converging on principles that emphasize robustness, resilience, transparency, and human oversight.

India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsibility. The objective is not to slow innovation, but to ensure that systemic risk does not accumulate invisibly. Several risk dimensions deserve particular attention as AI becomes integral to financial systems. It may include multiple issues; I will touch upon only four. One is model integrity. It can no longer be viewed as a one-time validation exercise. Intelligent systems must be evaluated across economic cycles and stress-tested against extreme but plausible scenarios. As data patterns evolve and models recalibrate, continuous oversight becomes inevitable to guard against drift, unintended bias, or reinforcing feedback loops. Second is operational concentration risk. I will detail this subsequently also. It is an emerging systemic concern.

Diversification and resilience planning are essential to safeguard continuity. Data governance, through data integrity, consent management, purpose limitation, and minimization principles, is foundational. Financial data is not merely transactional. It reflects livelihoods, behavioral choices, and economic participation. And the fourth item is cybersecurity risks, which are amplified in the AI environment. As AI strengthens defense mechanisms, it can also be leveraged by adversaries. Institutions must anticipate adversarial AI and strengthen defensive and detection capabilities accordingly. A risk-based framework recognizes that governance cannot be static; systems that learn and evolve demand oversight that is equally dynamic, as also measured, proportionate and forward-looking. Now, just touching upon supervisory intelligence: as AI permeates financial institutions, supervisory frameworks are also evolving. Supervisors increasingly leverage advanced analytics to monitor systemic patterns, identify anomalies and strengthen early-warning mechanisms. This creates a reciprocal dynamic: institutions embed AI in operations while oversight bodies integrate intelligence into supervision. However, governance cannot be regulator-driven alone; institutional capability is critical. AI literacy at the board and senior management level is no longer optional. Leaders must understand model architecture, validation methodology, vendor dependency and ethical implications. Effective governance requires interdisciplinary capability, bringing together tech, risk, compliance and legal experts as well as business leaders. Institutions that integrate AI governance into their ERM framework strengthen resilience. Christine Lagarde has noted, innovation and regulations are not adversaries, they are partners in progress. That partnership must guide the embedding of AI within finance. Coming to the inclusion part, what our Honorable Prime Minister has mentioned about the last A in Mano, that is access and inclusion: India’s financial transformation has been anchored in inclusion. Over the past decades, tech has lowered barriers, reduced transaction costs and brought millions into the formal financial ecosystem. AI now offers an opportunity to deepen that trajectory through granular, dynamic risk assessment. It can reduce reliance on collateral-heavy models and static credit histories.

Transaction-level data, cash-flow analytics and behavioral indicators can provide more nuanced insight into repayment capacity, particularly for MSMEs presently outside the traditional credit framework. India is expected to account for a significant share of global digital transaction growth this decade. If harnessed responsibly, AI can convert this expanding digital footprint into broader formal access to fair financial services and adoption at scale. Yet inclusion cannot be assumed; it must be intentionally designed. Algorithms trained on historically skewed datasets risk perpetuating structural inequalities. Informal-sector income volatility and gender-based data gaps may distort credit outcomes. Without corrective safeguards, technology may reinforce rather than reduce disparities. Inclusive AI thus requires representativeness in training datasets, periodic impact audits and community-level feedback mechanisms.
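
The representativeness audit mentioned here can be sketched, purely illustratively, as a comparison of group shares in a training sample against a population benchmark, flagging outsized gaps. The function name, thresholds and toy data below are hypothetical, not drawn from any regulatory framework.

```python
from collections import Counter

def representativeness_gaps(records, group_key, benchmark, tolerance=0.05):
    """Flag groups whose share in the training data deviates from a
    population benchmark by more than `tolerance` (absolute share)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected_share in benchmark.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected_share) > tolerance:
            gaps[group] = round(observed - expected_share, 3)
    return gaps

# Hypothetical training sample: 8 male, 2 female borrowers,
# audited against a roughly even split in the target population.
sample = [{"gender": "M"}] * 8 + [{"gender": "F"}] * 2
print(representativeness_gaps(sample, "gender", {"M": 0.5, "F": 0.5}))
# → {'M': 0.3, 'F': -0.3}
```

A periodic impact audit could run such a check on every retraining cycle and route any non-empty result to human review.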

It calls for institutional mechanisms that allow individuals to seek clarification and redress where automated decisions affect their financial standing. Now coming to sovereign and resilient AI foundations. AI governance intersects not only with institutional risk but with strategic resilience. Concentration in advanced chips and foundational AI models raises critical considerations for economic sovereignty, financial stability and, I may further add, national security. Dependency on limited supply chains can create systemic vulnerability. If we look at the AI stack more granularly, it rests on five interdependent layers. At the base are specialized semiconductor chips, as we all know. Above this sits the cloud and data-center infrastructure that provides scalable processing capacity.

These systems are fueled by vast datasets drawn from public and proprietary sources. On this foundation operate large foundation models adaptable across domains, and finally, at the top, are the applications that embed AI into financial services and everyday economic life. In this context we should be conscious of the fact that one firm controls more than 90% of advanced chips, three dominate cloud capacity, and a handful command foundation models, threatening financial stability and economic sovereignty. We must therefore diversify supply chains to the extent possible, through domestic innovation and international collaboration, to secure resilient AI foundations. Further, if we look at the pathway for ecosystem scaling, we have to look at consent-based data sharing, shared AI and risk infrastructure, investment in AI literacy and governance at all levels including board and senior management, and, most importantly, encouraging home-grown technology and AI-capable entities.

It may be appreciated that an India-first approach is not inward-looking; it is context-aware. It ensures that governance reflects local realities while remaining globally coherent. Now coming to the operationalization of embedded governance: it may involve many issues, but I am touching upon five or six. One, lifecycle-based model governance: institutions should embed governance checkpoints from data acquisition to deployment and post-deployment monitoring. Two, a clear risk classification framework based on systemic impact, with independent review and enhanced oversight for higher tiers. Three, auditability, with proper documentation. Four, a cross-functional governance committee, which will no doubt be helpful. And five, continuous monitoring and feedback loops, which enable periodic recalibration by way of external audit.
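
The lifecycle checkpoints described here could be tracked as an ordered gate sequence that a model record must clear one stage at a time. This is a minimal sketch under stated assumptions: the stage names, the `ModelRecord` class and the independent-review rule are illustrative, not prescriptions from any framework.

```python
from dataclasses import dataclass, field

# Illustrative checkpoint order: data acquisition through post-deployment monitoring.
STAGES = ["data_acquisition", "validation", "deployment", "post_deployment_monitoring"]

@dataclass
class ModelRecord:
    name: str
    risk_tier: str              # e.g. "high" for systemic impact
    passed: list = field(default_factory=list)

    def complete(self, stage, independent_review=False):
        """Checkpoints must be cleared in order; high-tier models also
        need independent review at every checkpoint."""
        expected = STAGES[len(self.passed)]
        if stage != expected:
            raise ValueError(f"expected checkpoint {expected!r}, got {stage!r}")
        if self.risk_tier == "high" and not independent_review:
            raise ValueError("high-tier models require independent review")
        self.passed.append(stage)
        return self

m = ModelRecord("credit_scoring_v1", risk_tier="high")
m.complete("data_acquisition", independent_review=True)
m.complete("validation", independent_review=True)
print(m.passed)   # → ['data_acquisition', 'validation']
```

The `passed` list doubles as the audit trail an external auditor could recalibrate against.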

Consumer-centric safeguards, by way of transparent disclosure, clear appeal processes and human intervention mechanisms, are critical to maintaining public trust. These pathways ensure that governance is not episodic but embedded within the operational DNA. Before concluding, I will touch upon the role of India in AI, and trust as the cornerstone of financial AI. Finance rests on confidence that systems are fair, stable and accountable. Depositors trust institutions to safeguard assets, borrowers trust systems to assess risk fairly, and markets trust transparency and stability. AI has the potential to enhance this trust by improving fraud detection, accelerating compliance and broadening access and inclusion. But if governance is inadequate, AI can erode confidence rapidly.

Trust is built when systems are predictable, explainable and accountable. Trust deepens when innovation aligns with public interest. And trust endures when leadership anticipates risk rather than reacts to failure. India stands at a pivotal moment, working across all five layers of the AI stack and demonstrating the ability to deploy applications at population scale. It is shaping a global agenda for inclusive AI. The convergence of digital infrastructure, regulatory foresight and entrepreneurial innovation offers a chance to show that scale and safety can coexist and that governance can catalyze innovation. Coming to the conclusion: artificial intelligence will shape the next chapter of financial services. But technology alone does not determine outcomes; institutional design does. Design choices, governance frameworks and institutional culture will determine whether AI strengthens financial resilience and inclusion or not.

Embedded governance is not a regulatory burden; it is a strategic imperative. It ensures that innovation is sustainable, trust is preserved and systemic stability is protected. If we embed fairness, transparency, accountability and proportionate oversight into the architecture of financial AI from inception, India can chart a distinctive path, one that aligns technological ambition with ethical responsibility. Let us approach this moment not with hesitation but with disciplined foresight. Let us ensure that, as our financial systems become more intelligent, our governance becomes more robust, our oversight becomes more anticipatory and our commitment to inclusion more resolute. In doing so, we will not only harness the power of AI but also shape it to serve the broader goals of stability, opportunity and shared prosperity. Thank you.

Moderator

Thank you, sir. That was very insightful and sets the context for the panel discussion to follow. We could also request you, if you would want, you could join us in the audience. That would be great. Over to you, Priyanka, for introduction to the panelists and then taking this discussion forward.

Priyanka Jain

Thank you so much. Our panelists need no introduction, so I'm going to keep this very quick so that we can make the most of capturing their thoughts. First, I have with me Mr. Sanjeev Sanyal. Sir is the economic advisor to the Prime Minister; he is in the Prime Minister's Office and needs no introduction. If I go by what AI has given me as his persona, AI summarized him as a macro thinker, a historian of structural cycles, and a strategic geopolitical lens. Fortunately, today we have the OG himself in the room. Without any further ado, I want to ask him my first question. Historically, countries that have mastered general-purpose technologies, right from the steam engine to early electricity and the Internet, have gained outsized economic advantage.

Is AI that inflection point for India? And if so, does early, well-designed self-governance accelerate trust, or does it deny us competitive momentum?

Sanjeev Sanyal

Yes, it is important that you are engaging with it, but let me point out that it is not always the first movers who benefit, and it is not the case that even those who invent these technologies know where they are headed. Just to give you an example: the European Renaissance, which ultimately led to Western domination of the world for half a millennium, was based on three technologies. One was the printing press, the other was gunpowder, and the third was mathematics. The first two were invented by the Chinese and the third by the Indians, but it is the Europeans who took them, owned them and dominated the world. So one important thing to recognize in all of this is: do not try to guess where this is headed.

But of course, we need to engage. We need to engage with these technologies and build on them; otherwise, somebody will take your technology and dominate you. So it is very, very important that India participates in this AI revolution. But again, in this context, let me say that this does not mean we should spend time trying to work out exactly where it is headed. For example, when the social media revolution was happening 20 years ago, when Facebook and all these things came about, the marketing pitch at the time was: see, now everybody can talk to everybody, so we will all move to the golden mean, because we will all have similar views, because we can all talk to each other, and so on.

But in fact, the algorithms went out of their way to put us in buckets and echo chambers. So social media ended up doing exactly the opposite of what the technology experts were telling us it would do. Now, why does this apply to AI as well? Here I am going to talk about this risk-based thing that everybody is talking about. Let me tell you that you cannot actually put AI, or any type of AI, into any real risk bucket, because this is an emergent, evolving thing, even more so than social media. Consequently, if you say you are going to do risk-based regulation, it means you have some assessment of where the thing will go, and I am telling you that this is almost impossible. Take, for example, the European way of going about this; they are the pioneers of risk-based systems. I understand it is pretty obvious that you don't want AI to take over our nuclear buttons, but other than that, the risk level of most other things is utterly unknown. Something totally innocuous might go and blow up the whole system, because these things are emerging, evolving and interconnecting. Therefore I do not think a system that is largely based on perceptions of risk will work, because it is not possible ex ante to work out what is dangerous or, for that matter, what is beneficial. Now, what should you do if you can't tell what is going to happen? I am telling you the European system will either strangle the sector by being too stringent, or it will open things up because it wants progress; but ultimately the risk-based system will not be able to take control of it. The other model is China's, where the state knows best; but we know from the experience we had with the Wuhan virus that the state can very often lose control of things that are happening, and they can spiral out.

The third model, which is mostly the American model, is laissez-faire: let anybody do whatever they want. The dangers of that are obvious. In my view, the way they control it is through tort laws, i.e. if something goes badly wrong, you will end up with a billion-dollar fine or something like that. In some ways it works better because it is an ex post rather than an ex ante system. It depends on those running the system having skin in the game, i.e. your company will go down, you will be jailed, and you will face a billion-dollar fine if things go wrong. That is how they are doing it.

It's an ex post punishment. But as you can tell, with an ex post system, if something really bad goes wrong, you can only punish the person after the horse has already bolted. So all these systems have their downsides, but I'm telling you that whatever system we design to control this has got to be agnostic to how the whole thing evolves going forward. Now, I know I'm taking up time, but give me a minute. There are other systems that we manage where we have no idea where they are going. Take, for example, the stock market.

You and I don't know where the stock market will be in a decade's time. It's a complex system, just like artificial intelligence, but we manage it. How do we do it? We do it by creating a framework that does the following things. First, it institutes audits and enforces transparency and explainability: if you can't explain your accounts, you can't be in the stock market. Two, it has systems for shutting things down when things go wrong; every stock market shuts down when things spiral out. Three, it deliberately creates systems of separation: the same company cannot play two roles that create a conflict of interest. In the same way, AI will need compartments. I am personally very suspicious of any idea of the internet of everything or the AI of everything; that would be a disaster. I think we need to be willing to allow compartmentalized AI. It will be more efficient anyway from an energy perspective, but it is also safer. And most importantly, you need to create skin in the game, i.e.

ex ante, tell people who will be held responsible when things go wrong. In the case of financial markets, the directors of the company, or the CEO, are the ones hauled up when things go wrong. In the case of AI, we will have situations where, when things go wrong, the person who made the algorithm will blame the data, the data guy will blame the company or the user, and all kinds of things will happen. We need to decide ex ante who in the system will be hauled up when things go wrong. That will create skin in the game. We cannot wait for something to go wrong and then decide; we need to decide this ex ante.

So, all of these things exist in the case of financial regulation. I personally think a similar system can work here.
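
The stock-market-style "shut it down" control described above can be sketched as a circuit breaker around a compartmentalized AI service: after repeated anomalous outputs it trips and stays open until a human resets it. The class, the toy model and the thresholds below are illustrative assumptions, not anyone's actual implementation.

```python
class CircuitBreaker:
    """Trip a compartmentalized AI service after repeated anomalies,
    mimicking a stock-exchange shutdown rule."""
    def __init__(self, model_fn, is_anomalous, max_anomalies=3):
        self.model_fn = model_fn
        self.is_anomalous = is_anomalous
        self.max_anomalies = max_anomalies
        self.anomalies = 0
        self.tripped = False

    def __call__(self, x):
        if self.tripped:
            raise RuntimeError("circuit open: human review required before reset")
        y = self.model_fn(x)
        if self.is_anomalous(y):
            self.anomalies += 1
            if self.anomalies >= self.max_anomalies:
                self.tripped = True
        return y

    def reset(self):
        # Explicit human-in-the-loop reset: the breaker never reopens itself.
        self.anomalies, self.tripped = 0, False

# Toy "model" that misbehaves on large inputs; outputs above 100 count as anomalies.
breaker = CircuitBreaker(lambda x: x * 2, is_anomalous=lambda y: y > 100, max_anomalies=2)
breaker(10); breaker(60); breaker(70)   # two anomalous outputs -> breaker trips
```

Because the breaker wraps one bounded compartment, tripping it halts that service without cascading into the rest of the system.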

Priyanka Jain

Rightly put: technology moves fast, but trust takes time to build, and compartmentalization is a great way to de-risk in some form, and to look at things with a focused agenda and attention. With that, we can bring in Mr. Kamat. Mr. Kamat is from the GIFT City IFSC, in a way a compartmentalized global financial hub that India has created, and we are very fortunate to have you here, sir. GIFT City operates at a unique intersection of innovation and global credibility; it competes with the likes of Singapore, Dubai and London. Can GIFT City become a lab for AI governance? We wanted to know your view, sir, and it is a great segue from Sanjeev sir on how we can look at it differently, in a compartmentalized manner.

Praveen Kamat

See, if you look at GIFT IFSC as a jurisdiction, it was set up in 2015, so it is just 11 years old. We are building it up from scratch. Now, when you build something from scratch and you have a brand-new regulator, like the IFSCA, which was created in 2020, you start with a clean slate. That means you have more legroom and more space to experiment; we don't have the baggage of legacy systems. If you see the way we have evolved over the last six years, and the way the regulations have evolved, we have all the verticals across finance: capital markets, banking, insurance, pensions. And we have introduced new verticals: shipbuilding, ship leasing, aircraft leasing, ancillary services and so on.

You know, in line with all the global financial centers. Now, with respect to experimentation: when you use the word "lab", you imply experimentation. The appetite for experimentation and for taking risks is much higher here than for, say, domestic regulators or regulators overseas, because of the absence of retail investors. So yes, GIFT City has an immense ability to serve as a lab for AI governance. However, building a financial center is like a 45-kilometer marathon; it's not an 8-kilometer dream run, so it will take its time. We are on an upward growth trajectory, and there is a certain gestation period for every financial center that cannot be skipped. We are in that gestation period. Once we reach critical mass, we are going to see a lot of things happening and coming out of GIFT IFSC.

Priyanka Jain

Thank you. Actually, I will go to Murli sir and the RBI FREE-AI report, the framework for the enablement of ethical AI. I think it is very forward-looking; it actually builds on existing regulatory controls and architecture to bring in a principle-based AI ecosystem. So my question to you is: if a company has embedded robust controls, model inventories, bias testing and continuous monitoring, should regulators reward such companies with calibrated supervisory relief? In other words, is there a safe harbor for somebody who has put in risk-based controls but is a first-time defaulter?

Murlidhar Manchala

Yeah. In fact, in the same report, it was suggested that entities which put in place all the guardrails, and which, in case of any lapses, do the root-cause analysis and try to address the problem, should get a lenient supervisory approach from the regulator, rather than the lapse being seen as an overarching risk area. That is something we recognize. So, on both fronts: one, we understand the technology is probabilistic and can have lapses; and two, in terms of governance, if you put in the guardrails, the processes and the mechanisms across the lifecycle, the main focus is that the customer does not face the risk. It should be transparent to the customer; it should not be a black box, rather it can be a glass box, and it should be understandable to the customers. Once all these measures are taken into consideration by the entity, in terms of governance as well as processes, then, because of the nature of the technology as we presently understand it, it can lead to some aberrations; but as long as this is handled through the right process, with incident reporting mechanisms and manual overrides in place, supervision should not treat it as a systemic or greater risk; rather, you should allow a first-time lapse. And in terms of rewarding it, we also suggested that there could be an award for AI in finance, particularly where specific work has been done.

Priyanka Jain

Thank you. Vikram, I think you have a vantage point here because you are a global infrastructure player; you are seeing regulatory trends across the US, UK, Singapore and many other markets. You have heard how the panel has been shaping up, from the policymakers to the international financial center and the RBI. As an infrastructure provider, how are you looking at cybersecurity and its evolution in the age of generative AI?

Vikram Kishore Bhattacharya

Thanks so much, Priyanka. I would just make one correction: I am a cloud service provider, and not merely an infrastructure provider. I think one of the things is that, for good or for worse, we have seen the benefits of generative AI, but we are also seeing bad actors use generative AI for phishing attacks, credential attacks and malicious code. So with the good come the challenges. But one of the more important elements is that while it is serving as an accelerant to existing methods, I don't think it is foundationally changing the nature of the attacks. In fact, there was a report that came out in 2025.

It talks about how generative AI has lowered the barriers for a lot of these threat actors. But I go back to what I said: because the nature of attacks has not foundationally changed, the same principles and foundations of cybersecurity that held true before GenAI still hold true: multi-factor authentication, strong passwords, regular updates, scanning your systems. It is imperative for organizations to get these fundamentals right, especially in financial services, which are always being attacked. And India is a country where, beyond the banks, we have a huge citizenry with different levels of financial literacy, so the question is how you use these tools to safeguard the financial system. In that respect, a lot of kudos to the RBI for thinking about it along these principle-based lines, but also to the banks for actually leveraging these technologies. One of the things you always need to do is trust service providers like us, but banks should also verify, and that is done through standards like ISO or NIST, and through independent third-party reports that validate the various controls that are there. And, a point I was making a little earlier, you have to become an active participant in cybersecurity; you can no longer be a passive passenger, because the landscape is changing, and as more and more people digitize, so do the people who are willing and looking to attack any vulnerability.

So GenAI does provide you with the tools, because, again, I am a believer not just in having a human in the loop, but in having AI in the loop. How do you use these technologies to have faster responses? How do you automate scanning? How do you automate getting reports, to be able to make those value judgments at the right time? That requires skilling, and it requires awareness, not just about something like AWS or the cloud. The work that regulators as well as cloud service providers are doing through awareness programs ensures that the more people understand the technology, the better the framework and the groundwork will be for adoption.

Thank you.

Priyanka Jain

I would also refer to our earlier discussion this afternoon, where, rather than thinking about a human in the AI loop, humans can think of AI as being in their loop to move forward; I think that is a great paradigm shift we can look at. Sanjeev sir, I am going to come back to you, but I also want to give a backdrop to this question. India has never simply adopted technology; we have created it, adapted it, scaled it and governed it in our own way. We did it with identity, we did it with payments, and we did it with digital public infrastructure. The governance frameworks around AI are beginning to emerge, and they are diverging globally, with the US being innovation-led, the EU being compliance-led and China being state-led. Where is the axis on which India is going to strategically position itself, and how are you looking at it from your lens?

Sanjeev Sanyal

So, I think I will continue from what I was saying earlier. Now, we need to be very, very careful that we don't end up with a bureaucratic risk-based system. This is an emergent technology; it will evolve in all different ways, and we will have to be very creative about this. Now, there is a difference between systems as architecture and AI. AI is an emerging thing. It is not just infrastructure in the sense that you can think of UPI or digital identity as infrastructure; those don't in themselves have emergent behaviors. AI has emergent behaviors, i.e. it evolves and interacts with other forms of AI, which is why I said you need to be fundamentally suspicious of anybody who says they have a very clear idea where this whole thing is going.

We don't have a clear idea at all; nobody on the planet has a clear idea where it is going. So we do need some regulation, and we need to be very careful about having humans in the loop. As I said right at the beginning, you need systems with switch-off buttons. You need to create what are called, in finance, Chinese walls, which separate different tracks. As I said earlier, I am not a huge fan of the AI of everything; I think that is dangerous and will lead to bad outcomes. However, AI can be run in compartments rather well, and why don't we use that? In any case it uses less energy, and in any case it is better at solving bounded problems. When you give AI an unbounded problem it tends to hallucinate, because unfortunately it has learnt another human trait: it does not like to tell you "I don't know"; it would rather make up stuff. So I think it is better that we give it bounded problems, let it solve those bounded problems and get back to us. Going for this AI, or internet, of everything, where everything is interconnected, sounds very good; but just last July, or the July before, we saw what happened when one very small piece of code in a Microsoft program, which was, by the way, static, not even a fluid one, went wrong: it caused havoc in airports, ATMs, all kinds of things around the world. Now imagine the same thing happening in a system with emergent characteristics; by the time you fix one bit of it, the problem has flowed into some other part of the system. So I personally think we need to create firewalls. A forest fire is also an emergent thing, and the way we control it is not by predicting where the fire is coming from and where it will go; we simply cut firebreaks from time to time. We do that in finance all the time: we don't try to work out what the conflict of interest is, we simply ban situations where conflicts of interest will emerge. And the same thing is true of skin in the game. I think we need to work out, ex ante, where in the chain the responsibility lies. I personally think it should be at the level where the algorithm is made public for use: whoever is making it is responsible, even if the data is wrong; you cannot blame the data. Somebody else may disagree; the point of the matter is that we need very clear points of punishment when things go wrong, and we need audit systems for explainability. There is nothing very deep about this; after all, every company listed in the stock market has got to audit itself several times a year. Why can't we ask major AI companies to be audited?

If you cannot explain why your results are turning out badly, you shut it down. Even relatively small companies have to go to a chartered accountant several times a year, and the chartered accountant has to sign off. Maybe we have a chartered AI audit for anything that goes beyond some threshold. And given how potentially dangerous, and lucrative, this is, I don't think we should see that as a problem. That is rather than doing what many others say: they understand it is dangerous, so they say, why don't we have risk-based regulation? Now, ex ante, you cannot work it out. All you will do is end up with regulations that become too stringent and kill the sector.

Rather, along the way, you have a system of explainability audits. With that, let me hand it back.

Priyanka Jain

Mr. Kamat, I'm going to come to you. Economists worry about both under-regulation that creates instability and over-regulation that kills dynamism. Where do you see GIFT City? Because, again, it is at the intersection of the local and the global, and I want to hear your views on it.

Praveen Kamat

That is the problem facing all regulators worldwide across the financial sector. Over-regulation repels innovation; under-regulation repels serious long-term capital. So where do you draw the balancing equilibrium point? Let me explain with a simple example. I joined SEBI, the Securities and Exchange Board of India, in 2008 and was posted to the surveillance department. In 2008 itself, the financial crisis was in full flow. In our surveillance systems, which are very, very powerful, we noticed 1,000 orders being entered in a span of a couple of microseconds. We were wondering how this was possible; how can a human enter so many orders? Then we came to know that algorithmic trading terminals had been deployed by certain entities in the stock market.

When we dug deeper, we came to know that it was initially deployed in 2004 by one entity, and then slowly the volumes were increasing. They did not reach a critical point immediately, but they were slowly increasing. In 2010 the inflection point came, when algorithmic trading reached critical mass, and SEBI came up with guidelines to safeguard retail investors and preserve financial stability. So here is a perfect example where an innovation in the capital market, algorithmic trading, was deployed by entities for a good six years. It was not regulated, it was being used, and the regulator did not do anything to stop it. But when the regulator issued the guidelines, the necessary safeguards were put in place.

However, at the same time, no brakes were applied on the rollout of the innovation. Algorithmic trading, even after the guidelines, grew exponentially in the Indian capital market to where it is today. In the same manner, we hope to facilitate innovation in GIFT IFSC. We have sandboxes in place for startups as well as established entities; they can roll out their AI pilots in the sandbox. The goal is to cap the risk. Like sir said, it is very difficult to identify all the risks, but whatever risks can be identified, let us cap them, without going into the internal technical mechanics, and then see how it plays out. Based on the data you receive from the experimentation, the regulations can be tailored accordingly.
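
The surveillance example earlier in this answer, spotting order rates no human could produce, can be sketched as a sliding-window count over order timestamps. This is a toy illustration only: the window, threshold and timestamps below are invented, and real exchange surveillance is far more sophisticated.

```python
from collections import deque

def burst_flags(orders, window_us=1_000, max_orders=50):
    """Flag timestamps (in microseconds) where the number of orders within
    the preceding `window_us` exceeds `max_orders`, a crude proxy for a
    rule that spots non-human order-entry rates."""
    recent = deque()
    flags = []
    for t in sorted(orders):
        recent.append(t)
        while recent[0] < t - window_us:   # drop orders outside the window
            recent.popleft()
        if len(recent) > max_orders:
            flags.append(t)
    return flags

# 60 orders spaced 8 microseconds apart: clearly not a human on a keyboard.
ts = [i * 8 for i in range(60)]
print(len(burst_flags(ts, window_us=1_000, max_orders=50)))   # → 10
```

In a sandbox setting, a rule like this is the kind of risk cap that can be tuned as experimental data comes in, before any technical mechanics are prescribed.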

Thank you.

Priyanka Jain

I know we are at time, but I am going to still extend by another few minutes, because I have such a prestigious panel. Could I come back to you with a quick rapid fire: one risk that we are underestimating when it comes to AI?

Murlidhar Manchala

No, in general we would not like to talk about risk; that is our approach. Our keynote speaker, Ajay Kumar Chaudhary, was also at the helm when the department was formed. So the risk is maybe underestimating the risk itself; that is what I can say. And that can be addressed only through governance, particularly given the present emergent state of the technology.

Priyanka Jain

Actually, I like what Sanjeev sir was telling us: it is never going to be risk-free, but we will have to move forward, figure it out, and do it in as compartmentalized a manner as possible. So, any risk that we are overestimating? Anybody from the panel who wants to talk about a risk we are overestimating? Let's give Vikram a chance.

Vikram Kishore Bhattacharya

I mean, I think the fundamental point would be that there is no zero risk; it is how you equip yourself to handle risks. Because, as Mr. Chaudhary and Mr. Sanyal also pointed out, the question for a regulator, or a regulated environment, is how you create the tools to be nimble and to adapt as the technology adapts, and I think that is the important element. Right now the tools are there; there is so much we can do that maybe we are not doing as well, so maybe we can focus on the here and now and equip ourselves to be nimble enough to deal with anything that comes. Because anybody who is telling you what is coming with a certain amount of certainty, I take that with a pinch of salt.

I think that the future is a little unknowable at this point of time but there are so much that is known and we should be able to tackle that right now. I

Priyanka Jain

think that’s great. Sanyal sir I’m going to again come to you. One reform that India must prioritize, what is your view on it? That’s

Sanjeev Sanyal

Copyright law. Who is the owner of a particular innovation? At which point do you call it an innovation? And is that innovation owned by the person who put the prompt in? Is it owned by the person on whose data it got trained? Or it belongs to the algorithm that created that innovation? So all of these I would say that we need to begin to think of a judicial system that can deal with these kinds of problems. We already have a cloud judicial system. But do remember that these very different kinds of, and I would almost call them philosophical problems, are going to turn up at our doorstep very, very quickly. And we need to be thinking about them.

Priyanka Jain

Thank you. When UPI came in, about a decade ago, and we have the benefit of having the NPCI chairman himself in the room, it was more than payment: it was trust in an invisible system. Today AI is becoming that invisible system, sitting quietly in our credit-underwriting decisions, our onboarding flows, our grievance-redressal systems, even regulatory reporting. So it was a great discussion on how we embed trust in a fast-evolving AI system, because at the end of the day we are thinking about the theme of the summit, people, planet and progress, all in the same breath. People: how do we protect them from opaque systems and bias? Planet: how do we scale sustainably and responsibly? And progress, because it doesn't have to be only fast innovation; it has to be fair innovation. A lot of great thoughts came out of this panel, and I am extremely grateful to everybody who made time for the discussion. Sanjeev sir, could we have some closing thoughts from your side?

Sanjeev Sanyal

Well, you mentioned trust. Let me say that while it is fair to trust UPI, as I said it is, relatively speaking, not an emergent system. Deliberately so, in fact: you don't want UPI innovating on the interface. It can innovate at the back end however much you want, but you don't want any surprises. If I send somebody 100 rupees, he cannot get 120 rupees, or 80 rupees, or 100 rupees "on average". That can't be the basis of a UPI. So in that sense UPI is backbone infrastructure; it is deliberately not emergent.

AI systems, however, are emergent. They can give you different answers at different points in time, depending on what they were trained on, what the context is, and what inputs you have, and in fact that is the innovation. If you fix it in a box to start with, you won't get the innovation. On the other hand, if you give it something open-ended, presumably it will improve, but sometimes it may deteriorate and sometimes it may lie to you. So what I am trying to say is that we should use artificial intelligence, but we certainly should not trust it. In fact, its future rests on a certain healthy skepticism that we must maintain about its capabilities. It will do amazing things, but in my view it is probably much, much better at solving bounded problems. It can play chess, for example, very well, but I doubt it can plan your career; that is an unbounded problem. So if that is how you think about AI, then, as I said, you need to think through how you apply it in particular boxes, where there is a clear set of things you are trying to do.

So as I said, bounded problems and even there, verify.
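The contrast drawn above, a payment rail that must behave as a pure deterministic function versus a generative model that samples from a distribution, can be illustrated with a toy sketch. `upi_transfer` and `generative_answer` are hypothetical names for the purpose of the illustration, not any real system:

```python
import random

def upi_transfer(amount: int) -> int:
    # Deterministic: the recipient receives exactly what was sent, every time.
    return amount

def generative_answer(rng: random.Random) -> str:
    # Stochastic: an answer drawn from a distribution, so repeated calls
    # with the same "prompt" can disagree -- use it, but verify.
    candidates = ["answer A", "answer B", "answer C"]
    return rng.choices(candidates, weights=[0.5, 0.3, 0.2], k=1)[0]

# 100 rupees sent is always 100 rupees received -- no surprises.
assert all(upi_transfer(100) == 100 for _ in range(1000))

# The same query, asked 1000 times, yields more than one distinct answer.
rng = random.Random(42)
answers = {generative_answer(rng) for _ in range(1000)}
assert len(answers) > 1
```

The two assertions capture the argument: the backbone system is checked for exact equality, while the emergent system can only be characterized statistically, which is why its outputs must be verified rather than trusted.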

Priyanka Jain

With that, we have audience questions. We have one question from Aditya, the founder of First Tile.

Audience

Thank you. Good evening. That was an incredible set of points. I actually made some really interesting notes about the capital-markets parallel that you drew, Sanjeev; I thought that was a really interesting way of looking at AI. We have been to so many summits, and I think the way you have framed it, about risk and ex-ante versus ex-post, is very interesting. I had one question for you, and two suggestions or requests for Praveen and Davis. From an AI-stack perspective, every summit and every conversation across different countries is looking at all the different components of the stack, and two things come up in most of these conversations: sovereign data assets, and the leverage that comes out of them in terms of tools, models and so on.

Where is India's perspective in all of this, from sovereign data-asset utilization to model leverage? Different countries are looking at their stack as their own stack, through which they will grant access, and so on. It would be great to get your perspective on that.

Sanjeev Sanyal

So obviously India, with its very large population, has stacks of information on all kinds of things, from health to consumer behavior. In some ways this is a good place, with a huge amount of data, for experimentation on human behavior and so on. But of course, if data is the new oil, we need to be clear that we own the rights to it, if it is our data. I am not even getting into the privacy issue; I am assuming that has all been taken care of and that we are using anonymized data. Even then, we should at least have the rights to that data, and also to some part of the processing of it. There is no point in saying that we have the data when we have neither the rights to it nor the oil rigs to pump it out nor the refineries to process this new oil. This is the context in which, as you may have seen, the latest budget announced almost a quarter-century tax holiday for putting up data centers in this country. That is not a trivial thing to do. Why are we doing it? Basically because, as I said, data centers are the oil rigs of this new kind of oil.

And then, of course, we need new companies that will process this oil. Those are the new… We have created one, EI-LLM, but frankly everybody gets very excited about LLMs. The LLM is, in my view, only a very limited, and not even the most interesting, use of artificial intelligence. It just happens to be linguistically talented, and consequently we use it for that. But there are many, many more interesting uses of AI. And as I keep saying, we need to create an ecosystem, and we all say, oh, you need half a trillion dollars of investment to create it. Actually, no. Much of where you will end up with these refineries, so to speak, will be quite bounded problems in certain spaces.

So there is more than enough space for startups with much more modest budgets to do interesting things in AI, and I am not just talking about people building use cases on other people's models; I mean literally bottom-up uses of AI. So I think there is a lot to be done here. It is an open space. This is basically like discovering the Americas. Yes, Spain did have an initial starting advantage, but the greatest empire in the world was actually built by Britain, which was a late starter. So there are many countries in the world whom you would not consider particular players in this game today who will also turn up here.

And one of them could do much, much better than those you think are at the cutting edge today. So this is an emergent situation: all kinds of unintended consequences and uses, positive and negative, will come out of it. The key here is to be nimble, keep your eyes open, including on the regulatory front, and not have set ideas about where this whole thing is headed, because frankly we do not know.

Audience

No, thanks for that. I am the founder of First Eye, which is a customer data platform. We work with a large number of enterprises on data, all consented, so we get a ringside view of the application of everything you are describing. And this leads me to my suggestions. As a supplement, we have AIKosh, which is a growing repository of datasets, and for the financial sector we are looking to aggregate, starting with synthetic data, and then perhaps take correlated data from the regulated entities with their consent, so that it can come into use. That actually goes towards my suggestion for the two of you. Praveen, when you spoke about the sandbox from an IFSCA perspective, the ability to extend that beyond the IFSC to the other regulators would be very interesting for folks like us, because we work with a number of entities that cut across different regulators. An associated point: so many regulations come in today, and I see two opportunities.

One is that different entities interpret the regulations differently. The second is that, as a large data processor, not a data owner but a data processor, we are one of the stakeholders in the whole process, yet today we may not have adequate access or a seat at the table from a regulatory-interpretation standpoint. There, I think, is an opportunity to define something like a consent-backed API for data consumption, with a regulatory definition of it and participation from a data processor like us. We would love to see whether there are processes that allow somebody like us to engage with the regulators.
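The "consent-backed API" the speaker proposes can be sketched as a store that serves a record only when an explicit, unexpired consent artefact covers the requested purpose. This is a minimal illustrative sketch under assumed semantics; the `Consent` fields, purpose strings, and class names are all hypothetical, not a regulatory definition:

```python
from dataclasses import dataclass
import time

@dataclass
class Consent:
    subject_id: str
    purpose: str          # e.g. "credit_underwriting" -- illustrative purpose tag
    expires_at: float     # unix timestamp after which consent lapses

class ConsentBackedStore:
    def __init__(self):
        self._consents: dict = {}   # (subject_id, purpose) -> Consent
        self._records: dict = {}    # subject_id -> record

    def grant(self, c: Consent) -> None:
        self._consents[(c.subject_id, c.purpose)] = c

    def put(self, subject_id: str, record: dict) -> None:
        self._records[subject_id] = record

    def read(self, subject_id: str, purpose: str) -> dict:
        # Access is purpose-bound: data consented for underwriting
        # cannot be read for, say, marketing.
        c = self._consents.get((subject_id, purpose))
        if c is None or c.expires_at < time.time():
            raise PermissionError("no valid consent for this purpose")
        return self._records[subject_id]

store = ConsentBackedStore()
store.put("user-1", {"income_band": "B"})
store.grant(Consent("user-1", "credit_underwriting", time.time() + 3600))

ok = store.read("user-1", "credit_underwriting")   # covered by consent
denied = False
try:
    store.read("user-1", "marketing")              # no consent for this purpose
except PermissionError:
    denied = True
```

The design choice worth noting is that consent is keyed on (subject, purpose) rather than on the subject alone, which is what makes different regulatory interpretations testable against a single definition.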

Praveen Kamat

We are open to that idea, but you have to remember one thing. The IFSC is its own jurisdiction, you know; it has a set of rules that differ from domestic India's. There is an interoperable sandbox mechanism in place between IFSCA, RBI, SEBI and IRDAI, so a solution that spans the four regulators can be tested within the sandbox. But the issue is not technological, and it is not fiscal or financial; it is legal. For example, in India INR transactions are the norm, right? In the IFSC, INR transactions are not permitted; 16 foreign currencies are enabled, and you have to transact in those 16 currencies. If your solution is not compatible across these areas, just to give you an example, then the sandbox experimentation will not go through.

There are many more nuances like this that affect the rollout of pilots within the interoperable sandbox; that was just one example. With respect to the movement and processing of data, I will not comment at the moment, because certain things are in the works at IFSCA. I leave that to my RBI colleague.

Murlidhar Manchala

As my colleague said, we already have an interoperable sandbox across regulators, and it is on tap. Earlier it was theme-based, but now it is on tap, and any type of product can be tested in it. But just to clarify: an entity comes to the sandbox only when it feels that its existing product or service would run up against one of the regulations. So very few entities come to the sandbox; in general, if they feel they are compliant with the regulations, there is no need to come. But we are also thinking of another sandbox where, beyond monitoring against the regulations, we can support the innovation with, say, compute, data or tools. That is also in the thought process.

Priyanka Jain

We have been one of the beneficiaries of the sandbox and the hackathon at 5Money, and the process has been phenomenal in the way the RBI fintech teams engaged, so maybe, Aditya, I can share some notes with you offline. But thank you: this has been a phenomenal panel and a great discussion on embedded governance. As AI makes space for itself in all things financial services, how do we make space for governance in AI? That was the theme of the discussion, and I am very pleased to have heard the views of this panel and grateful to everyone for making time. Thank you, everyone. I am not going to say anything more, apart from thank you; we will have a quick giving of mementos from the India AI Mission, and my colleague Kriti will do that.

[applause]

Thank you.


Related Resources: Knowledge base sources related to the discussion topics (31)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“India’s population‑scale public digital infrastructure such as UPI and other platforms, showing how interoperability, transparency and scale have reshaped financial participation.”

The knowledge base notes that India’s digital public infrastructure emphasizes trusted, interoperable, and scalable systems that transform financial inclusion, confirming the report’s description of UPI-style infrastructure [S29] and the broader DPI discussion [S42].

Additional Context (medium)

“AI is now being super‑imposed on this foundation, integrating with payment systems, credit‑risk platforms, supervisory frameworks and cybersecurity architectures that already operate at national scale.”

Sources describe AI being applied to payment networks (e.g., MasterCard’s AI use in payments) and AI’s role in digital public infrastructure, providing context that AI is being layered onto existing financial and cybersecurity systems [S106] and within India’s DPI ecosystem [S42].

Confirmed (high)

“Risk‑based governance treats AI as a systemic financial utility.”

A dedicated discussion on a risk-based AI policy for the banking sector confirms that a risk-based, systemic approach to AI governance is being advocated for finance [S1].

Additional Context (medium)

“Embedded governance pillars: proportionality, fairness & non‑discrimination, explainability & transparency, accountability.”

The knowledge base lists fairness, non-discrimination, and the need for governance frameworks that include accountability and transparency as core AI governance concerns, aligning with the reported pillars [S103] and the broader ethical AI discussion [S102].

Additional Context (low)

“The moderator reminded participants that the summit’s overarching aim was to treat AI governance as an embedded layer within the existing technology‑oversight framework.”

Opening remarks from the AI Policy Summit emphasize shaping governance to be inclusive and integrated across the technology landscape, providing contextual support for the claim of an “embedded layer” approach [S98].

External Sources (115)
S1
Secure Finance Risk-Based AI Policy for the Banking Sector — -Vikram Kishore Bhattacharya- Role: Cloud service provider representative; expertise in cybersecurity and cloud infrastr…
S2
Secure Finance Risk-Based AI Policy for the Banking Sector — – Ajay Kumar Chaudhary- Moderator – Ajay Kumar Chaudhary- Murlidhar Manchala- Sanjeev Sanyal – Ajay Kumar Chaudhary- S…
S3
Secure Finance Risk-Based AI Policy for the Banking Sector — -Praveen Kamat- Role: Official from GIFT City IFSC (International Financial Services Centre); expertise in financial reg…
S4
Secure Finance Risk-Based AI Policy for the Banking Sector — -Sanjeev Sanyal- Role: Economic Advisor to the Prime Minister; described as a macro thinker, historian, and strategic ge…
S5
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — Thank you so much. Thank you. Our panelists need no introduction. I’m going to keep it very fast so that we can make the…
S6
Secure Finance Risk-Based AI Policy for the Banking Sector — -Priyanka Jain- Role: Panel moderator and discussion facilitator; mentioned as being from 5Money and having experience w…
S7
Global Standards for a Sustainable Digital Future — Maike Luiken: Well, we do continue to develop standards around AI. And of course, we talked here a lot about AI based on…
S8
Secure Finance Risk-Based AI Policy for the Banking Sector — – Ajay Kumar Chaudhary- Murlidhar Manchala
S9
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S10
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S11
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S12
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S13
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S14
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S15
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — And together with these sutras, there are six pillars under which these recommendations are classified. And these have, …
S16
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So coming to the first question, you know, that you asked, I think, you know, I think there are obviously th…
S17
WS #123 Responsible AI in Security Governance Risks and Innovation — Drew emphasizes that governance is not something that can be added after the fact as an afterthought. Instead, it needs …
S18
What is it about AI that we need to regulate? — TheWS #123emphasized that”governance is not something that can be added on after the fact. It’s not an afterthought. It …
S19
AI Meets Cybersecurity Trust Governance & Global Security — AI -related risk is really no different. And third, framing privacy and encryption as tradeoffs against security ultimat…
S20
Advancing Scientific AI with Safety Ethics and Responsibility — “So we consider the distribution aspects of the data and models.”[122]. Artificial intelligence | Monitoring and measur…
S21
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Yeah, thanks, Steve. Very well covered. If I can add just a few more points. I think one of the challenges we see is cop…
S22
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — But as far as the regulated entity. As far as the regulated entity is dealing with the customers are concerned, we would…
S23
WS #205 Contextualising Fairness: AI Governance in Asia — 4. Difficulty in “cleaning” biased data: Mueller argued that historical data inherently reflects past biases and cannot …
S24
Day 0 Event #171 Legalization of data governance — He Bo: Thank you. Good afternoon, everyone. I’m He Bo from China Academy. Academy of Information and Communication T…
S25
EU AI Act (Commission proposal) — (44) High data quality is essential for the performance of many AI systems, especially when techniques involving the tra…
S26
https://dig.watch/event/india-ai-impact-summit-2026/medtech-and-ai-innovations-in-public-health-systems — So as we know that health is a state subject, so ultimately government of India works in collaboration with the state go…
S27
Panel Discussion Data Sovereignty India AI Impact Summit — Compute infrastructure must be within national control as it processes, stores data and builds models, but can use forei…
S28
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — The conversation addressed critical questions about technological sovereignty and long-term sustainability. Kumar distin…
S29
Building Indias Digital and Industrial Future with AI — Another thing I mean in February 2019, 7 years back we had something called draft e -commerce policy. Now the tagline of…
S30
Next Steps for Digital Worlds — In conclusion, the Metaverse and virtual reality offer exciting possibilities for connectivity and advancements in vario…
S31
The Battle for Chips — Dependence on Taiwan’s chip manufacturing and its complex international supply chain poses risks to the global economy. …
S32
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — “And that is something which has resulted into that 38 ,000 GPUs, which government is talking about, the shared compute …
S33
AI data centre boom sparks incentives and pushback — The explosive growth of AI and cloud computing hasignited a data centre building boomacross the United States, with stat…
S34
Deepfake and AI fraud surges despite stable identity-fraud rates — According to the 2025 Identity Fraud Report by verification firm Sumsub, the global rate of identity fraud has declined …
S35
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — So one of the key application, key product what we have developed is Fraud Pro. What it does, it actually detects the fr…
S36
Responsible AI in India Leadership Ethics & Global Impact — “I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine …
S37
Boosting women digital entrepreneurship: Bridging the gender financing gap (UNCTAD) — Overall, the analysis provides valuable insights into the significance of MSMEs, the challenges faced by MSMEs in access…
S38
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — Sharma identified AI’s transformative potential in financial services, arguing that “access to credit creates wealth.” H…
S39
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S40
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — In the document and then in our trainings, we have four pillars. They’re all linked. The first pillar is context-based a…
S41
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S42
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And as we look at the journey on AI, which is just beginning for most of the world, what I see is if I look at the US, f…
S43
Building Population-Scale Digital Public Infrastructure for AI — This is a strategic concern for national security and autonomy, as very few countries can be completely digitally sovere…
S44
State of Play: AI Governance / DAVOS 2025 — The discussion highlighted tensions between regulation and innovation. While some advocated for light-touch governance t…
S45
Open Forum #33 Building an International AI Cooperation Ecosystem — Risk-based regulatory approaches are needed but implementation remains challenging
S46
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Armando Guío:Thank you very much. Thank you, Axel, for your kind introductions. And it’s a real pleasure to be here in s…
S47
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Participant 1 (Maureen) This comment reveals a fundamental practical barrier that challenges assumptions about regulato…
S48
Emerging Shadows: Unmasking Cyber Threats of Generative AI — Dr. Yazeed Alabdulkarim:Yeah, regulations are basically a controversial topic because many believe that it’s challenging…
S49
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — The defensive applications include autonomous response systems, intelligent threat detection, and AI-powered security ag…
S50
Secure Finance Risk-Based AI Policy for the Banking Sector — “We have sandboxes in place for startups as well as established entities”[60]. “If your solution is not compatible acros…
S51
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — High consensus with strong implications for global sandbox development. The alignment suggests that despite different re…
S52
WS #35 Unlocking sandboxes for people and the planet — 3. European Union: Katerina Yordanova discussed the European context, particularly the AI Act’s sandbox requirements. Sh…
S53
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — However, financial requirements for entering a regulatory sandbox can be challenging for startups and small innovators. …
S54
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — Popelka provided a personal example, describing how her own voice had been deepfaked and used in an attempted attack dur…
S55
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Anastasiya Kozakova:Thank you very much. It’s a pleasure to be here. I represent the civil society organization. I work …
S56
Open Forum #3 Cyberdefense and AI in Developing Economies — Artificial intelligence has fundamentally changed the speed and dynamics of cyber attacks, allowing threats that previou…
S57
The Innovation Beneath AI: The US-India Partnership powering the AI Era — However, Jeff Binder warned that hardware breakthroughs could potentially make entire data centres “almost instantly, at…
S58
Critical infrastructure — AI plays a pivotal role in safeguarding critical infrastructure systems. AI can strengthen the security of critical infr…
S59
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Furthermore, the analysis underscores the importance of considering regional regulations and governance in cybersecurity…
S60
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Evaluation of sandbox implementation is another crucial aspect discussed in the analysis. It emphasizes the need to meas…
S61
WS #100 Integrating the Global South in Global AI Governance — Martin points out the positive role of regulatory sandboxes in enabling safe experimentation with AI technologies. These…
S62
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — She emphasizes the need for addressing the unique risks associated with AI in their development and implementation, ensu…
S63
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — In conclusion, Brazil’s ongoing efforts to establish a comprehensive legal framework for AI regulation are commendable. …
S64
Comprehensive Report: European Approaches to AI Regulation and Governance — A particularly concerning dimension emerged around mental health impacts of AI use. An audience member reported people b…
S65
Secure Finance Risk-Based AI Policy for the Banking Sector — Ajay Kumar Chaudhary opened by highlighting India’s opportunity to lead in AI development while managing associated risk…
S66
WS #123 Responsible AI in Security Governance Risks and Innovation — Drew emphasizes that governance is not something that can be added after the fact as an afterthought. Instead, it needs …
S67
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — Beyond safety by design, companies need governance from design embedded at every stage from ideation through deployment …
S68
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And as we look at the journey on AI, which is just beginning for most of the world, what I see is if I look at the US, f…
S69
Building Indias Digital and Industrial Future with AI — As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain t…
S70
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S71
Building Population-Scale Digital Public Infrastructure for AI — This is a strategic concern for national security and autonomy, as very few countries can be completely digitally sovere…
S72
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S73
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — One aspect of the proposed AI regulation in Brazil is its risk-based approach. Critics argue that this approach only con…
S74
WS #162 Overregulation: Balance Policy and Innovation in Technology — Paola mentions different regulatory approaches such as risk-based, human rights-based, principles-based, rules-based, an…
S75
State of Play: AI Governance / DAVOS 2025 — The discussion highlighted tensions between regulation and innovation. While some advocated for light-touch governance t…
S76
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — By fostering interaction, investigation, and the exchange of ideas, sandboxes serve as a stepping stone towards implemen…
S77
WSIS Action Line C6: Digital Ecosystem Builders in action: Redefining the role of ICT regulators — Al Rejraje promotes regulatory sandboxes as a key tool for de-risking investment in emerging technologies. These sandbox…
S78
Emerging Shadows: Unmasking Cyber Threats of Generative AI — Dr. Yazeed Alabdulkarim:Yeah, regulations are basically a controversial topic because many believe that it’s challenging…
S79
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — Hiroshi Honjo:Yes. So pretty much close to what Dr. Balushi said. So as a private company, we kind of state the AI gover…
S80
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — The defensive applications include autonomous response systems, intelligent threat detection, and AI-powered security ag…
S81
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S82
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S83
Empowering India & the Global South Through AI Literacy — The discussion maintained an optimistic and collaborative tone throughout, with panelists sharing positive field experie…
S84
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S85
Evolving Threat of Poor Governance / DAVOS 2025 — The tone was largely serious and analytical, with panelists offering thoughtful insights on complex governance challenge…
S86
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S87
WS #255 AI and disinformation: Safeguarding Elections — The tone of the discussion was largely analytical and cautiously optimistic. While speakers acknowledged serious risks a…
S88
WS #187 Bridging Internet AI Governance From Theory to Practice — The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers…
S89
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Mariana Rozo-Pan: Thank you, Sophie. And hi, everyone. Good morning, good afternoon, good evening. We are very excited a…
S90
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — The discussion maintained a professional, collaborative tone throughout, characterized by constructive problem-solving r…
S91
Dedicated stakeholder session (in accordance with agreed modalities for the participation of stakeholders of 22 April 2022) — The overall tone was constructive and collaborative, with countries sharing their experiences implementing CBMs and offe…
S92
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion maintained an optimistic and collaborative tone throughout, characterized by constructive problem-solving…
S93
Keynote: 2030 – The Rise of an AI Storytelling Civilization | India AI Impact Summit — The tone is consistently optimistic, visionary, and inspirational throughout. The speaker maintains an enthusiastic and …
S94
AI, Data Governance, and Innovation for Development — The tone of the discussion was largely optimistic and solution-oriented. Speakers acknowledged significant challenges bu…
S95
Partnering on American AI Exports: Powering the Future | India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S96
Opening address of the co-chairs of the AI Governance Dialogue — – **Moderator**: Role/Title not specified beyond being a moderator for the event. The co-chairs expressed their commitme…
S97
How to make AI governance fit for purpose? — – **Innovation focus** – Each representative emphasized avoiding over-regulation that could stifle technological advance…
S98
AI Policy Summit Opening Remarks: Discussion Report — And basically it shows that we can answer the call in a swift way when we need it. So what does it mean to be the AI gen…
S99
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Encouragement of job and income generation. This was the paradigm of the Declaration on Artificial Intelligence, which we app…
S100
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S101
(Plenary segment) Summit of the Future – General Assembly, 5th plenary meeting, 79th session — The Prime Minister advocates for the responsible development and use of artificial intelligence. This argument stresses …
S102
Ethical AI: Keeping Humanity in the Loop While Innovating — Absolutely. And it’s about having these different entities around the table, but also having different governments and h…
S103
GOVERNING AI FOR HUMANITY — – Discrimination and unfair treatment of groups, including based on individual or group traits, such as gender, group is…
S104
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — India’s approach to Digital Public Infrastructure (DPI) emphasizes the importance of civil society and citizen engagemen…
S105
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — However, there are concerns that need to be addressed when implementing DPI. One major concern is the risk of exclusion …
S106
Agentic AI in Focus Opportunities Risks and Governance — Absolutely, and hi, everyone. It’s great to be here with you. As you said, for MasterCard, AI is nothing new. We have be…
S107
AI in 2026: Learning to live with powerful systems — Initiatives that emphasise human-centred governance remind us that AI should serve human flourishing, not redefine it. Thi…
S108
Skilling and Education in AI — Create stackable, modular learning systems that can adapt to changing requirements rather than fixed long-term programs
S109
Agents of Change AI for Government Services & Climate Resilience — The minister says AI is moving beyond simple question answering toward agents that can act autonomously. This marks a sh…
S110
Shaping the Future AI Strategies for Jobs and Economic Development — Now, this session hits squarely within the summit’s trusted AI pillar, and deliberately so. Because trust is no longer a…
S111
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Parliaments are pivotal to ensuring coherence between domestic legislation, established human rights, and evolving inter…
S112
AI for Democracy: Reimagining Governance in the Age of Intelligence — I say this because the theme of this session, AI for Democracy, cuts to the heart of the matter. We are not simply debat…
S113
Scaling AI for Billions: Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S114
AI Governance Dialogue: Steering the future of AI — Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance with someone who has a un…
S115
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ajay Kumar Chaudhary
21 arguments · 136 words per minute · 2451 words · 1075 seconds
Argument 1
Governance pillars: proportionality, fairness, explainability, accountability (Ajay Kumar Chaudhary)
EXPLANATION
The speaker outlines four core pillars that should guide AI governance in finance: proportionality (risk‑based intensity), fairness and non‑discrimination, explainability and transparency, and clear accountability.
EVIDENCE
He enumerates the four pillars, stating that governance intensity should be risk-based (proportionality) [50-51], that fairness and non-discrimination are essential [52], that explainability and transparency must be ensured [53], and that accountability must be clearly defined [54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for proportional, fair, transparent and accountable AI governance is echoed in the EU AI Act’s emphasis on data quality and non-discrimination [S25] and in calls for glass-box transparency for customers [S22]; overall embedded governance is highlighted in the Secure Finance policy discussion [S1].
MAJOR DISCUSSION POINT
Governance pillars: proportionality, fairness, explainability, accountability
DISAGREED WITH
Sanjeev Sanyal
Argument 2
Governance must be built‑in by design, not an after‑thought (Ajay Kumar Chaudhary)
EXPLANATION
AI governance should be embedded into the system from the outset rather than added later as a compliance overlay.
EVIDENCE
He stresses that governance cannot be an overlay applied after innovation has been scaled and must be embedded by design [46-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources stress that AI governance cannot be an after-thought and must be integrated throughout the lifecycle [S17][S18].
MAJOR DISCUSSION POINT
Governance must be built‑in by design, not an after‑thought
Argument 3
Continuous oversight to monitor model drift and bias (Ajay Kumar Chaudhary)
EXPLANATION
AI models need ongoing monitoring throughout their lifecycle to detect drift, unintended bias, or feedback loops, especially as data patterns evolve.
EVIDENCE
He notes that intelligent systems must be evaluated across economic cycles, stressed against extreme scenarios, and continuously overseen to guard against drift, bias, or reinforcing feedback loops [68-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Continuous monitoring of data and model distribution shifts is identified as essential for safety and bias detection [S20]; broader AI systems rely on ongoing oversight of large data sets [S5].
MAJOR DISCUSSION POINT
Continuous oversight to monitor model drift and bias
Argument 4
AI can expand financial inclusion but must avoid reinforcing historical biases (Ajay Kumar Chaudhary)
EXPLANATION
While AI offers opportunities to broaden access to finance, it must be designed to prevent the perpetuation of existing structural inequalities.
EVIDENCE
He warns that models trained on narrow, urban-centric data can misclassify or exclude segments that digital finance aims to integrate, thereby risking structural inequalities [42-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bias in historical data and the need for diverse datasets are highlighted as key to avoiding exclusion [S23]; AI-driven credit scoring can improve inclusion when designed responsibly [S38]; inclusion cannot be assumed without safeguards [S1].
MAJOR DISCUSSION POINT
AI can expand financial inclusion but must avoid reinforcing historical biases
Argument 5
Require representative training data, periodic impact audits, and redress mechanisms (Ajay Kumar Chaudhary)
EXPLANATION
To ensure fair outcomes, AI systems should use diverse datasets, undergo regular impact assessments, and provide mechanisms for individuals to seek clarification or redress.
EVIDENCE
He calls for representativeness in training datasets, periodic impact audits, community-level feedback, and institutional mechanisms that allow individuals to seek clarification and redress where automated decisions affect their financial standing [91-93].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act stresses high-quality, representative data and impact assessments for high-risk systems [S25]; transparency and redress are advocated as glass-box approaches for customers [S22]; expanding datasets is recommended to mitigate bias [S23].
MAJOR DISCUSSION POINT
Require representative training data, periodic impact audits, and redress mechanisms
Argument 6
India must own its data and develop domestic AI infrastructure to safeguard sovereignty (Ajay Kumar Chaudhary)
EXPLANATION
The speaker argues that strategic autonomy requires India to retain ownership of its data and build home‑grown semiconductor, cloud, and model capabilities.
EVIDENCE
He describes a five-layer AI stack, highlighting that over 90 % of advanced chips are controlled by a single firm, three firms dominate cloud capacity, and a handful command foundation models, underscoring the need for domestic innovation to protect economic sovereignty [100-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data sovereignty and domestic compute capacity are emphasized as strategic priorities for India’s AI stack [S27][S28][S32].
MAJOR DISCUSSION POINT
India must own its data and develop domestic AI infrastructure to safeguard sovereignty
Argument 7
Dependence on foreign chips, cloud providers, and foundation models threatens economic security (Ajay Kumar Chaudhary)
EXPLANATION
Reliance on external suppliers for critical AI components creates systemic vulnerabilities that could affect financial stability and national security.
EVIDENCE
He notes that one firm controls more than 90 % of advanced chips, three dominate cloud capacity, and a few control foundation models, which could threaten financial stability and economic sovereignty [100-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concentration of advanced chips and cloud services creates systemic risk, as noted in global chip supply analyses and Indian sovereignty discussions [S31][S27].
MAJOR DISCUSSION POINT
Dependence on foreign chips, cloud providers, and foundation models threatens economic security
Argument 8
Government incentives for data‑centres and home‑grown AI models to build a resilient stack (Ajay Kumar Chaudhary)
EXPLANATION
The speaker suggests that policy measures such as tax holidays for data‑centre construction can help develop a domestic AI ecosystem.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy incentives for data-centre construction are discussed in the context of rapid AI-driven infrastructure growth [S33].
MAJOR DISCUSSION POINT
Government incentives for data‑centres and home‑grown AI models to build a resilient stack
Argument 9
AI can cut fraud losses in high‑value payment environments by up to 30 percent
EXPLANATION
Ajay highlights that AI‑enabled detection systems can significantly reduce certain categories of fraud, improving the security of large‑value transactions.
EVIDENCE
He notes that AI-enabled detection can reduce certain categories of fraud losses by up to 25 to 30 percent in high-value payment environments, citing the experience of NPCI [35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Generative AI-enabled fraud detection tools have shown significant loss reductions, with industry reports confirming up to 30 % improvement [S34][S35][S36].
MAJOR DISCUSSION POINT
AI-driven reduction of fraud losses
Argument 10
AI‑driven granular risk assessment can broaden credit access for MSMEs
EXPLANATION
By leveraging transaction histories, cash‑flow analytics and behavioural signals, AI can create more nuanced credit scores for micro, small and medium enterprises, reducing reliance on traditional collateral‑heavy models.
EVIDENCE
He explains that transaction-level data, cash-flow analytics and behaviour indicators can provide nuanced insight into repayment capacity, especially for MSMEs that are currently outside the traditional credit framework, thereby reducing dependence on heavy collateral models [82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-based credit scoring that leverages transaction and behavioural data is highlighted as a way to expand MSME financing [S38].
MAJOR DISCUSSION POINT
AI enhancing financial inclusion for MSMEs
Argument 11
AI should be treated as a core financial utility and subject to the same resilience standards as other critical infrastructure
EXPLANATION
Ajay argues that AI is not a peripheral add‑on but a systemic component of the financial system, requiring the same level of accountability, transparency and robustness as any critical financial service.
EVIDENCE
He states that AI must be understood as a component of financial infrastructure that is systemically relevant and should be subject to the same standards of resilience, governance and accountability expected of any critical financial utility [44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI is described as a systemic component of financial infrastructure requiring the same resilience standards as other utilities [S1][S5].
MAJOR DISCUSSION POINT
AI as critical financial infrastructure requiring robust governance
Argument 12
AI markedly improves operational efficiency and precision across the entire financial value chain.
EXPLANATION
By automating complex analyses and decision‑making, AI reduces processing time, cuts errors, and enhances the accuracy of credit assessments, fraud detection, and other core financial functions.
EVIDENCE
He describes how AI models analyze transaction histories and behavioral signals to generate granular borrower assessments, how AI detects anomalous activities within milliseconds, and how the diffusion of AI across the financial value chain enhances efficiency and precision [33-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s role in automating analyses and enhancing precision across financial services is noted in discussions of AI-enabled platforms built on large data foundations [S5][S38].
MAJOR DISCUSSION POINT
AI-driven efficiency and precision in finance
Argument 13
AI strengthens compliance and regulatory reporting by automating pattern recognition and real‑time monitoring.
EXPLANATION
Advanced AI tools can continuously scan transactions, identify compliance breaches, and generate timely reports, reducing the burden on human staff and improving regulatory oversight.
EVIDENCE
He notes that compliance functions increasingly rely on automated pattern recognition and that adaptive cybersecurity models respond to emerging threats in real time, illustrating AI’s role in enhancing regulatory processes [35-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven compliance tools that continuously scan transactions and generate reports are highlighted as enhancing regulatory oversight [S19].
MAJOR DISCUSSION POINT
AI‑enabled compliance and regulatory reporting
Argument 14
Trust requires explainability and transparency in AI systems that affect credit decisions
EXPLANATION
Ajay stresses that AI models influencing credit access must be understandable to maintain public trust, warning against opaque black‑box approaches.
EVIDENCE
He states that AI systems cannot function as opaque black boxes, especially when they influence access to credit or flag financial behavior, highlighting the need for transparency and explainability [26-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for glass-box transparency and clear explanations of automated credit decisions are echoed in governance guidelines [S22][S16][S25].
MAJOR DISCUSSION POINT
Trust and transparency in AI-driven credit decisions
Argument 15
Proactive embedded governance is needed to prevent invisible systemic risk accumulation
EXPLANATION
Ajay stresses that AI governance should be proactive, ensuring that systemic risks do not build up unnoticed as AI systems scale, rather than reacting after problems emerge.
EVIDENCE
He states that the objective is not to slow innovation, but to ensure that systemic risk does not accumulate invisibly, highlighting the need for forward-looking governance measures [62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proactive, lifecycle-wide governance is advocated to avoid hidden risk build-up [S17][S18][S19].
MAJOR DISCUSSION POINT
Proactive governance to prevent hidden systemic risk
Argument 16
Embedded AI governance is a strategic imperative, not a regulatory burden
EXPLANATION
Ajay argues that embedding governance into AI systems should be seen as a strategic necessity that sustains innovation, preserves trust, and protects system stability, rather than being treated as an additional regulatory hurdle.
EVIDENCE
He declares that embedded governance is not a regulatory burden but a strategic imperative that ensures sustainable innovation, preserves trust, and protects system stability [129-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding governance is framed as a strategic necessity for sustainable innovation and system stability [S1].
MAJOR DISCUSSION POINT
Embedded governance as a strategic imperative
Argument 17
Operational concentration risk of AI in finance requires diversification and resilience planning
EXPLANATION
Ajay warns that as AI becomes embedded across the financial value chain, it can create concentration risk where a few AI providers or models dominate critical functions, potentially threatening systemic stability. He calls for diversification of AI providers and resilience measures to mitigate this risk.
EVIDENCE
He identifies operational concentration risk as one of the four key risk dimensions of AI in finance and stresses the need for diversification and resilience planning to safeguard continuity [71-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Operational concentration risks from a few AI providers are highlighted as a systemic threat, underscoring the need for diversification [S31][S27].
MAJOR DISCUSSION POINT
Operational concentration risk of AI in finance
Argument 18
Robust data governance—integrity, consent, purpose limitation, and minimisation—is foundational for trustworthy AI in finance
EXPLANATION
Ajay emphasizes that AI models rely on high‑quality data and that data must be governed through strict integrity checks, explicit consent, clear purpose limitation, and minimisation to prevent misuse and protect individuals.
EVIDENCE
He outlines data-governance principles such as integrity, consent management, purpose limitation and minimisation as foundational for financial AI, noting that financial data reflects livelihoods and economic participation [75-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
High-quality, purpose-limited data and consent management are identified as core requirements for trustworthy AI systems [S25][S27].
MAJOR DISCUSSION POINT
Foundational data‑governance principles for financial AI
Argument 19
AI integration with cybersecurity creates both defensive benefits and new attack vectors, requiring anticipatory safeguards against adversarial AI
EXPLANATION
Ajay points out that while AI can strengthen cyber‑defence mechanisms, it also lowers barriers for adversaries to launch sophisticated attacks, so regulators and institutions must anticipate and mitigate adversarial AI threats.
EVIDENCE
He notes that cybersecurity risks are amplified in the AI environment, with AI strengthening defenses but also being leveraged by adversaries, calling for institutions to anticipate adversarial AI and build stronger defences [78-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI can strengthen cyber-defence while also lowering attack barriers, necessitating anticipatory safeguards [S19][S34].
MAJOR DISCUSSION POINT
Dual‑use nature of AI in cybersecurity and need for anticipatory safeguards
Argument 20
Effective AI governance requires interdisciplinary teams and integration into enterprise risk‑management frameworks
EXPLANATION
Ajay argues that governing AI safely cannot be the sole domain of any single function; it demands collaboration among technology, risk, compliance, legal, and business experts, and should be embedded within an institution’s ERM system to strengthen overall resilience.
EVIDENCE
He states that effective governance needs interdisciplinary capability bringing together tech, risk, compliance and legal experts together with business leaders, and that institutions integrating AI governance into their ERM framework strengthen resilience [81-84].
MAJOR DISCUSSION POINT
Interdisciplinary capability and ERM integration for AI governance
Argument 21
AI should be layered onto India’s existing digital public infrastructure to leverage its scale, interoperability, and trust, ensuring AI‑driven services inherit the robustness of systems like UPI and digital identity.
EXPLANATION
The speaker stresses that AI is not arriving in isolation but is being superimposed on the digital foundation that already supports payments, credit, risk management, supervisory frameworks and cybersecurity. By embedding AI within these trusted platforms, the sector can benefit from the proven inclusion, efficiency and reliability of the existing infrastructure.
EVIDENCE
He notes that AI integrates with payment systems, credit and risk-management platforms, supervisory frameworks and cybersecurity architecture that already operate at national scale, describing this convergence as a structural shift [18-21]. He also references India’s decade-long experience of population-scale digital public infrastructure driving inclusion, efficiency and trust, positioning AI as the next layer on this foundation [14-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building AI on top of established digital public infrastructure is recommended to inherit proven inclusion and trust benefits [S5].
MAJOR DISCUSSION POINT
AI integration with existing digital public infrastructure
Sanjeev Sanyal
9 arguments · 156 words per minute · 3299 words · 1266 seconds
Argument 1
Ex‑ante responsibility and “skin‑in‑the‑game” for algorithm creators (Sanjeev Sanyal)
EXPLANATION
Responsibility for AI outcomes should be assigned before deployment, ensuring that creators and operators have direct accountability for any failures.
EVIDENCE
He emphasizes that ex-ante decisions must identify who will be hauled up when things go wrong, creating “skin-in-the-game” for algorithm creators, board members, and senior management [180-185].
MAJOR DISCUSSION POINT
Ex‑ante responsibility and “skin‑in‑the‑game” for algorithm creators
Argument 2
Risk‑based approach cannot predict unknown AI risks; may be too stringent or too lax (Sanjeev Sanyal)
EXPLANATION
Because AI is an emergent technology, risk‑based regulation cannot reliably anticipate unknown hazards, leading either to over‑regulation or insufficient safeguards.
EVIDENCE
He argues that you cannot put AI into any real risk bucket because its emergent nature makes ex-ante assessment almost impossible, risking either overly stringent or overly lax regulation [160-166].
MAJOR DISCUSSION POINT
Risk‑based approach cannot predict unknown AI risks; may be too stringent or too lax
DISAGREED WITH
Ajay Kumar Chaudhary
Argument 3
European risk‑based system risks stifling innovation; US relies on ex‑post tort penalties (Sanjeev Sanyal)
EXPLANATION
The European model’s risk‑based framework may choke innovation, whereas the US relies on post‑incident tort penalties to enforce accountability.
EVIDENCE
He contrasts the European risk-based approach, which could strangle progress, with the US model that uses ex-post tort law and large fines as a deterrent after harms occur [161-168].
MAJOR DISCUSSION POINT
European risk‑based system risks stifling innovation; US relies on ex‑post tort penalties
Argument 4
Need for clear ex‑ante accountability rather than post‑hoc punishment (Sanjeev Sanyal)
EXPLANATION
Regulatory frameworks should define responsibility before AI systems are deployed, avoiding reliance on reactive penalties after failures.
EVIDENCE
He reiterates that ex-ante clarity on who is responsible is essential, so that accountability is built into the system rather than imposed after the fact [180-185].
MAJOR DISCUSSION POINT
Need for clear ex‑ante accountability rather than post‑hoc punishment
Argument 5
Compartmentalized AI reduces systemic bias and concentration risk (Sanjeev Sanyal)
EXPLANATION
Separating AI applications into bounded, compartmentalized environments can limit systemic bias, reduce concentration risk, and improve energy efficiency.
EVIDENCE
He advocates for compartmentalized AI, warning against an “AI of everything” and suggesting that bounded, compartmentalized AI solves problems more efficiently while limiting emergent risks such as bias and concentration [238-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Segregating AI applications and expanding diverse datasets are suggested to limit bias and concentration risk [S23][S31].
MAJOR DISCUSSION POINT
Compartmentalized AI reduces systemic bias and concentration risk
Argument 6
Mandatory explainability audits, similar to financial audits, should be required for high‑impact AI systems
EXPLANATION
Sanjeev proposes that AI models exceeding a certain systemic impact threshold undergo a chartered AI audit to ensure transparency and accountability, with the possibility of shutdown if explainability cannot be provided.
EVIDENCE
He suggests that companies with AI beyond a threshold should have a chartered AI audit, comparable to regular financial audits, and that if they cannot explain their results they should be shut down, mirroring existing audit practices for listed companies [250-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act mandates conformity assessments and audits for high-risk AI, providing a model for mandatory explainability audits [S25].
MAJOR DISCUSSION POINT
Chartered AI audits for high‑risk models
Argument 7
AI‑generated content raises novel copyright ownership questions that require a new judicial framework
EXPLANATION
Sanjeev raises fundamental questions about who owns innovations created by AI—whether it is the prompt author, the data owner, or the algorithm creator—and calls for the development of judicial mechanisms to address these issues.
EVIDENCE
He lists a series of questions concerning ownership of AI-generated innovation, prompting the need for a judicial system to handle such disputes, including who owns the prompt, the data, or the algorithm itself [317-322].
MAJOR DISCUSSION POINT
Need for judicial mechanisms to resolve AI copyright ownership
Argument 8
AI systems should incorporate kill‑switches and compartmentalized “Chinese walls” to prevent systemic failures.
EXPLANATION
Embedding hard shutdown mechanisms and clear separations between AI applications limits the spread of errors or malicious behavior, safeguarding the broader financial system.
EVIDENCE
He argues that AI must have system-switch-off buttons and Chinese-wall style separations to avoid cascading failures, warning against an “AI of everything” approach and advocating bounded, compartmentalized deployments [238-244].
MAJOR DISCUSSION POINT
Kill‑switches and compartmentalization for AI safety
Argument 9
Avoid bureaucratic, overly risk‑based regulation; adopt creative, flexible approaches
EXPLANATION
Sanjeev cautions that a bureaucratic risk‑based system could stifle innovation and argues that regulators need to be inventive and adaptable when governing emergent AI technologies.
EVIDENCE
He says “we need to be very, very careful that we don’t end up with a bureaucratic risk-based system” and calls for creativity in regulation [238-240].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Adapting regulations to local contexts rather than copying foreign models is emphasized as essential for flexibility [S21].
MAJOR DISCUSSION POINT
Need for flexible, non‑bureaucratic AI regulation
Murlidhar Manchala
16 arguments · 0 words per minute · 0 words · 1 second
Argument 1
Supervisory relief for firms that implement robust controls (Murlidhar Manchala)
EXPLANATION
Regulators should adopt a lenient supervisory stance for entities that have put effective guardrails and processes in place, treating compliance as an instrument rather than a punitive measure.
EVIDENCE
He notes that the report suggests entities with strong guardrails should receive a lenient supervisory approach rather than being treated as higher-risk, rewarding firms that have implemented robust controls [204-206].
MAJOR DISCUSSION POINT
Supervisory relief for firms that implement robust controls
Argument 2
Regulators should remain flexible and adapt as AI evolves (Murlidhar Manchala)
EXPLANATION
Regulatory frameworks need to be dynamic, incorporating continuous audits, transparency, and the ability to shut down systems when necessary, mirroring practices used in financial market oversight.
EVIDENCE
He describes a framework that includes audits, transparency, explainability, and mechanisms to shut down systems when they spiral out of control, drawing parallels with stock-market regulation and emphasizing the need for flexible, adaptive oversight [172-176].
MAJOR DISCUSSION POINT
Regulators should remain flexible and adapt as AI evolves
Argument 3
Transparency (“glass‑box”) for customers to understand automated decisions (Murlidhar Manchala)
EXPLANATION
AI‑driven services should be presented as a “glass‑box” rather than a “black‑box,” ensuring customers can see and understand the logic behind automated outcomes.
EVIDENCE
He stresses that customers should have transparent, understandable decisions, describing the ideal as a “glass-box” rather than a black-box system [204-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulators and industry advocates call for glass-box AI that lets customers see decision logic [S22].
MAJOR DISCUSSION POINT
Transparency (“glass‑box”) for customers to understand automated decisions
Argument 4
Risk underestimation must be tackled through proactive governance rather than reactive measures
EXPLANATION
Murlidhar warns that the industry may be underestimating AI‑related risks and argues that only a strong, forward‑looking governance framework can mitigate these unknowns.
EVIDENCE
He states that the industry may be underestimating the risk and that it can be addressed only through governance at the present stage of the technology's emergence, highlighting the need for robust oversight [296-302].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proactive, lifecycle-wide AI governance is recommended to address hidden risks before they materialise [S17].
MAJOR DISCUSSION POINT
Addressing risk underestimation via proactive governance
Argument 5
Sandbox mechanisms should be expanded to provide monitoring and support, not only to address compliance breaches
EXPLANATION
Murlidhar explains that the current sandbox is invoked when entities suspect regulatory breaches, but proposes a new sandbox that also offers monitoring, data, and tooling to help innovators manage risk proactively.
EVIDENCE
He describes the existing interoperable sandbox, used when regulated entities suspect a product violates regulations, and then suggests a future sandbox that would provide monitoring, compute, data, and tools to support innovation while managing risk [407-411].
MAJOR DISCUSSION POINT
Extending sandbox use for proactive risk monitoring and support
Argument 6
Mandatory incident reporting and manual override mechanisms are essential for AI-driven financial services to ensure rapid remediation of failures.
EXPLANATION
Regulators should require firms to establish clear incident reporting procedures and provide manual controls that can intervene when AI systems behave unexpectedly, thereby protecting customers and maintaining trust.
EVIDENCE
Murlidhar explains that once robust governance and processes are in place, there should be incident reporting mechanisms and manual overrides to address any aberrations in AI operations, emphasizing that these safeguards are part of a right process for handling risks [204-206].
MAJOR DISCUSSION POINT
Incident reporting and manual overrides for AI systems
Argument 7
Regulators should allow first‑time lapses without punitive action when robust guardrails are in place
EXPLANATION
Murlidhar proposes that if firms have implemented comprehensive controls, regulators should treat initial failures as learning opportunities rather than imposing strict penalties.
EVIDENCE
He says that supervision should not treat such firms as posing greater systemic risk and that regulators "should allow first time lapse" when appropriate controls exist [204-206].
MAJOR DISCUSSION POINT
First‑time lapse tolerance with strong governance
Argument 8
Supervisory relief should be granted only when firms can demonstrably document robust AI guardrails and governance processes, making the relief an instrument of risk mitigation rather than a blanket leniency.
EXPLANATION
Regulators should require firms to provide clear, auditable evidence of the controls they have put in place before offering a lenient supervisory stance, ensuring that relief is tied to concrete governance measures.
EVIDENCE
He notes that the report suggests entities with all guardrails in place should receive a lenient supervisory approach, and that supervision should not treat them as posing greater systemic risk, implying the need for documented processes to qualify for relief [204-206].
MAJOR DISCUSSION POINT
Conditional supervisory relief based on documented AI governance measures
Argument 9
Regulators should formalise a proactive incident‑reporting and manual‑override framework for AI‑driven financial services to ensure rapid remediation of failures.
EXPLANATION
A mandatory system for reporting AI incidents and providing human‑in‑the‑loop overrides can help contain errors, protect customers, and maintain trust in AI‑enabled financial products.
EVIDENCE
He describes the need for incident reporting mechanisms and manual overrides as part of the right process for handling AI risks, emphasizing that these safeguards are essential once robust governance is in place [204-206].
MAJOR DISCUSSION POINT
Mandatory incident reporting and manual overrides for AI systems
Argument 10
Board and senior management must develop AI literacy to understand system logic, limitations and vulnerabilities.
EXPLANATION
Murlidhar stresses that senior leaders need to be knowledgeable about how AI models work, their constraints, and potential risks so they can oversee deployment responsibly.
EVIDENCE
He notes that potential vulnerability of AI systems requires board and senior management to understand the logic, limitations, and other aspects of the technology, emphasizing the need for deep AI literacy [204-206].
MAJOR DISCUSSION POINT
AI literacy for senior leadership
Argument 11
AI guardrails should be treated as an enabling instrument rather than a punitive compliance checkbox.
EXPLANATION
He argues that the purpose of guardrails is to facilitate safe innovation, acting as a tool that supports firms rather than merely imposing penalties.
EVIDENCE
Murlidhar states that the guardrails should be seen as an instrument, implying they enable innovation while ensuring safety [204-206].
MAJOR DISCUSSION POINT
Guardrails as enabling instrument
Argument 12
Regulatory response should prioritize root‑cause analysis and remediation before imposing supervisory actions.
EXPLANATION
He suggests that when a firm experiences a lapse, regulators should first require a thorough investigation and corrective measures, using the findings to guide any supervisory approach.
EVIDENCE
He mentions that entities performing root-cause analysis and addressing problems should receive a lenient supervisory approach, indicating that remediation should precede enforcement [204-206].
MAJOR DISCUSSION POINT
Root‑cause analysis before supervisory action
Argument 13
Introduce a formal award to recognize financial institutions that demonstrate exemplary AI governance and risk management.
EXPLANATION
Murlidhar proposes that regulators create an award to honor firms that excel in implementing AI guardrails and robust governance processes, thereby incentivizing best practices in the financial sector.
EVIDENCE
He mentions that the report proposes an award for finance that would highlight specific work done in the area, indicating an intent to recognize and encourage strong AI governance efforts [204-206].
MAJOR DISCUSSION POINT
Recognition award for exemplary AI governance in finance
Argument 14
Algorithmic efficiency must not compromise equitable opportunity; AI systems should balance performance with fairness.
EXPLANATION
He emphasizes that while AI can improve operational efficiency, it should not do so at the cost of equity, urging that AI-driven financial services maintain inclusive outcomes.
EVIDENCE
He explicitly states that in financial AI, algorithmic efficiency should not compromise equitable opportunity, underscoring the need to preserve fairness alongside efficiency [204-206].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing efficiency with fairness requires diverse data and bias mitigation, as discussed in fairness-focused AI governance literature [S23].
MAJOR DISCUSSION POINT
Balancing algorithmic efficiency with equitable opportunity
Argument 15
AI should not be automatically classified as a higher‑risk category; risk classification must be proportionate and evidence‑based
EXPLANATION
Murlidhar cautions against over‑reacting by treating AI as a high‑risk area by default, arguing that regulators should apply a proportional risk‑based approach that reflects the actual systemic impact of each AI application.
EVIDENCE
He notes that the regulator should not over-react by labeling AI as a higher-risk area by default, and that such over-classification should be avoided [204-206].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act advocates proportionate risk classification based on actual systemic impact, warning against blanket high-risk labeling [S25].
MAJOR DISCUSSION POINT
Proportional risk classification for AI systems
Argument 16
Regulators should develop a standardized AI incident‑reporting framework to ensure consistent, timely disclosure across financial institutions
EXPLANATION
Murlidhar emphasizes the need for clear, uniform incident‑reporting mechanisms and manual overrides, suggesting that a standardized framework would enable rapid remediation and maintain customer trust.
EVIDENCE
He describes the existence of incident-reporting mechanisms and manual overrides as essential safeguards and implies that a formal, consistent process across entities would improve oversight [204-206].
MAJOR DISCUSSION POINT
Standardized incident reporting for AI‑driven financial services
Praveen Kamat
5 arguments · 184 words per minute · 874 words · 283 seconds
Argument 1
Sandbox experimentation to test governance in a controlled environment (Praveen Kamat)
EXPLANATION
A dedicated sandbox can allow firms to pilot AI models under regulatory oversight, enabling risk‑based testing and iterative refinement before full deployment.
EVIDENCE
He explains that IFSC, being a clean-slate jurisdiction, can host sandboxes where AI pilots are tested, with risk caps and continuous monitoring, allowing regulators to tailor rules based on observed outcomes [192-199] and later [284-291].
MAJOR DISCUSSION POINT
Sandbox experimentation to test governance in a controlled environment
DISAGREED WITH
Murlidhar Manchala
Argument 2
IFSC as a clean‑slate jurisdiction to experiment with AI governance frameworks (Praveen Kamat)
EXPLANATION
The International Financial Services Centre (IFSC) offers a fresh regulatory environment without legacy constraints, making it suitable for innovative AI governance experiments.
EVIDENCE
He notes that IFSC was set up in 2015, built from scratch, and therefore provides “more leg room” and space to experiment without legacy baggage, positioning it as an ideal lab for AI governance [192-199].
MAJOR DISCUSSION POINT
IFSC as a clean‑slate jurisdiction to experiment with AI governance frameworks
Argument 3
Legal and cross‑currency constraints limit sandbox interoperability between IFSC and domestic regulators
EXPLANATION
Praveen points out that while technical sandbox integration exists, differences such as the prohibition of INR transactions in IFSC create legal barriers that hinder seamless experimentation across jurisdictions.
EVIDENCE
He explains that IFSC operates with 16 foreign currencies and does not permit INR transactions, which creates legal incompatibilities that affect sandbox rollout and limit cross-jurisdictional experimentation [399-402].
MAJOR DISCUSSION POINT
Legal and currency barriers to sandbox interoperability
Argument 4
IFSC’s comprehensive regulatory coverage across finance, capital markets, banking, insurance and pensions enables holistic AI governance.
EXPLANATION
Having a single jurisdiction that oversees multiple financial verticals allows coordinated AI policy, risk‑management standards, and cross‑sector supervision, fostering consistent governance.
EVIDENCE
He lists that IFSC has introduced verticals across finance, capital markets, banking, insurance, pensions and ancillary services, highlighting its breadth of regulatory authority [198-200].
MAJOR DISCUSSION POINT
Broad regulatory scope of IFSC supports integrated AI governance
Argument 5
AI governance in IFSC requires a gestation period; rapid scaling may compromise stability
EXPLANATION
Praveen likens building a financial centre to a long marathon, emphasizing that AI governance frameworks need time to mature before full deployment can be safely achieved.
EVIDENCE
He compares building a financial centre to a “45 kilometre marathon” and stresses the need for a gestation period before reaching critical mass for AI initiatives [196-199].
MAJOR DISCUSSION POINT
Importance of gradual development and gestation for AI governance in IFSC
Vikram Kishore Bhattacharya
4 arguments · 175 words per minute · 694 words · 236 seconds
Argument 1
Generative AI lowers attack barriers but does not change core security principles (Vikram Kishore Bhattacharya)
EXPLANATION
While generative AI makes it easier for malicious actors to craft phishing or credential‑stealing attacks, the fundamental cybersecurity controls remain unchanged.
EVIDENCE
He cites a 2025 report showing generative AI has lowered barriers for threat actors, yet stresses that the same principles still apply: multi-factor authentication, strong passwords, regular updates, and system scanning [215-223].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Generative AI makes attacks easier, yet fundamental security controls such as MFA and patching remain essential [S34][S19].
MAJOR DISCUSSION POINT
Generative AI lowers attack barriers but does not change core security principles
DISAGREED WITH
Ajay Kumar Chaudhary
Argument 2
Organizations need an active AI‑in‑the‑loop security posture for faster detection and response (Vikram Kishore Bhattacharya)
EXPLANATION
Integrating AI into security operations can accelerate threat detection, automate scanning, and enable rapid decision‑making, but requires appropriate skill development.
EVIDENCE
He describes using AI in the loop to automate scanning, generate reports, and make timely value judgments, emphasizing the need for upskilling and awareness across banks and regulators [226-232].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding AI into security operations accelerates threat detection and response, as recommended in AI-enabled cybersecurity frameworks [S19].
MAJOR DISCUSSION POINT
Organizations need an active AI‑in‑the‑loop security posture for faster detection and response
Argument 3
Adoption of standards, third‑party audits, and upskilling are essential to manage AI‑related threats (Vikram Kishore Bhattacharya)
EXPLANATION
Compliance with recognized standards (e.g., ISO, NIST), independent audits, and continuous training are critical to mitigate AI‑driven cyber risks.
EVIDENCE
He recommends verification through standards like ISO or NIST, third-party audit reports, and extensive upskilling programs to ensure organizations can handle AI-related security challenges [224-233].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Compliance with standards (ISO/NIST) and independent audits, together with workforce upskilling, are highlighted as key to mitigating AI-driven cyber risks [S25][S19].
MAJOR DISCUSSION POINT
Adoption of standards, third‑party audits, and upskilling are essential to manage AI‑related threats
Argument 4
Financial institutions must shift from a passive to an active cybersecurity posture, proactively integrating AI to detect and respond to threats.
EXPLANATION
Rather than merely defending existing perimeters, firms should embed AI tools that continuously monitor, scan and automate response actions, complemented by upskilling programmes to maintain readiness.
EVIDENCE
He stresses that organizations need to become active participants in cybersecurity, using AI-in-the-loop for faster detection, automated scanning, and rapid value judgments, and calls for extensive upskilling and awareness across banks and regulators [225-233].
MAJOR DISCUSSION POINT
Proactive AI‑driven cybersecurity for financial services
Priyanka Jain
4 arguments · 110 words per minute · 1025 words · 555 seconds
Argument 1
India must craft a balanced, home‑grown AI governance model, learning from US, EU, and China (Priyanka Jain)
EXPLANATION
India should develop its own AI governance framework that draws lessons from the strengths and weaknesses of the US, EU, and Chinese approaches, ensuring a tailored, sovereign strategy.
EVIDENCE
She asks the panel whether India should position itself between the US, EU, and China models, highlighting the need for a balanced, home-grown approach [146-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Localising AI regulations to Indian context while drawing lessons from global models is advocated as a best-practice approach [S21][S25][S27].
MAJOR DISCUSSION POINT
India must craft a balanced, home‑grown AI governance model, learning from US, EU, and China
Argument 2
Emphasis on compartmentalization and bounded problem solving to maintain trust (Priyanka Jain)
EXPLANATION
Focusing AI applications on well‑defined, bounded problems and using compartmentalized architectures can reduce systemic risk and preserve user trust.
EVIDENCE
She remarks that compartmentalization is a great way to de-risk AI and that solving bounded problems helps maintain trust while avoiding the pitfalls of an “AI of everything” [236-240].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Compartmentalised AI deployments reduce systemic bias and concentration risk, supporting trust in AI systems [S23][S31].
MAJOR DISCUSSION POINT
Emphasis on compartmentalization and bounded problem solving to maintain trust
Argument 3
AI governance must embed access and inclusion as core design principles to avoid reinforcing existing inequalities.
EXPLANATION
Policies should require representative training data, periodic impact audits, and clear redress mechanisms so that AI‑enabled financial services broaden participation rather than marginalise vulnerable groups.
EVIDENCE
She emphasizes that inclusion cannot be assumed and must be intentionally designed, calling for representativeness in datasets, impact audits, community-level feedback, and institutional redress mechanisms for automated decisions affecting individuals [85-93].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive AI requires representative data, impact audits, and redress mechanisms to prevent structural bias [S23][S38][S1].
MAJOR DISCUSSION POINT
Inclusion‑by‑design in AI governance
Argument 4
Leverage India’s existing digital public infrastructure as a foundation for AI governance
EXPLANATION
Priyanka points out that India’s proven digital public infrastructure—such as UPI and digital identity—offers a solid base onto which AI governance mechanisms can be layered.
EVIDENCE
She references India’s track record of scaling digital public infrastructure that drives inclusion, efficiency and trust, suggesting AI can be embedded on this foundation [14-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building AI on top of established digital public platforms (e.g., UPI, digital ID) is recommended to inherit proven inclusion and trust benefits [S5].
MAJOR DISCUSSION POINT
Building AI governance on established digital public infrastructure
Audience
1 argument · 187 words per minute · 555 words · 177 seconds
Argument 1
Call for consent‑backed APIs and regulatory participation for data processors to shape interpretation of rules (Audience)
EXPLANATION
Stakeholders propose a regulatory framework for consent‑backed data‑sharing APIs, allowing data processors to engage with regulators and influence rule interpretation.
EVIDENCE
An audience member suggests developing a consent-backed API for data consumption and seeks regulatory definitions that involve data processors in shaping interpretation of rules [354-360].
MAJOR DISCUSSION POINT
Call for consent‑backed APIs and regulatory participation for data processors to shape interpretation of rules
Moderator
2 arguments · 16 words per minute · 145 words · 531 seconds
Argument 1
Framing the summit’s focus on AI governance sets the agenda for subsequent discussion
EXPLANATION
The moderator emphasizes that the summit’s overarching theme is to embed AI governance within existing technology regulation, thereby establishing the context for the panel’s contributions.
EVIDENCE
In the opening remarks the moderator thanks participants and states that the discussion will look at AI governance as an embedded layer of existing technology governance, not as a separate lens [1-2]. Later, after the keynote, the moderator thanks the speaker and notes that the insights will set the context for the panel discussion, reinforcing the need to frame the conversation around AI governance [133-136].
MAJOR DISCUSSION POINT
Setting the agenda for AI governance discussion
Argument 2
AI governance should be embedded as an integral layer of existing technology regulation rather than a separate silo
EXPLANATION
The moderator frames the summit’s purpose as integrating AI governance into the current governance structures for technologies, stressing that AI should not be treated as a stand‑alone regulatory domain.
EVIDENCE
In the opening remarks the moderator says the discussion will look at AI governance as an embedded layer of governance that we already govern technologies with, not as a separate lens [2].
MAJOR DISCUSSION POINT
Embedding AI governance within existing regulatory frameworks
Differences
Different Viewpoints
Effectiveness and feasibility of a risk‑based AI regulatory approach
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal
Governance pillars: proportionality, fairness, explainability, accountability (Ajay Kumar Chaudhary)
Risk‑based approach cannot predict unknown AI risks; may be too stringent or too lax (Sanjeev Sanyal)
Avoid bureaucratic risk‑based system; need creative flexible regulation (Sanjeev Sanyal)
Ajay advocates a risk-based, proportional governance framework as a core pillar for AI in finance [50-51][59-63], while Sanjeev argues that risk-based regulation cannot reliably anticipate AI’s emergent risks and may either over-regulate or under-protect, warning against a bureaucratic risk-based system [160-166][238-240].
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-based AI regulation is embedded in sector-specific frameworks such as the Secure Finance Risk-Based AI Policy for banking, which ties sandbox outcomes to tailored rules [S50], and has been endorsed in multistakeholder forums as a balanced approach to AI-related cyber risks [S62].
Purpose and scope of sandbox mechanisms for AI experimentation
Speakers: Praveen Kamat, Murlidhar Manchala
Sandbox experimentation to test governance in a controlled environment (Praveen Kamat)
Sandbox only for compliance breaches; propose expanded sandbox for monitoring and support (Murlidhar Manchala)
Praveen describes the IFSC sandbox as a venue for pilots, risk caps and iterative regulation, emphasizing its experimental role [192-199][284-291], whereas Murlidhar states that the current sandbox is invoked only when a regulated entity suspects a breach and suggests a new sandbox that also provides monitoring and tooling beyond compliance issues [407-411].
POLICY CONTEXT (KNOWLEDGE BASE)
Regulatory sandboxes are employed to test AI solutions for startups and incumbents, with outcomes informing rule-making [S50]; a global consensus on sandbox principles supports responsible innovation across jurisdictions [S51]; the EU AI Act mandates sandbox provisions that differ among member states, reflecting varied sectoral needs [S52]; financial entry barriers for small innovators are noted as a challenge to sandbox participation [S53]; effective sandbox design requires systematic evaluation and stakeholder involvement [S60]; and sandboxes are highlighted as safe spaces for controlled AI deployment [S61].
Characterisation of AI as core financial infrastructure versus an emergent technology
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal
AI should be treated as a core financial utility and subject to the same resilience standards as other critical infrastructure (Ajay Kumar Chaudhary)
AI is an emergent technology with behaviours that cannot be fully regulated ex‑ante (Sanjeev Sanyal)
Ajay argues that AI is a systemic component of financial infrastructure requiring the same standards of resilience and accountability as any critical utility [44], while Sanjeev emphasizes AI’s emergent, unpredictable nature, stating it cannot be treated like traditional infrastructure and resists full ex-ante regulation [242-245][246-248].
Impact of AI on cybersecurity fundamentals
Speakers: Vikram Kishore Bhattacharya, Ajay Kumar Chaudhary
Generative AI lowers attack barriers but does not change core security principles (Vikram Kishore Bhattacharya)
AI integration with cybersecurity creates new attack vectors and requires anticipatory safeguards (Ajay Kumar Chaudhary)
Vikram maintains that despite AI enabling easier phishing and credential attacks, the underlying security controls such as MFA, strong passwords and regular updates remain unchanged [215-223], whereas Ajay points out that AI introduces amplified cybersecurity risks, including adversarial AI, necessitating proactive safeguards [78-80].
POLICY CONTEXT (KNOWLEDGE BASE)
AI introduces new cyber-threat vectors such as deepfake-based social engineering attacks [S54]; ethical guidelines for AI in cybersecurity stress the need to address these novel risks [S55]; AI dramatically shortens attack development cycles, shifting the threat landscape from months to minutes [S56]; at the same time, AI can reinforce critical infrastructure protection through advanced detection and response capabilities [S58]; and IGF discussions call for specific regulatory measures to manage AI-driven cyber risks [S62].
Unexpected Differences
Different regulatory visions for sandbox utilisation
Speakers: Praveen Kamat, Murlidhar Manchala
Sandbox experimentation to test governance in a controlled environment (Praveen Kamat)
Sandbox only for compliance breaches; propose expanded sandbox for monitoring and support (Murlidhar Manchala)
Both speakers are regulators, yet Praveen envisions the sandbox as a proactive experimental space for AI pilots across jurisdictions [192-199][284-291], while Murlidhar sees it primarily as a remedial tool triggered by suspected breaches and only later suggests a broader monitoring role [407-411]. This contrast in purpose was not anticipated given their shared regulatory background.
POLICY CONTEXT (KNOWLEDGE BASE)
While a broad international consensus underpins sandbox use for AI, national implementations diverge, exemplified by varying EU member-state approaches to the AI Act sandbox requirements [S51][S52]; financial and resource constraints for startups shape regulatory design choices [S53]; successful sandbox programmes depend on stakeholder engagement, risk mitigation, and robust monitoring frameworks [S60]; and positive assessments highlight sandboxes as mechanisms for safe AI experimentation [S61].
Whether AI fundamentally changes cybersecurity versus merely lowering attack barriers
Speakers: Vikram Kishore Bhattacharya, Ajay Kumar Chaudhary
Generative AI lowers attack barriers but does not change core security principles (Vikram Kishore Bhattacharya)
AI integration with cybersecurity creates new attack vectors and requires anticipatory safeguards (Ajay Kumar Chaudhary)
Vikram asserts that despite AI-enabled threats, the foundational security controls remain the same [215-223]; Ajay counters that AI introduces novel adversarial threats, demanding new safeguards [78-80]. The divergence in assessing AI’s impact on security fundamentals was unexpected.
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence shows AI reduces the time needed to develop sophisticated attacks from months to minutes, indicating a fundamental shift in cyber threat dynamics [S56]; AI-generated deepfakes create entirely new categories of social-engineering attacks, beyond simple barrier reduction [S54]; IGF deliberations argue that AI introduces unique cyber risks that require dedicated regulatory responses, not just lower thresholds for existing threats [S62]; and ethical discourse reinforces the view of AI as a transformative factor in cybersecurity [S55].
Overall Assessment

The panel shows strong consensus on the need for trustworthy AI, inclusion, and interdisciplinary governance, but diverges on the suitability of risk‑based regulation, the classification of AI as infrastructure, the design of sandbox mechanisms, and the extent to which AI reshapes cybersecurity. These disagreements reflect differing views on regulatory flexibility versus predictability and on the balance between innovation and systemic risk.

Moderate disagreement: while participants align on overarching goals, they propose contrasting regulatory tools and conceptual framings, indicating that achieving a unified AI governance model will require negotiation between risk‑based, ex‑ante, and experimental approaches.

Partial Agreements
All speakers share the goal of trustworthy, safe AI deployment, but differ on the mechanisms: Ajay stresses embedded governance throughout the AI lifecycle [46-48][50-54]; Sanjeev calls for ex‑ante accountability and kill‑switches [180-185][238-244]; Murlidhar focuses on guardrails, transparency and incident reporting as tools [204-207][296-302]; Vikram emphasizes standards, audits and skill development [224-233].
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal, Murlidhar Manchala, Vikram Kishore Bhattacharya
Governance must be built‑in by design, not an after‑thought (Ajay Kumar Chaudhary)
Ex‑ante responsibility and "skin‑in‑the‑game" for algorithm creators (Sanjeev Sanyal)
Guardrails should be an enabling instrument, with glass‑box transparency and incident reporting (Murlidhar Manchala)
Adoption of standards, third‑party audits and upskilling are essential to manage AI‑related threats (Vikram Kishore Bhattacharya)
All agree that AI should broaden financial access, yet differ on implementation: Ajay calls for representative training data and impact audits [42-44][91-93]; Priyanka stresses policy design to ensure access and inclusion [85-93]; Sanjeev proposes bounded, compartmentalized AI to avoid systemic bias [238-244].
Speakers: Ajay Kumar Chaudhary, Priyanka Jain, Sanjeev Sanyal
AI can expand financial inclusion but must avoid reinforcing historical biases (Ajay Kumar Chaudhary)
Inclusion must be designed into AI governance, with representativeness and redress (Priyanka Jain)
Compartmentalized AI solving bounded problems can support inclusion while limiting bias (Sanjeev Sanyal)
Takeaways
Key takeaways
AI must be treated as core financial infrastructure and governed by design, not as an after‑thought overlay.
Four pillars of embedded AI governance were highlighted: proportionality (risk‑based intensity), fairness/non‑discrimination, explainability/transparency, and clear accountability.
Continuous monitoring for model drift, bias, and operational concentration risk is essential throughout the AI lifecycle.
Ex‑ante "skin‑in‑the‑game": algorithm creators and senior management must be held responsible before failures occur.
Traditional risk‑based regulatory models are inadequate for emergent AI; they may be either too stringent or too lax.
A supervisory relief or "safe‑harbor" regime is proposed for firms that implement robust, auditable AI controls and transparent redress mechanisms.
Sandbox environments (especially in the IFSC) are seen as practical venues for testing AI models, governance frameworks, and cross‑regulatory interoperability.
AI can dramatically expand financial inclusion, but only if training data are representative and impact audits with grievance redress are institutionalised.
Sovereign data ownership and a domestic AI stack (chips, cloud, foundation models) are critical for economic and national security; incentives for data centres and home‑grown models were noted.
Generative AI heightens cybersecurity threats but does not overturn fundamental security principles; active AI‑in‑the‑loop defences, standards compliance, and upskilling are required.
India should craft a balanced, home‑grown AI governance model that draws lessons from the US (ex‑post), EU (compliance‑led) and China (state‑led) approaches, emphasizing compartmentalisation and bounded problem solving.
Resolutions and action items
RBI and other regulators to consider a supervisory relief framework that rewards firms with documented AI governance (model inventories, bias testing, continuous monitoring).
Expand the IFSC sandbox to allow interoperable testing across RBI, SEBI, IRDAI, and IFSC for AI‑driven financial products.
Develop a consent‑backed API standard for data sharing, with stakeholder participation from data processors and regulators.
Promote domestic investment in AI infrastructure (data centres, semiconductor manufacturing, cloud capacity) through policy incentives such as tax holidays.
Introduce periodic, independent AI audit mechanisms (e.g., "chartered AI audit") for high‑impact models.
Create clear ex‑ante accountability matrices that assign responsibility for AI outcomes to algorithm developers, data owners, and senior management.
Implement a framework for impact audits and redress mechanisms to ensure inclusive AI outcomes.
Unresolved issues
- How to operationalise a risk-based regulatory approach for AI when many risks are unknown and emergent.
- The legal framework for ownership of AI-generated innovations and copyrighted material (prompt owner vs. data owner vs. model owner).
- Specific mechanisms for cross-jurisdictional data handling between the IFSC (foreign currency) and domestic Indian regulators.
- Detailed standards for AI-related cybersecurity incident reporting and real-time response automation.
- Exact criteria and thresholds for granting supervisory relief or safe-harbour status to AI-enabled firms.
- Procedures for ensuring representative training data and preventing bias in AI models at scale.
Suggested compromises
- Adopt a proportional, risk-based governance model flexible enough to evolve with AI, avoiding both over-regulation and a regulatory vacuum.
- Provide supervisory relief (lighter oversight) to firms that demonstrate robust, auditable AI controls, while retaining the right to intervene if systemic risk emerges.
- Use compartmentalised AI deployments (bounded problem domains) to limit systemic exposure and energy consumption.
- Combine ex-ante accountability (clear responsibility assignments) with ex-post penalties for severe failures, balancing prevention and deterrence.
- Leverage sandbox environments as a middle ground between innovation and regulation, allowing controlled experimentation before full market rollout.
Thought Provoking Comments
I will use the word ‘Mano’ – humanity – instead of ‘responsible AI’ because it captures moral, ethical, sovereign, inclusive and accountable dimensions in a single word.
Reframes AI governance around a human‑centric value rather than a technical checklist, linking policy, ethics and national identity in one concept.
Set a unifying narrative for the rest of the discussion, prompting other panelists to address how AI can be aligned with broader societal goals rather than isolated compliance.
Speaker: Ajay Kumar Chaudhary
Governance cannot be an overlay applied after innovation has scaled; it must be embedded by design into the AI life‑cycle.
Highlights the necessity of proactive, design‑level controls rather than reactive regulation, a theme that recurs throughout the panel.
Provided a foundational premise that guided subsequent debates on risk‑based approaches, sandbox experimentation, and the need for continuous monitoring.
Speaker: Ajay Kumar Chaudhary
Historically, the Europeans dominated the world by taking technologies invented elsewhere (printing press, gunpowder, mathematics). India must engage now or risk losing AI leadership.
Uses a powerful historical analogy to illustrate the strategic risk of technological complacency, shifting the conversation from technical details to geopolitical stakes.
Prompted panelists to consider national sovereignty, data ownership, and the urgency of building domestic AI capabilities, leading to later remarks on data oil and sovereign AI stacks.
Speaker: Sanjeev Sanyal
A risk‑based regulatory system for AI is fundamentally flawed because the technology is emergent and its risks are unknowable ex‑ante; we need ex‑post accountability and clear pre‑defined responsibility.
Challenges the prevailing regulatory paradigm, arguing that traditional risk assessments cannot capture AI’s dynamic nature.
Shifted the tone from prescriptive risk frameworks to a discussion on compartmentalisation, ‘firewalls’, and assigning skin‑in‑the‑game, influencing later suggestions on auditability and liability.
Speaker: Sanjeev Sanyal
AI should be compartmentalised – run in bounded, well‑defined problems with clear ‘switch‑off’ buttons and Chinese walls – rather than an ‘AI of everything’ which could cause systemic failures.
Introduces the concept of modular AI deployment as a safety mechanism, borrowing from financial market safeguards.
Guided the conversation toward practical governance tools such as firewalls, audit trails, and the need for clear jurisdictional boundaries, echoed later by other speakers.
Speaker: Sanjeev Sanyal
GIFT City, being a clean-slate jurisdiction with its own regulator, can act as a sandbox for AI governance, allowing experimentation without legacy baggage.
Proposes a concrete institutional experiment to test governance models, linking policy to a real‑world testbed.
Opened a new line of discussion about sandbox frameworks, interoperability across regulators, and the practical steps needed to scale AI innovation safely.
Speaker: Praveen Kamat
Regulators should offer calibrated supervisory relief – a ‘safe harbour’ – for firms that embed robust controls, model inventories, bias testing and continuous monitoring.
Suggests an incentive‑based regulatory approach that rewards good governance rather than only penalising failures.
Prompted dialogue on balancing enforcement with encouragement, influencing later remarks on awards, glass‑box transparency, and the role of audits.
Speaker: Murlidhar Manchala
Generative AI lowers the barrier for threat actors but does not fundamentally change cybersecurity principles; we need active participation, standards like ISO/NIST, and AI‑in‑the‑loop for faster response.
Adds a security dimension to the governance conversation, emphasizing that existing cyber hygiene remains vital even as AI tools evolve.
Expanded the scope of the discussion beyond governance to operational resilience, leading to consensus on the need for continuous monitoring and skill development.
Speaker: Vikram Kishore Bhattacharya
We need to start thinking about copyright and ownership of AI‑generated outputs – who owns the innovation: the prompt writer, the data source, or the model creator?
Raises a novel legal challenge that has not been widely addressed in the financial AI context, highlighting future litigation and policy gaps.
Shifted the conversation toward intellectual property considerations, prompting the audience to contemplate regulatory frameworks for AI‑generated content.
Speaker: Sanjeev Sanyal
Data is the new oil; India must secure rights to its sovereign data and build the ‘oil rigs’ (data centres) and ‘refineries’ (AI models) to process it, rather than just hoarding raw data.
Frames data sovereignty in economic terms, linking infrastructure policy to AI capability building.
Reinforced earlier points about sovereign AI stacks, influencing the panel’s emphasis on domestic data centres, tax incentives, and the need for a national AI ecosystem.
Speaker: Sanjeev Sanyal (audience follow‑up)
Overall Assessment

The discussion was driven forward by a handful of pivotal insights that reframed AI governance from a technical checklist to a human-centric, sovereign, and legally nuanced endeavour. Ajay Kumar Chaudhary’s ‘Mano’ framing and call for embedded governance set the conceptual foundation. Sanjeev Sanyal’s historical analogy and sharp critique of risk-based regulation introduced strategic urgency and challenged conventional regulatory thinking, prompting the panel to explore compartmentalisation, clear liability, and ex-post accountability. Praveen Kamat’s proposal of GIFT City as a sandbox provided a tangible experimental venue, while Murlidhar Manchala’s safe-harbour suggestion offered a pragmatic incentive model. Vikram Kishore Bhattacharya’s security perspective broadened the scope to operational resilience, and Sanyal’s copyright and data-sovereignty remarks opened new legal and economic dimensions. Collectively, these comments redirected the conversation from abstract policy to concrete mechanisms, such as sandboxing, audits, liability frameworks, and infrastructure investment, deepening the analysis and shaping a multidimensional roadmap for AI governance in India’s financial sector.

Follow-up Questions
How can governance be embedded throughout the AI lifecycle in financial services (from design to post‑deployment monitoring) to ensure fairness, transparency, accountability and proportionality?
Embedding governance is essential to preserve trust, resilience and inclusion when AI systems influence credit decisions, fraud detection and other critical financial outcomes.
Speaker: Ajay Kumar Chaudhary
What strategies can mitigate operational concentration risk in AI infrastructure (semiconductor chips, cloud platforms, foundation models) to protect economic sovereignty and financial stability?
Reliance on a few global suppliers creates systemic vulnerability; diversifying supply chains is crucial for national security and stable financial markets.
Speaker: Ajay Kumar Chaudhary
How can AI literacy and governance capability be built at the board and senior‑management level across financial institutions?
Leadership must understand model architecture, validation, vendor dependence and ethical implications to exercise effective oversight and “skin‑in‑the‑game.”
Speaker: Ajay Kumar Chaudhary
What mechanisms (representative training data, periodic impact audits, community feedback) are needed to ensure AI systems are inclusive and do not perpetuate structural bias?
Without intentional design, AI may exclude marginalized groups, undermining the inclusive goals of India’s digital finance agenda.
Speaker: Ajay Kumar Chaudhary
Given the emergent nature of AI, how can a regulatory framework balance risk‑based oversight with ex‑post accountability without stifling innovation?
Traditional risk‑based models may be too rigid for AI’s unknown risks; a flexible, adaptive approach is required.
Speaker: Sanjeev Sanyal
How should AI systems be compartmentalized (e.g., firewalls, “Chinese walls”) to limit systemic spill‑over and energy consumption?
Compartmentalization reduces the chance that a failure in one AI module cascades across the financial system.
Speaker: Sanjeev Sanyal
How can ex‑ante responsibility (“skin‑in‑the‑game”) be assigned among algorithm developers, data providers and end‑users for AI outcomes?
Clear liability incentives encourage careful design and prevent blame‑shifting after failures.
Speaker: Sanjeev Sanyal
What audit and explainability standards (e.g., chartered AI audit) should apply to AI systems that cross predefined risk thresholds?
Regular, independent audits can detect drift, bias or unsafe behavior before systemic damage occurs.
Speaker: Sanjeev Sanyal
Should regulators provide a safe harbour or calibrated supervisory relief for firms that implement robust AI controls and experience first-time lapses?
Rewarding proactive governance encourages firms to adopt best practices without fear of disproportionate penalties.
Speaker: Murlidhar Manchala
Which cybersecurity standards and verification processes (ISO, NIST, third‑party reports) are needed for cloud service providers and banks to secure AI‑enabled financial services?
Generative AI lowers attack barriers; consistent standards are vital to protect the financial ecosystem.
Speaker: Vikram Kishore Bhattacharya
How can regulators and cloud providers develop active participation mechanisms to stay nimble as AI technologies evolve?
A proactive stance is required to adapt quickly to new threats and leverage AI for faster incident response.
Speaker: Vikram Kishore Bhattacharya
How should India position its AI governance strategy amid divergent global models (US innovation‑led, EU compliance‑led, China state‑led) to ensure access, inclusion and competitiveness?
Strategic positioning will affect India’s ability to capture AI benefits while safeguarding its citizens.
Speaker: Sanjeev Sanyal
What reforms are needed in copyright law to address ownership of AI‑generated innovations and data‑derived outputs?
Clarifying intellectual‑property rights is essential for legal certainty and incentivizing AI development.
Speaker: Sanjeev Sanyal
How can India leverage its sovereign data assets for AI model development while ensuring data rights, privacy and processing capabilities?
India’s massive data pool is a strategic asset; proper rights and processing infrastructure are needed to turn it into AI value.
Speaker: Sanjeev Sanyal (responding to audience)
How can the IFSC sandbox be extended to inter‑operate with other regulators (RBI, SEBI, IRDAI) for cross‑jurisdictional AI pilots?
A unified sandbox would enable broader experimentation and harmonised regulatory oversight across financial sectors.
Speaker: Praveen Kamat; Murlidhar Manchala
What regulatory framework could define consent‑backed APIs for data consumption, giving data processors a seat at the table in rule‑making?
Clear consent‑based data sharing standards are needed to balance innovation with privacy and compliance.
Speaker: Audience member (Aditya) and discussed by Praveen Kamat & Murlidhar Manchala
What are the under‑estimated risks of AI in finance (e.g., systemic model drift, concentration, emergent behavior) that need deeper investigation?
Identifying overlooked risks is critical for designing effective safeguards before adverse outcomes materialise.
Speaker: Murlidhar Manchala; Vikram Kishore Bhattacharya

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.