How the Global South Is Accelerating AI Adoption_ Finance Sector Insights

20 Feb 2026 15:00h - 16:00h


Session at a glance: summary, key points, and speakers overview

Summary

The panel opened with John Tass-Parker framing the shift from “frontier AI” to “institutional AI,” emphasizing that in finance the critical challenge is establishing legitimacy and trust rather than raw model capability [4-10][18-19]. He argued that trust is the business model of financial institutions, and that systems demonstrating reliability, auditability and resilience will be rewarded [5-8][10-13].


Bharat then introduced Suvendu K. Pati of the RBI to discuss India’s regulatory stance on AI in finance [24-27]. Pati explained that the RBI’s approach is to enable responsible AI adoption through a technology-neutral, principle-based framework that focuses on innovation, risk mitigation and enhancing trust [30-36][38-41]. He stressed that liability rests with the regulated entities deploying AI, requiring a “glass-box” transparency where customers are informed of AI interaction and institutions must audit bias, drift and degradation [46-52][184-188]. To support the ecosystem, the RBI runs regular FinQuery/Finteract engagements, conducts surveys, and is developing an AI sandbox to provide data and compute resources to smaller fintechs [280-292].


Terah Lyons highlighted JPMorgan’s long-standing AI deployment across fraud detection, payments, markets and compliance, noting that the sector’s principle-based, technology-agnostic regulation has enabled extensive experimentation [73-80][82-84]. Ashutosh Sharma described AI’s strategic value for India’s fintechs through improved unit economics, the ability to thicken thin credit files using unstructured data, and expanding reach via conversational interfaces, while recommending human-in-the-loop controls and strict data-privacy safeguards [106-118][119-126][127-133]. Harshil Mathur added that AI accelerates processing of large data volumes and that “agentic commerce” – voice-first, multilingual conversational purchasing – can unlock the hundreds of millions of Indians currently excluded from online shopping [134-141][143-166].


Both regulators and industry agreed that deployers must act as custodians of trust, ensuring transparency, auditability and governance, with the RBI focusing guidance on regulated entities rather than model developers [171-188][196-199]. Participants identified key regulatory challenges such as data-residency requirements, limited access to cutting-edge models in Indian data centres, and the risk of LLM hallucinations, which constrain deployment in financial services [300-319][322-329]. The discussion concluded with a shared vision that AI can dramatically enhance financial inclusion by providing language- and voice-based banking, assistive technologies for the disabled, and personalized advisory services to all citizens [340-347][368-371][373-376].


Overall, the panel underscored that building trustworthy, transparent AI infrastructure and collaborative regulator-industry engagement are essential to realizing AI’s potential for inclusive finance in the Global South [11-13][20-21][376-380].


Keypoints


Major discussion points


Trust and legitimacy are the core “currency” for institutional AI in finance.


John highlighted that while model performance is important, the scarce attribute in the sector is legitimacy – the ability to demonstrate reliability, auditability and resilience, which determines whether regulators, boards and customers will adopt AI systems [4-10][12-19].


The RBI’s regulatory philosophy is to enable responsible AI adoption rather than prescribe technology-specific rules.


Suvendu explained a tech-neutral, principles-based approach that encourages innovation, mandates lifecycle governance, and introduces new tools such as an AI sandbox and the “seven sutras” to guide banks and fintechs [28-40][45-48][56-63][65-68][171-188][280-295].


Key AI use-cases in finance are already delivering value: fraud & scams remediation, payments, compliance, underwriting and new “agentic” commerce.


Terah listed fraud detection, payments and compliance as high-impact areas, while Ashutosh and Harshil described how AI can thicken thin credit files, enable voice-first conversational commerce, and automate large-scale data analysis [73-80][84-86][108-118][124-166][260-268][269-274].


Best-practice challenges for fintechs revolve around human-in-the-loop oversight, data privacy, residency requirements and model reliability (e.g., hallucinations).


Ashutosh stressed keeping a human in the loop and adhering to data-privacy guardrails [127-133]; Harshil added that Indian data-residency rules, lack of local compute, and the risk of erroneous LLM outputs are major operational hurdles [300-311][314-322].


A shared vision for the next five years is AI-driven financial inclusion through conversational, multilingual and assistive technologies.


Panelists repeatedly spoke of AI lowering service costs, delivering “voice-first” banking, expanding credit to the unbanked, and embedding language-aware assistants in every person’s pocket [338-346][350-357][368-374].


Overall purpose / goal of the discussion


The session was convened to explore how the financial sector, particularly in the Global South, can transition from “frontier AI” to “institutional AI” by building trustworthy, auditable systems, aligning regulatory frameworks (exemplified by the RBI’s approach), sharing practical use cases, and charting a collaborative roadmap that enables responsible, scalable AI adoption across banks, fintechs and regulators.


Overall tone and its evolution


The conversation began with a formal, measured tone focused on the challenges of legitimacy and governance. As regulators and industry leaders presented concrete initiatives (AI sandbox, principles, use-cases), the tone shifted to optimistic and forward-looking, emphasizing innovation, partnership and the societal benefits of AI. Throughout, the dialogue remained collaborative and constructive, with occasional reiteration for emphasis but no overt conflict.


Speakers

John Tass-Parker


Role/Title: Lead, Policy Partnerships at JPMorgan Chase


Area of Expertise: Policy partnerships, AI governance in finance [S1]


Bharat


Role/Title: Moderator (affiliated with JPMorgan Chase)


Area of Expertise: Finance and AI moderation (not explicitly stated)


Harshil Mathur


Role/Title: Executive at Razorpay (company referenced in his remarks)


Area of Expertise: Fintech product development, AI-driven payments


Terah Lyons


Role/Title: JPMorgan Chase representative (speaker on trusted AI)


Area of Expertise: Trusted AI deployment, risk management in financial services


Ashutosh Sharma


Role/Title: Investor in India’s fintech ecosystem


Area of Expertise: Fintech investment, AI applications in finance [S9]


Suvendu K. Pati


Role/Title: Chief General Manager and Head of FinTech, Reserve Bank of India [S10][S11]


Area of Expertise: FinTech regulation, AI policy and governance in banking


Additional speakers:


– None identified beyond the listed speakers.


Full session report: comprehensive analysis and detailed insights

John Tass-Parker opened the session by noting that public conversation on artificial intelligence often centres on model breakthroughs, speed and capability, but that finance is now moving from a “frontier AI” era to an “institutional AI” era where the hard problem is not raw capability but legitimacy and trust [4-10]. He argued that in financial services trust is not merely a feature but the very business model [5-7]; institutions will only adopt systems they can govern, boards can scale and regulators can supervise [8-9]. Consequently, the sector rewards AI that is reliable, auditable and resilient, and the emerging focus is on the infrastructure that enables model risk management, oversight, explainability, cyber-security and regulatory engagement [12-13][14-19].


Moderator Bharat then introduced the panel and turned to Suvendu K. Pati of the Reserve Bank of India (RBI), asking how India is approaching AI regulation in the highly regulated financial sector [21-27].


Suvendu K. Pati explained that the RBI’s stance is to enable responsible AI adoption rather than impose prescriptive, technology-specific rules. The regulator adopts a technology-neutral, principle-based framework that encourages innovation while managing risk, emphasising “innovation versus restraint” as a regulatory nudge [30-36][38-41][56-63]. A lifecycle-management mindset is required, with liability and accountability placed on the regulated entities that deploy AI, not on the model developers [46-52]. The RBI calls for a “glass-box” approach: customers must be informed when they are interacting with an AI system and must be able to opt for a non-AI alternative [181-188]; institutions must embed audit mechanisms to monitor bias, model drift and degradation [190-193].


To operationalise this philosophy, the RBI has instituted regular industry-engagement programmes such as FinQuery/Finteract, which convene over 2,000 entities and include a survey covering close to 600 entities, complemented by one-hour deep-dive engagements with about 75 entities [280-284][285-286]. Building on the “seven sutras” that have been adopted nationally [56-59] and on its regulatory sandbox, in operation since 2019, the RBI is developing an AI sandbox that will provide access to curated data sets and affordable compute resources for smaller fintechs, thereby democratising AI capability [287-295][295-298]. The RBI also highlighted its own AI model, MuleHunter.ai, which is already deployed in 26 banks and is being rolled out to other entities [300-303]. It encourages self-regulatory organisations to create toolkits and benchmarking services for bias-free, transparent models [294-295].


Terah Lyons, representing JPMorgan Chase, highlighted that the bank has been deploying AI for over a decade across fraud and scams remediation, payments, market analytics and compliance [73-80]. She credited the sector’s principle-based, technology-agnostic regulation for allowing extensive experimentation while maintaining proportionate risk management [82-86]. This risk-aware governance, she argued, can serve as a template for other industries seeking trustworthy AI lifecycle oversight [82-86].


Ashutosh Sharma, a leading fintech investor, described AI’s strategic importance for India’s financial ecosystem. He noted that the Indian credit market, valued at US$2 trillion, incurs 3-5% OPEX (≈US$60-100 billion annually), and AI can dramatically improve productivity and unit economics [106-112]. By leveraging unstructured data, AI can “thicken” thin credit files for the large unformalised segment, enabling more inclusive underwriting [115-118]. He also stressed that conversational interfaces can broaden reach, but best practice demands a human in the loop and strict adherence to data-privacy guardrails [127-136]. AI-led collection agents were cited as a way to enhance outreach while preserving oversight [260-265], and biometric payments in UPI were highlighted as an emerging use case [250-255].


Harshil Mathur expanded on the data-processing advantage, explaining that AI can analyse data at a thousand-fold speed compared with traditional tools, facilitating underwriting, risk management and fraud detection [134-141]. He introduced the concept of agentic commerce – voice-first, multilingual, conversational purchasing – as the next wave that could unlock the 300-400 million Indian UPI users who currently do not shop online [143-166]. Harshil emphasized that AI can dramatically lower the cost of servicing and enable voice-first, conversational experiences for villagers, while still recognising a role for human oversight in many processes [260-275].


Across the discussion, there was strong agreement that AI deployers must act as custodians of trust, providing glass-box transparency, auditability and board-level governance [171-188][196-199][10-13]. All speakers underscored that legitimacy, not merely model performance, is the scarce attribute for AI adoption in finance [4-10][12-19].


Disagreements were mild. Ashutosh stressed keeping a human in the loop and strict data-privacy compliance [127-136], whereas Harshil highlighted the potential of highly automated, voice-first agents to serve underserved villagers, while still acknowledging the need for safeguards [260-275]. A second tension concerned regulatory scope versus practical constraints: Suvendu stressed a tech-neutral, deployer-focused guidance [190-196][184-188], while Harshil pointed to India’s stringent data-residency rules and the limited availability of cutting-edge compute in domestic data centres, which impede the use of foreign large language models [300-307][310-311].


Both regulators and industry converged on a vision for financial inclusion. Suvendu envisaged AI-driven alternate-data underwriting and language-aware conversational banking to bring the unbanked into the formal credit system [340-347]; Ashutosh spoke of “Viksit Bharat”, an AI-led financial ecosystem reaching every citizen [364]; and Terah reiterated that AI could place a personal financial advisor in every pocket, extending services to the poorest and to people with disabilities [368-374][373-376].


The panel identified several action items. The RBI will operationalise the AI sandbox and continue regular FinQuery/Finteract sessions [280-295][295-298]; regulated entities are urged to embed board-level AI policies, audit frameworks and glass-box disclosures [190-196][181-188]; fintechs should adopt human-in-the-loop designs and comply with DPDP data-privacy guardrails [127-136]; industry bodies are called upon to develop bias-assessment toolkits [294-295]; and JPMorgan’s risk-aware governance was highlighted as a model for scaling trusted AI deployments [82-86].


Unresolved challenges remain. Mitigating LLM hallucinations to meet the sector’s near-zero tolerance for erroneous outputs is an open research problem [317-329]; clarifying liability when AI models produce faulty decisions, especially for third-party developers, requires further legal framing [190-193]; reconciling data-residency requirements with the need for cutting-edge models calls for domestic model development or policy adjustments [300-307][310-311]; and defining concrete metrics for fraud-reduction, mis-selling and inclusion impact is still pending [317-329][260-268]. Suvendu also noted that AI is a probabilistic technology, so regulators should adopt a tolerant and differentiated approach when embedding it in financial services [310-313].


In closing, Bharat thanked the distinguished panel, reaffirmed the focus on super-charging AI adoption in the Global South, and highlighted the consensus that trustworthy, transparent AI infrastructure, supported by collaborative regulator-industry engagement, will be pivotal for inclusive finance [376-380]. The discussion left participants optimistic that, within the next five years, AI will substantially lower service costs, deliver personalised “N-of-1” experiences, and unlock a multilingual, voice-first financial ecosystem for billions of underserved users [338-346][350-357][368-371].


Session transcript: complete transcript of the session
John Tass-Parker

Hello everyone, my name is… oh sorry, we’ve got a photographer here now, so we’re going to take our photo. False start, sorry, bear with us. Well, now that we’ve got the most important thing out of the way, we’ll get started. Hello everyone, my name is John Tass-Parker. I lead policy partnerships at JPMorgan Chase, and I just wanted to firstly thank everyone for being here for this very important conversation. When people talk about AI, the conversation tends to focus on model breakthroughs, speed, capability. But in finance, which our wonderful panellists here represent, that’s never been the real question. We’re really moving from this era of frontier AI, in our world certainly, to an era of institutional AI. And in this phase, the hard problem is not actually the capability itself; it’s legitimacy and trust. Financial services is one of the most regulated sectors in the global economy, and yet it’s consistently been one of the earliest to be a part of the global economy and one of the first adopters of AI and all…

technologies. Why? Because in finance, trust is not a feature. It’s actually the business model. Institutions only absorb systems they trust. The C-suite can only scale what their boards can govern. Regulators can only enable what they can supervise. And increasingly, those that can demonstrate reliability, auditability, resilience, not just model performance, will be the ones that are rewarded. The more important story is coming into focus in rooms like this. It’s the infrastructure enabling institutional AI: model risk management, oversight, explainability, cyber security, regulatory engagement. Finance has had to learn how to deploy these incredibly powerful systems inside real-world guardrails. And that’s why conversations like this matter beyond the door. And that’s why this conversation, frankly, not only matters for our financial and banking sectors, but also beyond that.

If we want AI to drive productivity for small business, for farmers, for teachers, for local government, for state government, for international, across the global south, then trusted deployment is what unlocks it. Capability is increasingly being commoditized. It’s the legitimacy that is the scarce attribute here. Today’s discussion is about how we build systems that institutions will actually absorb and how finance can help shape a framework for responsible, scalable adoption. With that, I’m delighted to hand it over to Bharat to set the broader context for how we think about safe and trusted AI globally.

Bharat

Thank you, John. It is my honor to moderate this discussion with a truly distinguished panel. So without further ado, let me just jump straight into it. Capitalizing the artificial intelligence moment for finance: the financial sector, as we all know, is one of the most regulated sectors in our country, India, and in most parts of the globe. So I think it’s appropriate to turn to the regulator from India, Mr. Suvendu Pati from RBI, who’s to my right. Suvendu ji, the financial sector has been one of the earliest adopters of AI, despite being one of the most regulated sectors, as I mentioned. Given this dichotomy, how is India approaching AI regulation in finance?

Suvendu K. Pati

Yeah, thank you, Bharat, and thank you, everyone, for having me here. I would begin by saying that “regulating AI” is not exactly phrasing I would entirely agree with; I would say that we are here to sort of enable responsible adoption of AI in the financial sector. That would be the overall approach to this technology, I would say, as the Reserve Bank of India understands it. And why would I say that? It is clearly that we are recognizing the potential of this new technology, although it’s not very new in that sense, but it has really come to the limelight over the past five years. And that’s because, you know, data is one of the key ingredients which it thrives on.

And we had constituted an external expert committee, of which I was a member, to look at this sector and look at this technology, how it can be embedded into the financial services segment. So in our approach, when we looked at it, you know, we wanted to be slightly more nudging towards enabling innovation in some sense. And unless we play around with this technology and experiment enough, you would not ever utilize the full potential of it. So basically it is concentrated towards, you know, innovation, enablement, as well as risk mitigation. The risks that have been talked about, bias, accountability, auditability, explainability, these are pretty well known. And these need to be managed in a way so that ultimately we come out with the principle of enhancing trust, which is also a fundamental attribute of the financial sector.

And in terms of regulation, the Reserve Bank’s approach has been largely tech-neutral. It’s tech-agnostic in some sense, because most of the time, you know, new technologies, new things would keep evolving. But, for example, safety or consumer protection, not doing consumer harm, is a good stated objective to pursue irrespective of what technology you adopt. Similarly, on IT services, outsourcing guidelines, on, you know, managing concentration risk, there are already existing guidelines which do provide the guidance to the regulated entities like banks and NBFCs on how they manage their affairs. So in some sense, the consumer protection guidelines also do cover some of the safety aspects that we would generally talk about. So in some sense, there is a regulation which is in place.

There is guidance which is already in place. It’s only that, because of this transformational technology, there is a need to look at it from a new technology lens, and any additional, incremental guidance that is needed should be provided. And that’s the precise point we have come out with in this report. And one of the things that we expect institutions to go forward with is that the entire lifecycle management of AI should be a thought process. The institutions need to look at the liability and accountability framework in a much different way. Our expectation is that customers need to be protected in all cases. So it’s not a question of the model developers; it’s about the model deployed by the entities rather than the model developers.

The responsibility should rest with the model deployers, which are the regulated entities in this case. And therefore, there are three or four additional dimensions which need to be looked at in terms of supervision, in terms of the internal audit assurance framework. How do you audit, or how do you validate, or improve your product approval process to capture the additional incremental risks on account of AI? So these are some of the additional things that we are looking at to provide some nudge. And one principle that we had come out with: within the report, there are seven principles or sutras that the report talks about, and these have been adopted. I’m happy to report that these have been adopted by the government of India for implementation across sectors.

So these are generic principles and they have found acceptance. So one of the principles that we have talked about there is innovation versus restraint. Everything else remaining constant, entity should prioritize innovation rather than restraint. So that is a nudge. That is an innovation enablement or a nudge that we are trying to give to the sector. They should feel comfortable with this. So our whole approach is optimistic. We want people to experiment, adopt it responsibly, but think creatively in terms of liability framework, revisiting the accountability framework, have a board governance policy in place, and improve their internal systems and processes to give the comfort to not only the people, not only their own set of employees, but to other stakeholders.

about this new technology. All said and done, this is a probabilistic technology. There are bound to be some mistakes here and there. So we need to have a very tolerant and differentiated approach when we embed this into the financial services where people’s money is involved. I will stop here, but we’ll talk something more later.

Bharat

Thank you, Suvendu ji, for that insight. If I could now turn towards the global view, our employer J.P. Morgan Chase is one of the world’s largest deployers of artificial intelligence. Terah, in terms of trust, what are some of the most impactful use cases trusted AI is being leveraged for in finance in your purview?

Terah Lyons

We joke that we shouldn’t worry about AI until we figure out AV. So I guess this is a perfect example of that. Thanks for the question, Bharat. I think maybe the first thing to say about this, and this probably isn’t news to this room especially, but AI has been used in finance in deployed settings for over a decade. And at JPMorgan Chase, we’ve been using it, spanning use cases across our bank, starting first with the era of analytic tools, moving into machine learning capabilities, now in the direction of large language model deployment and sort of looking directionally towards the era of agentic capabilities and beyond. And spanning all of those, I think the most impactful use cases that we have seen, certainly in fraud and scams remediation, which is just a huge priority for the entire sector.

Payments, there’s some really exciting applications, and in markets as well. And honestly, in compliance use cases for us too, just given the focus that we have on ensuring that we’re being compliant with our regulatory requirements. I think I also just want to pick up on a couple of things that were previously mentioned that I think are worth underscoring. And one of those points was the point that you made, Mr. Pati, about one of the strengths of the financial sector regulatory approach being the principles-based, technology-neutral approach that our regulators have taken. And I think it has allowed banks to experiment to a wide degree with the types of techniques that I just talked about.

Well, thinking about the proportionate risk of each one of those use cases as we are deploying. So I think that’s been really key. And I think the second point to underscore that you had mentioned previously, which I think was a really good one for us to address as well, is that there are, I think because of the strength of the financial sector’s approach to AI governance, really useful lessons that can be exported from this sector in considering questions of oversight and regulatory control. That speaks to the sutras that you mentioned being adopted more widely across the economy in the RBI report, which I think are really well aligned to wider consideration just beyond, you know, the banking

Bharat

Thank you, Tara. And I think now we move to the more important issue of putting money into this particular industry. Ashutosh is one of the leading deployers of finance in India’s fintech ecosystem. What makes AI so strategic in your view for the sector? And what are some of the best practices you see being adopted by fintechs to build particularly trust in AI?

Ashutosh Sharma

Super. Thank you so much for having me here. I think over the last two, three, four days, folks in the room have probably attended 5, 10, 15 such sessions, maybe more. And I think if there’s one takeaway that you have taken with you, it is that AI is going to change almost everything. And so it will the financial services sector. I think this is equally, in fact, more importantly applicable to India in a bigger measure than anywhere in the world. And the reason I would say is threefold. The first is unit economics. Let’s take an example. The Indian credit market is $2 trillion in value. We spend anywhere from 3% to 5% on OPEX. Just on OPEX, we invest $60 to $100 billion a year.

And what AI can do strategically to improve productivity, and therefore to make these businesses much healthier, is only the beginning of the journey we are taking. I think the second point of strategic importance is risk. A large section of our economy in India is unformalized. What I mean is that in credit parlance, it’s called a thin-file issue, which is that for a large section of society, we don’t have enough data points, enough metrics to make the file thick enough for you to underwrite them. Now with AI, because of the technology’s ability to use unstructured data, you can actually very quickly and in a very cost-effective way make that thin file a thick file.

So again, I think underwriting risk for a large section of society in India will be possible now, with this. I think the last, not the last, one of the more important other points is reach. Buying a financial product is not like buying a shirt on Myntra or ordering food on Swiggy or ordering a saree on Meesho. This is a complex product. It needs engagement. The app or whatever platform you’re using asks you a bunch of questions before you even decide. Today, again for a large section of Indian society, it’s very hard to engage with that app. It’s complex. Now imagine a world tomorrow where you can speak to that app. And therefore now that enables reach of financial products, financial services to again a very large section of society.

So I think it’s extremely, extremely strategic from that standpoint. Also, on best practices, look, we are too early. I mean, we can only talk about practices; whether they are best or not, only time will tell. And look, because we are early, and because of what sir said, this is a high-impact transaction for anyone, a financial services transaction, and therefore having a bot run a bank, I think, is not advisable. So one of the practices that good fintech companies are using is keeping a human in the loop: the technology can prepare a file, but in the end it’s a human who decides. The second thing, again, is data. While data is of primary importance in the AI world, this is a lot of sensitive data that you as a fintech or financial services product provider have. So ensuring at all times that you are following the DPDP guardrails, I think, is again something important. This is just a start; we’ll evolve, but I think it’s a good thing that we’re following the

Bharat

Thank you, Ashutosh. Turning now to the person who’s actually deploying the money, which is Harshil. That’s a pointy edge. Do you really believe that this is AI’s big moment in finance? I gather at Razorpay, you are rolling out AI-based payment solution models. How do you think this will transform the payments landscape?

Harshil Mathur

First of all, just from a back-end usage, like my colleague spoke about, I think finance typically deals with large volumes of data. Large volumes of data are generally harder for humans to really skim through. We always have to use machines and software to run through it. AI makes that job much, much easier. Anywhere where large volumes of data have to be interpreted, inference has to be drawn, I think you need systems to do that. AI is a system that allows you to do it at far more data points than was possible in older systems. You can only do so much analysis on Excel sheets or other software, but with AI you can do 1000x more.

So I think just this advantage, and things like underwriting and risk management and identifying fraud and multiple things that the finance ecosystem has to do, becomes increasingly important. So I think that’s why finance has been one of the earliest adopters, because it’s just natural that the system is so much better than the previous systems. Coming to payments, I think one of the things that we’ve done is we’ve taken a very early bet on agentic commerce, and the reason is fairly simple: there are 300 to 400 million Indian consumers who are on UPI today, on digital payments today. Less than 200 million of those actually do shopping online. But if you peel it even further, and this is based on data that we see at Razorpay, less than 10 million of those users do 70% of all commerce in India.

Just 10 million in a country of a billion and a half do 70% of all commerce online. And that’s because, like he said, the commerce systems that we have built so far are not natural to most people in India. So we’ve built apps, we’ve built all the accesses available, but while the access is there, the accessibility is missing. Because Indians don’t buy stuff the way Americans do. The way we have built our apps is the American shop: it’s like a supermarket. Everything is available. You pick and choose yourself. Indians shop at retailers, where you go and talk. You say, hey, I want to buy this. He tells you, hey, why don’t you buy this, and so on.

We are conversational in commerce. And that's why the app ecosystem we have so far has only penetrated 10 million, maybe 15 million, users. The rest of India needs conversations. Take travel as an example: there are OTAs available everywhere, yet $50 billion of travel is purchased through agents on the ground, because people want to talk before they make a booking. 95% of insurance in India is sold through offline brokers. There's PolicyBazaar and so many online brokers available that will give you far cheaper insurance and will not mis-sell to you, but people still trust their local insurance broker. Because Indians want to converse before they buy. They want to ask 20 questions about what they're buying, and that's hard to do in the apps that we have so far.

And I think agentic commerce is the next wave which will unlock the next form of commerce for the next billion people who, in spite of all the apps being available, are not really shopping online, not really consuming online. They may be paying their bills online, but that's it, just because they don't want to stand in line. Everything else they're still doing through offline channels, and if we can bridge that gap through agentic commerce, which is voice-first, multilingual and conversational, I think we can unlock commerce for a large volume of Indians who have not properly come online.

Bharat

Thank you, Harshil. The next angle I'd like to touch upon is elevating deployers as key custodians of trust. Suvenduji, the RBI has traditionally been ahead of the curve in comparison to some other sectors due to key initiatives you've promulgated, such as the FREE-AI Committee and its very progressive policy recommendations. If I may ask, is there a distinction in your approach for regulating AI developers and deployers?

Suvendu K. Pati

See, under the remit of the mandate given to the Reserve Bank of India, under the Reserve Bank of India Act or the Banking Regulation Act, we can regulate only the regulated entities, like banks, non-banking financial companies, fintechs and so forth. Model developers would strictly fit into the IT or technology companies, so under our official mandate we really cannot regulate or prescribe rules for them. What we are looking at is the deployment point of view. So our guidance, and I would refrain from using the word regulation in this context, is towards the deployers, which are the regulated entities.

Some of these measures are already in place through various guidelines; I have talked about IT outsourcing, third-party dependencies, and also customer engagement and things like that. What we are looking at is how the regulated entity stays accountable. Once the regulated entity is providing a service to a customer, it is the complete responsibility of the entity to ensure transparency and accountability in the way the customer engages with an AI system or service. If I may loosely put it, a black box is typically what is associated with AI systems: you really do not know what happens inside, and a result is produced.

But as far as the regulated entity's dealings with customers are concerned, we would like this to be not a black box but a glass box. Customers should know exactly what they are getting. When they engage, they should be clearly told upfront that they are engaging with an AI system, and if they choose, they should have the freedom to opt for a non-AI-based engagement. That is the transparency side. Similarly, for accountability, the institution should devise its audit systems to capture the incremental risks arising out of AI. How does bias get removed? Is there model drift? Is there model degradation? Does it get addressed periodically?

Regulated entities need to put those kinds of checks and balances in place as part of their board policy and oversee the implementation, along with things like understandability by design, where the process itself ensures implementation. These are some of the things we have talked about. And over a period of time, we would like this to get addressed, refined, embedded and implemented across their processes.
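The periodic bias, drift and degradation checks described here can be sketched as a small monitoring routine. Below is a minimal, illustrative example using the Population Stability Index (PSI), a common drift metric for credit and fraud models; the bin count, thresholds and alerting rule are assumptions for illustration, not anything the RBI prescribes.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline (training-time) score
    sample and a live score sample. Rule of thumb often quoted: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 significant drift."""
    # Bin edges come from the baseline distribution's percentiles
    edges = np.unique(np.percentile(expected, np.linspace(0, 100, bins + 1)))
    # Clip both samples into the baseline's range so nothing falls outside
    expected = np.clip(expected, edges[0], edges[-1])
    actual = np.clip(actual, edges[0], edges[-1])
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) for empty bins
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

def drift_alert(baseline_scores, live_scores, threshold: float = 0.25) -> bool:
    """Flag a deployed model for review when its live score distribution
    has drifted away from the baseline beyond the chosen threshold."""
    return psi(np.asarray(baseline_scores), np.asarray(live_scores)) > threshold
```

In practice a check like this would run on a schedule for every deployed model and write its result into the audit trail that the board policy requires, so that drift and degradation are caught and addressed periodically rather than discovered after a failure.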

Bharat

Thank you, Suvenduji. Deployers are also fast emerging as key custodians of trust in the AI ecosystem. And, frankly, large financial services firms such as J.P. Morgan have a responsibility to the global economy to get AI integration right. So how is J.P. Morgan positioning itself in this debate?

Terah Lyons

Well, I think AI is not made useful unless it's deployed, and it can't be deployed at scale without trust and transparency. The way that we're thinking about these questions really rests, again, on the strengths of the culture of risk management and oversight that we have grown into in financial services, deploying technology of all sorts, not just AI, but certainly AI more recently, as I mentioned, and on having a use-case focus on the risks entailed in every single one of our deployments. A lot of the lessons that can be learned from financial services risk management are applicable widely to other sectors, as we've talked a little bit about this afternoon, including AI lifecycle oversight and management, model risk management guidelines and principles, the principles and practices of real transparency and auditability that we've spoken to up here, and many, many others.

And so, as we've spoken to, banks and financial service organizations are uniquely positioned in many ways, given the nature of the data estates that we sit on top of, given the necessity of the business model, given customer demand and market demand and a host of other issues that surround the innovation envelope here. But the risk management practices that we have are a huge strength there, too. So, yes, I would say that that's all really key to engendering trust with customers and making sure that we're doing right by the products and services that we're delivering to them.

Suvendu K. Pati

…providing what information they need to fill in while opening an account, those kinds of summarization use cases may not need to be subjected to a very elaborate degree of scrutiny, risk testing or templated processes. This is what I would feel personally. And just to make this more of a conversation, I'll add one additional point: it's important to understand that the way we're dealing with this is not just about the data, it's about the information that we're getting from the data.

Harshil Mathur

If a large company is competing with a small retailer, say they've opened a new supermarket opposite them, it's hard for the small retailer to compete, because they don't have the intelligence available to the large supermarket in terms of what products to stock, what things to deploy, what marketing ideas to try, and so on. But now you can simply open the ChatGPT app, ask it to prepare a business plan for you, tell it how do I fight this, and it can really help you compete. The advantage of having intelligence on demand balances the scales compared to what was available before, because it reduces the cost of intelligence.

A large company could always afford that intelligence, but now it's available to everyone. A similar example: let's say it's a farmer on the ground who is unable to figure out which crop he should plant this season. I met companies recently who are doing exactly that, deploying AI models to be available to farmers on the ground, so you can ask it and it can tell you information that is generally not available to you. So that's the general side of things. Now, coming to the finance side, one of the biggest problems in finance is mis-selling, or fraud. For example, my dad is 70 years old, and I've told him, hey, if you're making any expensive purchase decision, just give me a call.

I don't know if he's being defrauded or targeted by a digital arrest scam, if he's being sold insurance he doesn't need, or a financial instrument he doesn't need. But AI allows me to put something smarter than me in his own pocket, so he doesn't need to call me now. I taught my dad how to use ChatGPT, and now he opens it up and asks it, in his own voice, I'm going to buy this, should I buy it or not? I can imagine a year or two from now, all of us will have an AI agent that is essentially our assistant.

So when you're shopping for something, it's searching for the best prices online. When you're buying something, it's checking the features: is this the best product, or is something else better? It's doing research on Reddit, doing research on Twitter, telling you, hey, don't buy this, buy that. Or: you're on this website which is clearly mis-selling, which is fraudulent, the price looks too good to be true, so don't buy from here. Having that intelligence available to every person on demand is a massive advantage, and I think the impact on society will be fairly positive. People are worried about frauds happening because of AI.

And that's true in the short term, while the ecosystem is getting prepared. But in the longer term, frauds and mis-selling will go down significantly, because everyone will have an intelligent agent who is extremely smart and can tell things far better than a human can. So I think that can really bring a massive change.

Suvendu K. Pati

Just in case, Harshil, do ask your dad to be aware of hallucinations…

Bharat

Thank you, Harshil. So innovation and commitment are key in any new technology, as we all know. Ashutosh, what are some of the promising business models you are excited about in fintech within the AI space? Which ones do you see gaining more traction in the Global South? And in your view as an investor, do some key gaps still exist which are currently unaddressed and could benefit from an AI solution?

Ashutosh Sharma

I'm always excited about interesting ideas, Bharat. With that said, I think adoption is happening all across the subsectors of financial services, and in subsectors where India has naturally been at the forefront of innovation, payments comes to mind. UPI is a very good example; I think India is leading the innovation wave even with the advent of AI. Right about the time when the Indian e-commerce platforms were getting connected with the large foundational models, that same evening, Indian payments companies were launching products, as Harshil said, that could enable you to buy from within the model, or even within the chat experience you are having in the e-commerce app, Swiggy or Flipkart or whatever.

So within payments, we are at the forefront. Talking generally, the most use of AI I see today is in two areas. One is productivity; this is related to the unit economics point that I made previously, and I think that's happening. But more importantly, also customer experience, and I'll give you two examples. One is the use of AI in UPI: we are now moving from an OTP world to a biometric world, wherein just using your biometrics you can make a payment. In part, that is enabled by AI. And imagine how much nicer the customer experience will be, rather than waiting for something to come to you.

In lending, almost 60 to 70% of collections for the first 30 days have now moved to an AI-led agent. As humans, we get irritated calling 20 people all day; by the end of the day the human agent is upset, the customer is upset, and the collection is not happening. Whereas an AI agent can be empathetic. The agent can call you and can remember: this is the time when Ashutosh is free, let me call him then. So we are seeing a lot of movement in the customer experience domain as well. As for gaps, one thing where I feel India is slightly behind is that the West has probably 50 to 60 years of customer data, whereas in India UPI, credit cards, all of that is a new phenomenon. There is no easy answer for how we get to the levels of underwriting the West may enable with AI; that availability of multi-cyclical, deep data is missing. We may have data on hundreds of millions of customers.

But the depth of that data is something that I think we need to work on.

Bharat

Well, as they say, data is the new gold, so you need to keep as much of it with you as possible. And that's going to be challenging for a country of a billion and 400 million. So, Suvenduji, what engagement pathways is the RBI using to engage and partner with the industry to promote AI adoption in finance? And in the Indian startup ecosystem, are there any specific initiatives to promote AI adoption, and how can the banking sector support this diffusion?

Suvendu K. Pati

Yes, good point. First of all, during the last couple of years we have had multiple engagements. In fact, we have scheduled monthly engagements with fintechs, titled FinQuery and Finteract. These events take place at very regular intervals, across cities and through a hybrid channel as well, and roughly 2,000-plus entities have engaged with us in the last one and a half years. Specifically on AI, we did a dipstick survey across close to 600 entities, including banks and NBFCs, and a deep engagement of about one hour each with more than 75 entities, to understand their adoption, what areas they see for potential implementation, and what challenges they are witnessing.

So there is constant engagement. After the FREE-AI Committee report was released on our website in August, we have had around three rounds of consultations with various stakeholders, including fintechs, to take their inputs on board. It's a continuous process. I would also draw attention to the regulatory sandbox framework, which has been in place since 2019; entities are welcome to partner with us and experiment under the regulatory sandbox whenever they require any regulatory dispensation or relaxation. And as articulated in our recommendations, one of the key constraints that we see, especially for the smaller fintechs, is the lack of access to affordable compute infrastructure, as well as the lack of access to data on which they can innovate and build models. So this is on top of our mind: we are committed to designing and operationalizing what we would call an AI sandbox. It is not exactly a regulatory sandbox, but it will provide access to data and compute, with the overall aim of democratizing AI across smaller institutions. A bank like J.P. Morgan or State Bank or HDFC may have enough data, bandwidth and resources to build their models, but what about the smaller fintechs and other entities?

So with that vision, we will be operationalizing the AI sandbox, which would give these players access to those resources to innovate. On top of that, we ourselves are building models like MuleHunter.AI, which is already implemented across 26 banks and is getting implemented across other entities as well. This engagement is a continuous process, and we would like them to partner with us, submit proposals, and work with us. We also expect the industry bodies, like the self-regulatory organizations (one has already been recognized), to come up with toolkits or benchmarking services against which AI models can test themselves and see whether they are bias-free and meet the expected transparency standards.

So the expectation is that the fintech industry itself comes up with those kinds of standards, benchmarks and toolkits to support innovation.

Bharat

Thank you. As we all know, regulatory engagement is critical to promoting innovation. So, Harshil, for a company such as yours, what are the key regulatory challenges you are facing in the deployment of AI in finance, and how does your engagement with government and regulatory bodies address these? And do you see any public-private partnership model which could help take the industry to the next level?

Harshil Mathur

See, I think the core aspects of regulation, as sir said, generally don't go into technology or which technology to use. There are general principles of regulation, and you can use any technology to apply the same principles. In most cases we have been fairly successful in deploying AI models while meeting the requirements of regulators. One of the few areas where it sometimes becomes a challenge is that we have a very strong data residency requirement in India, and rightly so, while a lot of AI models coming from the West don't meet India's data residency requirements today. And the open-source models are largely coming from China, which makes them harder to deploy.

So we don't have enough deployment of the cutting-edge models in India data centers today, and that sometimes delays deployments, because as a regulated company we can't really use them. The good part is that three language models from India were announced at the AI summit today. That can be a good way forward for financial companies in India who want to deploy models within India data centers and within Indian boundaries: at least those models are available, and that can be a starting point. And we are hoping that the global companies will bring some of their cutting-edge models to India data centers as well, so they can be deployed.

So that's one challenge, on the infrastructure itself: the cutting-edge model infrastructure is not available. We can use those models for coding and for multiple internal purposes, but we can't use them for anything that touches customer data or PII until they're deployed in India data centers, and hopefully that is going to change. The second aspect, related to it, is that as a financial company the biggest challenge is controlling where the data goes and where it flows out. AI models, as somebody said earlier, are a black box.

Once the data enters, you don't know where it comes out or when, and drawing clear boundaries on that is hard. That is one big challenge, specifically with LLMs; there are other forms of AI where this works fine, because there are specific, targeted models you can apply where those guardrails are available. LLMs just don't have guardrails in terms of where data goes in and where it comes out. The third aspect is hallucinations. For anything to do with financial data, trust is very, very critical. I'm okay if the system fails 10% of the time, but it should not be wrong 10% of the time. It's okay if the system says, hey, I can't do this analysis.
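The data-boundary challenge Harshil describes, making sure PII never leaves the regulated entity when a prompt is sent to an externally hosted model, is often handled with a redaction layer in front of the LLM call. A minimal sketch follows; the regex patterns are illustrative assumptions covering Indian-style identifiers (PAN, Aadhaar-length numbers, phone, email), not a complete PII detector, and a production guardrail would use a vetted library.

```python
import re

# Illustrative patterns only; real deployments need a vetted PII detector.
PII_PATTERNS = {
    "PAN": re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"),       # e.g. ABCDE1234F
    "AADHAAR": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),   # 12-digit ID
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b[6-9]\d{9}\b"),                # Indian mobile
}

def redact(text: str) -> str:
    """Replace detected identifiers with typed placeholders so raw PII
    never crosses the regulated entity's boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def safe_llm_prompt(user_text: str) -> str:
    # Only the redacted text would be forwarded to an externally hosted model.
    return redact(user_text)
```

The design point is that the guardrail sits entirely on the deployer's side: whatever the external model does with its inputs, nothing identifiable was in them to begin with.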

But if it gives a wrong analysis and you use it as a source of truth and act on it, and then you deliver that information to the customer, say that a commitment is successful when it actually isn't, even if it happens 1% of the time, it creates a massive issue for you. I think that's the third piece, and it's less to do with regulation; it's just what is expected of financial players: you can't be saying something that is not true. And LLM models by default can say things which are not true, and even if that happens in 1% or 2% of cases, it can become a massive liability risk for financial companies.

So those are the three big aspects, and there are solutions available to some of them. The first is fairly easily solvable, and global companies will probably solve it, or Indian sovereign models will get there. The second is partly solvable, because you can put guardrails around the system and use the right kinds of AI models where that is possible. The third is a fundamental problem of how LLM models work, so that part is going to be harder to solve. Yes, there are newer models which hallucinate less, but as I said, even if a model hallucinates less than 0.1% of the time, I still can't deploy it until I'm certain about it.

And I think that part will require us to either use alternate means or wait for LLM models that can solve that problem.
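Harshil's distinction between failing (declining to answer) and being wrong maps onto what the literature calls selective prediction: answer only above a confidence bar, otherwise abstain. A minimal sketch is below; the confidence scores, threshold and `model` callable are illustrative assumptions, since real LLMs do not expose a reliable confidence number out of the box.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Verdict:
    answer: Optional[str]   # None means the system abstained ("I can't do this")
    confidence: float

def selective_answer(
    query: str,
    model: Callable[[str], Tuple[str, float]],
    min_confidence: float = 0.99,
) -> Verdict:
    """Pass the model's answer through only when its confidence clears the
    bar; otherwise abstain, so a wrong answer never reaches the customer."""
    answer, confidence = model(query)
    if confidence < min_confidence:
        return Verdict(answer=None, confidence=confidence)
    return Verdict(answer=answer, confidence=confidence)
```

Raising `min_confidence` increases the abstention (failure) rate while pushing the wrong-answer rate toward zero, which is exactly the trade-off described above: a system that fails 10% of the time is acceptable in finance, a system that is wrong 10% of the time is not.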

Bharat

Ashutosh, you're investing in companies across the spectrum, not only in finance but in other areas which are using artificial intelligence. In your view, what are some of the key regulatory gaps highlighted by your portfolio companies in the fintech sector? And going forward, what progressive regulatory measures can the government consider to promote this more smoothly?

Ashutosh Sharma

I think the RBI has in general been very progressive, not so much a regulator as a guide in this situation. The seven sutras have been really helpful for people to understand at least what the direction of travel is. Also, one acceptance we need to make is that just as we are all learning about AI and its use cases, the regulators are also learning, and things are changing fast; therefore the end state of what the regulation looks like may be very different. I have a slightly different policy ask, adding to what Harshil was saying: compute is a bigger problem for my companies than regulation, and so are researchers. I don't think we can solve these through regulation or policy, but since you were asking what companies struggle with, those two things are the bigger problems at this time.

Bharat

Thanks. I'm conscious of the time, so I'll use my moderator's prerogative to ask one final round of questions to all our distinguished panelists. What is one big bet you would like to take on how AI will transform finance in the next five years? We start with Suvenduji.

Suvendu K. Pati

Okay. I know time is really up, but yes. It's not a bet, it's a wish list rather, and I'm glad Ashutosh has already covered some of it in a very elaborate way. One thing I would like to see is how AI can bring about substantive improvement in financial inclusion: bringing people to formal institutional credit through alternate data analytics and new underwriting models. Bringing them on board will be a big unlock for a country like India. The second aspect I would like to emphasize, which Harshil has also touched upon, is that all our fintech apps are now designed for very, very digitally savvy people.

How do we use AI to bring language-based, voice-based banking, conversational banking and payments? I shouldn't have to fill in a form; I just need to instruct, and that gets translated. So people who are less formally educated but literate and logical-minded, who are already using WhatsApp voice messages, should be able to come on board and, using AI, come into the financial fold. We should also focus more research on assistive technologies. For example, how do we use AI to serve a disabled person, a person who can't see or can't hear, and provide them information and access to financial services in a more efficient manner? These are the areas where this technology is going to play a role, and we would like to see it get to the point where it really bridges the so-called digital divide, which otherwise risks widening.

We should bring it back to that, and AI can prove pivotal there. I would say I am very, very optimistic about this, but a lot of work needs to be done in these areas.

Bharat

Thank you. Harshil?

Harshil Mathur

I completely agree. I think the ability to bring the cost of servicing down significantly, so that you can deliver personalization at an N of 1, at an individual level, can have varied impacts. Typically in India, for example, when HNIs open a bank account, you don't fill a form: a person comes to you, fills the form for you, just asks for the five documents and your signature, and it's done. But the one who needs this most is the villager, because he really can't fill a form, yet he's the one asked to stand in line and fill one. AI can allow us to deliver that experience to the villager on the ground. And I think that is going to be the one biggest change in finance: allowing the cost of servicing to come down drastically, personalization to happen at an individual level, and voice-based interactions to drive it.

And as somebody said earlier, that’s what’s natural to us. That’s what’s natural to Indians, that if we can make it all voice -based.

Bharat

Ashutosh?

Ashutosh Sharma

I think AI-led financial services leading us to Viksit Bharat would be my bet.

Bharat

I think that's an aspiration for all of us. And Terah, as the one lady on the panel, the last word is yours.

Terah Lyons

I would underscore all the answers already provided. The financial inclusion potential, the accessibility potential here is massive. Imagine a world in which we can not just expand the credit envelope, but put a financial advisor in every single person's pocket, something only the wealthiest in society today are able to afford. So I look forward to that world.

Bharat

Thank you.

Suvendu K. Pati

And the last word, if I may slip it in: language. India is a country with diverse languages, and we can leverage AI to play to our strength in languages.

Bharat

Well, I'd like to thank our distinguished panel for a truly enlightening discussion. The topic was supercharging AI adoption in the Global South, and I think many of the thoughts from this panel will go a very long way toward achieving that goal. Thank you very much once again.

https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S30
EUROPEAN COMMISSION — The security of retail payments is a crucial prerequisite for payment users and merchants alike. Consumers…
S31
https://dig.watch/event/india-ai-impact-summit-2026/building-inclusive-societies-with-ai — I think actually his productivity is quite high. The problem is his realizations are not that high. What he’s able to re…
S32
Table of Contents — These productivity gains benefit the entire economy. Investment in information and communications technologies accounted…
S33
AI as critical infrastructure for continuity in public services — Inclusivity of all affected stakeholders creates legitimacy and trust. Transparency, public comment periods and accounta…
S34
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — And the first is around investment. Investment must go beyond innovation. It must flow into the systems that make innova…
S35
Conversation: 01 — This articulates a fundamentally different regulatory philosophy – starting with adoption and gradually adding restricti…
S36
Discussion Summary: US AI Governance Strategy Under the Trump Administration — Regarding US-China competition, Ball emphasized that America should win through superior adoption and development of AI …
S37
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S38
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — Legal and regulatory | Economic Online moderator 1 seeks identification of exemplary AI policies that successfully bala…
S39
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S40
Press Conference: The Future of Global Fintech — Another area of concern is the coordination among regulators. The analysis reveals that 27% of fintechs rated the coordi…
S41
A Fintech future for all? (SOMO) — Another challenge highlighted in the analysis is the aggressive data gathering by fintech companies for profit-making se…
S42
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Key to this trajectory are collaborat…
S43
Artificial intelligence — Inclusive finance, Multilingualism
S44
How to make AI governance fit for purpose? — The Vice Minister highlighted this as a critical governance challenge, noting the urgency of coordinated development whi…
S45
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast a…
S46
Secure Finance Risk-Based AI Policy for the Banking Sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S47
The rise of AI in financial services: balancing opportunities and challenges — According to industry executives, AI is increasingly seen as a game-changer in the financial services sector, offering sig…
S48
Embracing the future of e-commerce and AI now (WEF) — In conclusion, the implementation of advanced technology, particularly AI, in Cambodia’s customs system brings numerous …
S49
European Central Bank advocates monitoring and regulation of AI in finance — The European Central Bank (ECB) has issued a call for increased vigilance and potential regulation regarding the use of A…
S50
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Compute infrastructure and research talent shortages present bigger obstacles than regulatory constraints Data residenc…
S51
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Examples include network slicing implementation, software-defined networks, and qualities of service like reliability, r…
S52
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — Effective regulation requires acknowledging uncertainty about future technological developments while maintaining framew…
S53
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Eltjo Poort: thank you Isadora yeah and thanks for giving me the opportunity to say a few things I there’s a little bit …
S54
FOREWORD — Sweden has taken a different approach and has created a virtual embassy, the Second House of Sweden, in Second Life. But…
S55
Open Forum #60 Cooperating for Digital Resilience and Prosperity — Development | Cybersecurity Effectiveness of existing versus new frameworks Luca expresses frustration that many excel…
S56
Diplomatic policy analysis — Digital divides:Not all countries have equal access to advanced analytical tools, perpetuating inequalities in diplomati…
S57
Report outlines risks and benefits of AI for financial institutions — The Financial Stability Board (FSB), an international institution that makes recommendations concerning the global finan…
S58
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to …
S59
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Different sectors show varying risk tolerance levels, with Ekudden noting that enterprise risk assessment has become “qu…
S60
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S61
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — It is human in the lead, not human in the loop. Well, I think Julie talked about must-wins for Visa. Agentic commerce i…
S62
Agentic AI in Focus Opportunities Risks and Governance — -Enterprise Guardrails and Risk Management: Panelists emphasized the critical importance of implementing robust safety m…
S63
Building the Next Wave of AI_ Responsible Frameworks & Standards — “human in the loop is a first class feature not a failure point … design the system … transition … to a human”[79]…
S64
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Human-in-the-loop governance is essential – accountability cannot be outsourced to algorithms
S65
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — By fostering interaction, investigation, and the exchange of ideas, sandboxes serve as a stepping stone towards implemen…
S66
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — In conclusion, sandboxes are valuable tools for testing and implementing regulatory policies. The Brazil case highlights…
S67
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — “It’s the legitimacy that is the scarce attribute here.” [5] “It’s actually the business model.” [1] “Because in finance, tr…
S68
Secure Finance Risk-Based AI Policy for the Banking Sector — Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to …
S69
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — And the first is around investment. Investment must go beyond innovation. It must flow into the systems that make innova…
S70
Conversation: 01 — This articulates a fundamentally different regulatory philosophy – starting with adoption and gradually adding restricti…
S71
Generative AI: Steam Engine of the Fourth Industrial Revolution? — In terms of regulating technology, it is suggested that focus should be placed on regulating use cases rather than the t…
S72
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S73
Agentic AI in Focus Opportunities Risks and Governance — Absolutely, and hi, everyone. It’s great to be here with you. As you said, for MasterCard, AI is nothing new. We have be…
S74
Banks and insurers pivot to AI agents at scale, Capgemini finds — Agentic AI is expected to deliver up to $450 billion in value by 2028, as financial institutions shift frontline processes…
S75
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S76
A Fintech future for all? (SOMO) — Another challenge highlighted in the analysis is the aggressive data gathering by fintech companies for profit-making se…
S77
Press Conference: The Future of Global Fintech — Another area of concern is the coordination among regulators. The analysis reveals that 27% of fintechs rated the coordi…
S78
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — For example,sustainable financemodels, such as green bonds and ESG-linked financial products, are expected to grow signi…
S79
WS #462 Bridging the Compute Divide a Global Alliance for AI — Ivy Lau-Schindewolf: Sure. Yeah, it’s kind of hard to go after, you know, Elena. And that was a very, very good point an…
S80
World Economic Forum 2025 at Davos — During the Davos 2025 discussions, the topic of governance mechanisms for AI, including monitoring, reporting, verificat…
S81
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — The shift in conversation has coincided with advancements in Artificial Intelligence
S82
Technology Rewiring Global Finance: A Panel Discussion Summary — Forbes opened by identifying a disconnect at Davos: whilst some conversations focused on concerning geopolitical and mac…
S83
Shaping the Future AI Strategies for Jobs and Economic Development — The emphasis on collaboration over displacement provides a framework for managing workforce transitions while capturing …
S84
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S85
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S86
https://dig.watch/event/india-ai-impact-summit-2026/ai-collaboration-across-borders_-india-israel-innovation-roundtable — But just to make my point. So my first question to you is like very, very, and the foundational level is that is like, i…
S87
RBI highlights risks of AI in banking and private credit markets — The increasing use of AI and machine learning in financial services globally could lead to financial stability risks, acc…
S88
Session — Taking into account technological evolutions like artificial intelligence and immersive virtual environments such as the…
S89
Global Enterprises Show How to Scale Responsible AI — Technology regulation should focus on technical standards agreed upon by technologists rather than geography-specific ru…
S90
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In conclusion, generative AI technology has the potential for positive impacts in multiple industries. It enhances commu…
S91
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic Regulation and innovation must work together, not in opposition Regulation vs Innovati…
S92
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — ## Lifecycle Management and User Behavior Matthias Hudobnik: Thanks, Maartin. Hello, everyone. Yeah, it’s a pleasure to…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
John Tass-Parker
1 argument · 122 words per minute · 407 words · 199 seconds
Argument 1
Institutional AI focus
EXPLANATION
John argues that the finance sector is moving from a frontier AI era to an institutional AI era, where the key challenge is not model capability but legitimacy and trust. He emphasizes that trust is the business model for financial institutions and will determine which AI systems are adopted at scale.
EVIDENCE
He explains that while AI breakthroughs are often highlighted, finance needs to focus on legitimacy, auditability, resilience, and governance to earn trust, noting that institutions only adopt systems they trust and that boards and regulators can only scale what they can govern and supervise [4-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on trust, auditability and governance matches calls for auditability in the financial sector [S24] and the principle-based regulatory approach highlighted in [S27]; the context-aware, India-first governance model [S25] likewise ties legitimacy and trust to sound governance.
MAJOR DISCUSSION POINT
Institutional AI focus
AGREED WITH
Suvendu K. Pati, Terah Lyons
Bharat
1 argument · 142 words per minute · 869 words · 366 seconds
Argument 1
Call for Global South focus and regulator-industry partnership
EXPLANATION
Bharat frames the discussion by urging a focus on how AI can be responsibly adopted in the Global South, especially through collaboration between regulators and industry. He highlights the need for partnerships that enable safe AI deployment in finance across emerging economies.
EVIDENCE
He opens the session by noting the significance of financial-sector regulation in India and asking the regulator about India’s approach to AI, then later calls for a Global South perspective and regulator-industry cooperation [21-27][96-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A strengthened role for the Global South in AI decision-making is advocated in the summit call [S16]; multistakeholder regulation perspectives and concerns about cooperation are discussed in [S17]; bridging digital divides through knowledge sharing is highlighted in [S22]; and the Global South finance AI adoption insights directly echo this focus [S1].
MAJOR DISCUSSION POINT
Call for Global South focus and regulator-industry partnership
AGREED WITH
Suvendu K. Pati, John Tass-Parker
Harshil Mathur
4 arguments · 216 words per minute · 2189 words · 606 seconds
Argument 1
Large‑scale data processing for underwriting, risk, fraud
EXPLANATION
Harshil points out that finance deals with massive data volumes that are difficult for humans to process, and AI dramatically speeds up analysis, enabling better underwriting, risk management, and fraud detection.
EVIDENCE
He describes how AI can handle large data sets far more efficiently than manual methods, allowing 1000× more analysis and improving underwriting, risk management, and fraud detection tasks [134-142].
MAJOR DISCUSSION POINT
Large‑scale data processing for underwriting, risk, fraud
AGREED WITH
Ashutosh Sharma, John Tass-Parker
Argument 2
Voice‑first, multilingual “agentic commerce” to unlock mass market
EXPLANATION
Harshil argues that India’s payment landscape can be transformed by agentic, voice‑first, multilingual commerce, which would make online shopping accessible to the majority of consumers who prefer conversational interactions.
EVIDENCE
He explains that only 10 million of 300-400 million UPI users conduct most commerce, and that building conversational, voice-first experiences can bridge the gap for the billions who still shop offline, citing examples from travel and insurance where people rely on agents [143-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential of voice-first, multilingual, agentic commerce in India is documented in the Global South finance report [S1] and the shift from recommendation to action in agentic AI is described in [S26].
MAJOR DISCUSSION POINT
Voice‑first, multilingual “agentic commerce” to unlock mass market
AGREED WITH
John Tass-Parker, Ashutosh Sharma, Suvendu K. Pati, Bharat
DISAGREED WITH
Ashutosh Sharma
Argument 3
Conversational banking as a bridge for low‑digital‑savvy users
EXPLANATION
Harshil emphasizes that Indian consumers prefer talking to a person rather than using app‑based interfaces, so conversational banking is essential to increase adoption among low‑digital‑savvy users.
EVIDENCE
He notes that Indian shopping habits differ from Western ones, describing how people rely on personal brokers and agents, and that current apps are “American-style” supermarkets that lack the conversational element needed for broader adoption [149-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for conversational interfaces for low-digital-savvy users is supported by low-income conversational AI use cases [S21] and the same Global South finance insights on conversational commerce [S1].
MAJOR DISCUSSION POINT
Conversational banking as a bridge for low‑digital‑savvy users
Argument 4
Data residency, compute infrastructure, and model hallucination risks
EXPLANATION
Harshil outlines three major technical challenges: strict Indian data‑residency rules, limited access to cutting‑edge compute infrastructure, and the risk of large language models hallucinating or providing incorrect outputs.
EVIDENCE
He details how reliance on foreign-hosted AI models can conflict with India’s data-residency requirements, the scarcity of high-performance models hosted in Indian data centres, and the difficulty of guaranteeing that LLMs do not produce false answers, which could create liability for financial firms [300-329].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risks of model hallucination and the need for governance are highlighted in AI governance fit-for-purpose analysis [S23] and auditability concerns [S24]; data-protection impact assessment requirements for high-risk processing are outlined in [S18] and [S19]; data residency challenges are mentioned in the Global South finance report [S1].
MAJOR DISCUSSION POINT
Data residency, compute infrastructure, and model hallucination risks
AGREED WITH
Suvendu K. Pati, John Tass-Parker
DISAGREED WITH
Suvendu K. Pati
Terah Lyons
3 arguments · 171 words per minute · 796 words · 277 seconds
Argument 1
Risk‑aware governance and auditability
EXPLANATION
Terah stresses that the financial sector’s risk‑aware governance framework, built on principles‑based regulation, enables trustworthy AI deployment through auditability and oversight.
EVIDENCE
She highlights that regulators’ principles-based, technology-neutral approach allows banks to experiment while managing proportional risk, and that this governance supports auditability and transparency of AI systems [82-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for risk-aware governance and auditability aligns with the auditability and governance recommendations in [S24] and the principle-based regulatory framework discussed in [S27] and [S28].
MAJOR DISCUSSION POINT
Risk‑aware governance and auditability
AGREED WITH
Ashutosh Sharma, Suvendu K. Pati, John Tass-Parker
Argument 2
Importance of principles‑based regulation for experimentation
EXPLANATION
Terah notes that the principle‑based, tech‑neutral regulatory stance in finance encourages experimentation with AI while keeping risk proportional to use‑case importance.
EVIDENCE
She references the regulator’s safety and consumer-protection principles that apply regardless of technology, enabling banks to test AI under a proportionate risk framework [82-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on principle-based, technology-neutral regulation matches the discussions of principle-based regulatory approaches in [S27] and [S28].
MAJOR DISCUSSION POINT
Importance of principles‑based regulation for experimentation
AGREED WITH
Suvendu K. Pati
Argument 3
Fraud detection, payments, markets, compliance
EXPLANATION
Terah lists the most impactful AI use cases at JPMorgan Chase, emphasizing fraud and scams remediation, payments, market applications, and compliance as priority areas.
EVIDENCE
She explicitly cites fraud and scam remediation, payments applications, markets use cases, and compliance workloads as the key domains where AI adds value [78-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The focus on fraud detection and secure payments corresponds with the importance of retail payment security in the financial sector [S30] and the broader auditability requirements for AI systems [S24].
MAJOR DISCUSSION POINT
Fraud detection, payments, markets, compliance
Ashutosh Sharma
5 arguments · 138 words per minute · 1228 words · 530 seconds
Argument 1
Productivity gains and underwriting risk reduction
EXPLANATION
Ashutosh argues that AI can dramatically improve productivity in finance and enable better underwriting, especially for thin‑file borrowers, by leveraging unstructured data.
EVIDENCE
He cites India’s $2 trillion credit market, high OPEX spending, and explains that AI can boost productivity and allow rapid, cost-effective thickening of thin credit files, improving underwriting for large population segments [106-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Productivity gains from AI in finance are quantified in ICT-driven growth studies [S32]; AI’s role in fraud detection and risk management is also noted in payment security literature [S30]; the Global South finance insights reinforce the productivity narrative [S1].
MAJOR DISCUSSION POINT
Productivity gains and underwriting risk reduction
AGREED WITH
Harshil Mathur, John Tass-Parker
Argument 2
Unit‑economics boost and cost reduction
EXPLANATION
Ashutosh highlights that AI can lower operating expenses, thereby improving unit economics for financial firms and making them healthier.
EVIDENCE
He notes that Indian credit OPEX is 3-5% of a $2 trillion market, amounting to $60-100 billion annually, and that AI-driven productivity is only the beginning of cost reduction efforts [106-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cost reduction and improved unit-economics from AI adoption are discussed in productivity impact analyses for the economy [S32].
MAJOR DISCUSSION POINT
Unit‑economics boost and cost reduction
AGREED WITH
Harshil Mathur, John Tass-Parker
Argument 3
Solving thin‑file credit challenges to expand inclusion
EXPLANATION
Ashutosh explains that AI’s ability to process unstructured data can convert thin credit files into thick ones, enabling underwriting for large unbanked populations.
EVIDENCE
He describes the “thin-file” problem in India and how AI can quickly and cheaply enrich data to create robust credit profiles for previously underserved borrowers [115-118].
MAJOR DISCUSSION POINT
Solving thin‑file credit challenges to expand inclusion
AGREED WITH
John Tass-Parker, Harshil Mathur, Suvendu K. Pati, Bharat
Argument 4
Conversational interfaces to reach unbanked/underbanked
EXPLANATION
Ashutosh envisions voice‑enabled conversational apps that let users interact with financial services without navigating complex forms, thereby extending reach to low‑digital‑savvy users.
EVIDENCE
He paints a scenario where users can simply speak to an app to obtain financial products, turning a complex, multi-question onboarding process into a conversational experience [124-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voice-enabled conversational apps for unbanked users are highlighted in the Global South finance report [S1] and in low-income conversational AI case studies [S21].
MAJOR DISCUSSION POINT
Conversational interfaces to reach unbanked/underbanked
AGREED WITH
John Tass-Parker, Harshil Mathur, Suvendu K. Pati, Bharat
Argument 5
Need for human‑in‑the‑loop and data‑privacy safeguards
EXPLANATION
Ashutosh stresses that, despite AI’s capabilities, human oversight remains essential and that fintechs must adhere to data‑privacy regulations such as India’s DPDP framework.
EVIDENCE
He mentions best practices like keeping a human in the loop for final decisions and ensuring compliance with data-privacy guardrails throughout AI development and deployment [127-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Consumer-centric safeguards, transparent disclosure and human oversight are advocated in [S29]; data-protection impact assessment guidance for high-risk processing is provided in [S18] and [S19]; auditability and governance concerns are also raised in [S24].
MAJOR DISCUSSION POINT
Need for human‑in‑the‑loop and data‑privacy safeguards
Suvendu K. Pati
5 arguments · 149 words per minute · 2325 words · 933 seconds
Argument 1
Deployers as custodians of trust, need for glass‑box transparency
EXPLANATION
Suvendu asserts that regulated entities, not model developers, must ensure AI systems are transparent (“glass‑box”) so customers know when they are interacting with AI and can opt out if desired.
EVIDENCE
He explains that while AI can be a black box, regulated entities should provide clear disclosure to customers and maintain transparency, accountability, and audit mechanisms for AI-driven services [181-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The requirement for transparent, glass-box AI systems and clear disclosure aligns with auditability and governance recommendations in [S24] and consumer-centric transparency guidelines in [S29].
MAJOR DISCUSSION POINT
Deployers as custodians of trust, need for glass‑box transparency
AGREED WITH
Harshil Mathur, John Tass-Parker
Argument 2
Enablement‑first, tech‑neutral principles
EXPLANATION
Suvendu describes RBI’s approach as technology‑neutral, focusing on enabling innovation while safeguarding consumer protection and existing IT‑outsourcing guidelines.
EVIDENCE
He notes that RBI’s policy is tech-agnostic, emphasizing safety and consumer protection irrespective of the underlying technology, and that existing guidelines already cover many AI-related risks [38-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RBI’s tech-neutral, enablement-first approach is described in the policy analysis [S25] and reinforced by principle-based regulatory discussions in [S27] and [S28].
MAJOR DISCUSSION POINT
Enablement‑first, tech‑neutral principles
DISAGREED WITH
Harshil Mathur
Argument 3
Seven “sutras” adopted as national AI policy
EXPLANATION
Suvendu reports that RBI’s seven AI principles (“sutras”) have been formally adopted by the Indian government for cross‑sector implementation, providing a generic yet accepted framework.
EVIDENCE
He states that the seven principles were included in the RBI report and have been adopted by the Government of India for implementation across sectors [56-59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The adoption of AI principles (the “sutras”) across sectors is documented in the India-first, context-aware governance report [S25].
MAJOR DISCUSSION POINT
Seven “sutras” adopted as national AI policy
Argument 4
Ongoing industry engagement and AI sandbox initiative
EXPLANATION
Suvendu outlines RBI’s continuous engagement with fintechs through monthly forums, surveys, and the development of an AI sandbox that offers data and compute resources to smaller players.
EVIDENCE
He details monthly FinQuery/Finteract events, a survey of 600 entities, three rounds of stakeholder consultations, and plans to operationalise an AI sandbox providing data and compute access, as well as building models like MuleHunter.ai for banks [280-294].
MAJOR DISCUSSION POINT
Ongoing industry engagement and AI sandbox initiative
AGREED WITH
Bharat, John Tass-Parker
Argument 5
Regulatory clarity on liability and accountability
EXPLANATION
Suvendu emphasizes that responsibility for AI outcomes lies with the deploying financial institution, requiring clear liability and accountability frameworks and internal audit processes.
EVIDENCE
He explains that the regulator expects the model deployer (the regulated entity) to be accountable, and that institutions must develop liability, accountability, and audit mechanisms to capture AI-related risks [48-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for clear liability, accountability and audit mechanisms echo the auditability and governance concerns in [S24] and the consumer-centric safeguards on transparent disclosure and accountability in [S29].
MAJOR DISCUSSION POINT
Regulatory clarity on liability and accountability
AGREED WITH
Bharat, John Tass-Parker
Agreements
Agreement Points
Trust, legitimacy and auditability are essential for AI adoption in finance
Speakers: John Tass-Parker, Suvendu K. Pati, Terah Lyons
Institutional AI focus · Deployers as custodians of trust, need for glass-box transparency · Risk-aware governance and auditability
All three speakers stress that beyond model performance, finance needs trustworthy, auditable AI systems; John highlights legitimacy as the scarce attribute [4-19], Suvendu calls for a glass-box approach and clear disclosure to customers [181-188], and Terah points to risk-aware, principle-based governance that ensures auditability [82-86].
POLICY CONTEXT (KNOWLEDGE BASE)
Consumer-centric safeguards such as transparent disclosure and auditability are highlighted as essential to maintain public trust in AI-driven finance, echoing emerging governance guidelines [S46] and the European Central Bank’s call for risk mitigation to protect market stability [S49].
Principle‑based, technology‑neutral regulatory frameworks enable responsible AI innovation
Speakers: Suvendu K. Pati, Terah Lyons
Enablement-first, tech-neutral principles · Importance of principles‑based regulation for experimentation
Both speakers note that a tech-agnostic, principle-based stance lets regulators safeguard consumers while allowing banks to experiment; Suvendu describes RBI’s tech-neutral, consumer-protection focus [38-41] and Terah emphasizes how principle-based, proportionate risk management supports AI experimentation [82-86].
POLICY CONTEXT (KNOWLEDGE BASE)
Regulators are urged to adopt flexible, technology-agnostic frameworks that can accommodate future advances, a stance articulated in recent digital-governance discussions [S52] and reflected in the ECB’s preference for principle-based oversight of AI in finance [S49].
AI can drive financial inclusion and reach underserved populations
Speakers: John Tass-Parker, Ashutosh Sharma, Harshil Mathur, Suvendu K. Pati, Bharat
Productivity gains and underwriting risk reduction
Conversational interfaces to reach unbanked/underbanked
Voice‑first, multilingual “agentic commerce” to unlock mass market
Solving thin‑file credit challenges to expand inclusion
The panel agrees AI will expand access: John links trusted AI to productivity for small businesses and the Global South [16-18]; Ashutosh describes AI thickening thin credit files and improving underwriting for the unbanked [115-118]; Harshil proposes voice-first, multilingual agentic commerce to bring billions online [143-166]; Suvendu envisions AI unlocking financial inclusion through alternate data and language support [340-342]; Bharat frames the discussion around a Global-South focus [96-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry analyses note AI’s potential to expand services to underserved clients and accelerate adoption in the Global South, despite infrastructure gaps, underscoring its role in financial inclusion [S47][S50].
Human‑in‑the‑loop and strong governance safeguards are required
Speakers: Ashutosh Sharma, Suvendu K. Pati, Terah Lyons, John Tass-Parker
Need for human‑in‑the‑loop and data privacy safeguards
Deployers as custodians of trust, need for glass‑box transparency
Risk‑aware governance and auditability
All agree that AI systems must be overseen by humans and governed rigorously: Ashutosh stresses keeping a human in the loop and respecting data-privacy guardrails [127-136]; Suvendu calls for glass-box transparency and accountability [181-188]; Terah highlights auditability and risk-aware governance [82-86]; John also mentions the need for reliability and auditability [10-13].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple governance frameworks place human oversight at the core of AI safety, emphasizing human-in-the-loop mechanisms as a primary safeguard for finance [S46][S64][S63].
AI improves productivity and reduces operating costs in finance
Speakers: Ashutosh Sharma, Harshil Mathur, John Tass-Parker
Productivity gains and underwriting risk reduction
Unit‑economics boost and cost reduction
Large‑scale data processing for underwriting, risk, fraud
The speakers concur that AI drives efficiency: Ashutosh cites OPEX savings and productivity gains in the $2 trillion credit market [106-112]; Harshil notes AI can analyse data 1000× faster, cutting costs in underwriting, risk and fraud detection [134-142]; John links trusted AI to productivity for small businesses and broader economies [16-18].
POLICY CONTEXT (KNOWLEDGE BASE)
Sector reports document efficiency gains and cost reductions from AI-driven automation across banking and financial services [S47][S57][S58].
Regulators and industry must collaborate through ongoing engagement and sandbox initiatives
Speakers: Suvendu K. Pati, Bharat, John Tass-Parker
Ongoing industry engagement and AI sandbox initiative
Regulatory clarity on liability and accountability
Call for global‑south focus and regulatory‑industry partnership
There is consensus on partnership: Suvendu details monthly FinQuery/Finteract forums, surveys and an upcoming AI sandbox to support fintechs [280-294]; Bharat explicitly asks for regulator-industry cooperation and a Global-South perspective [21-27][96-99]; John notes that boards and regulators can only scale what they can govern and supervise [8-9].
POLICY CONTEXT (KNOWLEDGE BASE)
Sandboxes are promoted as multi-stakeholder tools for testing regulatory approaches and fostering responsible innovation, with calls for continuous regulator-industry dialogue [S65][S66][S52].
Model reliability, drift and hallucination are critical risk areas that must be managed
Speakers: Harshil Mathur, Suvendu K. Pati, John Tass-Parker
Data residency, compute infrastructure, and model hallucination risks
Deployers as custodians of trust, need for glass‑box transparency
All three flag reliability concerns: Harshil warns that LLM hallucinations can cause liability even at 0.1% error rates [317-329]; Suvendu mentions the need to monitor model drift, degradation and bias [190-193]; John stresses that institutions must demonstrate reliability and resilience to be rewarded [10-13].
POLICY CONTEXT (KNOWLEDGE BASE)
Hallucination risk is identified as a major trust issue, prompting regulators to require robust monitoring of model drift and reliability in financial AI systems [S45][S46][S57].
Similar Viewpoints
Both argue that a technology‑neutral, principle‑based regulatory stance enables innovation while protecting consumers [38-41][82-86].
Speakers: Suvendu K. Pati, Terah Lyons
Enablement-first, tech-neutral principles
Importance of principles‑based regulation for experimentation
Both see conversational, voice‑first AI as the key to bring financial services to the large, low‑digital‑savvy Indian population [124-130][143-166].
Speakers: Ashutosh Sharma, Harshil Mathur
Conversational interfaces to reach unbanked/underbanked
Voice‑first, multilingual “agentic commerce” to unlock mass market
Both highlight AI’s potential to boost productivity and transform finance, especially for small businesses and the broader economy [16-18][106-112].
Speakers: John Tass-Parker, Ashutosh Sharma
Institutional AI focus
Productivity gains and underwriting risk reduction
Both stress that transparency, accountability and model reliability (including hallucination risk) are essential for safe AI deployment in finance [317-329][181-188].
Speakers: Harshil Mathur, Suvendu K. Pati
Data residency, compute infrastructure, and model hallucination risks
Deployers as custodians of trust, need for glass‑box transparency
Unexpected Consensus
Regulators and fintechs both treat AI hallucination risk as a core liability concern
Speakers: Harshil Mathur, Suvendu K. Pati
Data residency, compute infrastructure, and model hallucination risks
Deployers as custodians of trust, need for glass‑box transparency
While regulators usually focus on consumer protection, Suvendu explicitly calls for monitoring model drift and degradation, aligning with Harshil’s detailed warning about LLM hallucinations and their liability impact [317-329][190-193]. This convergence of technical risk focus between regulator and industry was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Both policymakers and industry cite AI hallucinations as a liability exposure, reflected in governance discussions and risk-based policy drafts [S45][S46].
Overall Assessment

The panel shows strong convergence on the need for trustworthy, auditable AI governed by principle‑based, tech‑neutral regulation; on AI’s role in expanding financial inclusion through conversational and voice‑first interfaces; on productivity gains; and on the importance of regulator‑industry collaboration via sandboxes and ongoing engagement.

High consensus – the speakers from regulators, fintechs, and large banks largely agree on the same strategic priorities, suggesting that coordinated policy actions and industry initiatives are feasible and likely to accelerate responsible AI adoption in the Global South finance sector.

Differences
Different Viewpoints
Degree of automation versus human oversight in AI‑driven financial services
Speakers: Ashutosh Sharma, Harshil Mathur
Need for human‑in‑the‑loop and data privacy safeguards
Voice‑first, multilingual “agentic commerce” to unlock mass market
Ashutosh stresses that fintechs should keep a human in the loop and follow data-privacy guardrails before AI-generated decisions are final [127-136]. Harshil, by contrast, envisions AI agents replacing human staff – e.g., AI-led collection agents and voice-first conversational commerce that can serve villagers without any human mediation [269-273][354-362].
POLICY CONTEXT (KNOWLEDGE BASE)
The balance between autonomous systems and human oversight is debated, with panels highlighting human-in-the-loop versus human-on-the-loop models and concerns about fading agency in automated finance [S60][S61][S62].
Regulatory focus: tech‑neutral guidance for deployers versus strict data‑residency and infrastructure constraints
Speakers: Suvendu K. Pati, Harshil Mathur
Enablement‑first, tech‑neutral principles
Data residency, compute infrastructure, and model hallucination risks
Suvendu argues that RBI’s approach is technology-neutral, providing guidance to regulated entities (deployers) while leaving model developers outside its remit, and stresses a “glass-box” transparency model [171-178][184-188]. Harshil points to India’s stringent data-residency rules and the lack of cutting-edge compute in Indian data centres, which he says block the deployment of many foreign AI models and create practical regulatory bottlenecks [300-307][310-311].
POLICY CONTEXT (KNOWLEDGE BASE)
Tension exists between calls for technology-neutral frameworks and national data-residency requirements that limit deployment, especially in emerging markets facing compute constraints [S50][S52].
Acceptable level of AI error (risk tolerance) in financial applications
Speakers: Suvendu K. Pati, Harshil Mathur
“All said and done, this is a probabilistic technology”
Data residency, compute infrastructure, and model hallucination risks
Suvendu acknowledges that AI is probabilistic and calls for a tolerant, differentiated regulatory approach that accepts occasional mistakes [66-68]. Harshil counters that even a 0.1% hallucination rate is unacceptable for financial services and that models must be virtually error-free before deployment [327-329].
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-tolerance thresholds vary across sectors; regulators are working to define acceptable error margins for AI in finance, as discussed in risk-assessment literature [S59][S57].
Unexpected Differences
Voice‑first agentic commerce versus human‑in‑the‑loop best practice
Speakers: Harshil Mathur, Ashutosh Sharma
Voice‑first, multilingual “agentic commerce” to unlock mass market
Need for human‑in‑the‑loop and data privacy safeguards
Harshil argues that a voice-first, conversational AI layer will replace the need for human agents and unlock the majority of Indian consumers who currently shop offline [143-166]. Ashutosh, while also promoting conversational interfaces, insists that a human must ultimately validate AI-generated decisions and that data-privacy guardrails are mandatory [127-136]. The contrast between a fully automated agentic vision and a human-oversight-centric approach was not anticipated given the overall consensus on trust.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on agentic AI in e-commerce stress the need for safeguards despite a push for voice-first autonomous transactions, highlighting the human-in-the-loop versus agentic debate [S61][S62][S60].
Perceived sufficiency of policy versus practical compute constraints
Speakers: Suvendu K. Pati, Ashutosh Sharma
Seven “sutras” adopted as national AI policy
Call for global‑south focus and regulatory‑industry partnership
Suvendu highlights that RBI’s seven AI principles have been formally adopted by the Indian government, suggesting a robust policy foundation [56-59]. Ashutosh, however, points out that for fintechs the biggest hurdle is access to compute infrastructure rather than regulation, indicating that policy adoption has not yet translated into practical capability for many players [333-334]. This gap between policy confidence and on-the-ground resource constraints was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Practitioners report that compute and talent shortages pose larger barriers than existing policy frameworks, echoing observations from Global South AI adoption studies and critiques of policy-practice gaps [S50][S55].
Overall Assessment

The panel largely agrees that trust, governance and regulator‑industry collaboration are essential for AI in finance. However, clear fault lines emerge around (i) the extent of automation versus human oversight, (ii) how strictly data‑residency and infrastructure constraints should shape regulation, and (iii) the acceptable level of AI error. These disagreements reflect a tension between a rapid, innovation‑first agenda and a risk‑averse, compliance‑driven stance.

Moderate to high – while the shared goal of trustworthy AI is strong, divergent views on risk tolerance, regulatory scope and automation depth could slow consensus on concrete policy measures, potentially leading to fragmented implementation across the Global South.

Partial Agreements
All speakers concur that legitimacy, auditability and trustworthy governance are the core prerequisites for AI adoption in finance, even though they differ on implementation details. John frames trust as the business model of finance [4-19]; Suvendu stresses that regulated entities must provide glass‑box transparency and accountability [181-188]; Terah highlights the importance of principle‑based, risk‑aware governance that enables auditability [82-86]; Ashutosh adds that human oversight and data‑privacy safeguards are essential best practices [127-136].
Speakers: John Tass‑Parker, Suvendu K. Pati, Terah Lyons, Ashutosh Sharma
Institutional AI focus
Deployers as custodians of trust, need for glass‑box transparency
Risk‑aware governance and auditability
Need for human‑in‑the‑loop and data privacy safeguards
There is shared agreement that close collaboration between regulators and industry is vital for responsible AI rollout. Bharat frames the need for global‑south‑focused partnership [21-27][96-99]; Suvendu describes RBI’s regular FinQuery/Finteract forums, surveys and the planned AI sandbox to foster industry dialogue [280-294]; Harshil notes that his firm works with regulators to address data‑residency and compliance challenges [300-303].
Speakers: Bharat, Suvendu K. Pati, Harshil Mathur
Call for global‑south focus and regulatory‑industry partnership
Ongoing industry engagement and AI sandbox initiative
Regulatory challenges and engagement with government
Takeaways
Key takeaways
In finance, AI success hinges on trust, legitimacy and governance rather than raw capability; institutions act as custodians of that trust.
RBI’s approach is principle‑based and technology‑neutral, focusing on enabling innovation while managing risk through seven “sutras” now adopted nationally.
RBI will deepen industry engagement via regular FinQuery/Finteract sessions and an AI sandbox that provides data and compute resources to smaller fintechs.
Key AI use cases in finance include fraud detection, payments, market analytics, compliance, underwriting of thin‑file customers, and productivity gains.
Conversational, voice‑first and multilingual “agentic commerce” is seen as a major lever to reach the unbanked and under‑banked in India.
Challenges identified are data‑residency requirements, limited affordable compute, model hallucinations, and the need for human‑in‑the‑loop and privacy safeguards.
Regulators expect a “glass‑box” transparency model where AI deployment is accountable to the institution, not the model developer.
Financial institutions can export their risk‑management and model‑risk‑governance practices as templates for broader AI oversight.
Resolutions and action items
RBI to operationalize an AI sandbox that offers access to data sets and compute infrastructure for fintechs and smaller entities.
RBI to continue monthly FinQuery/Finteract engagements and conduct periodic surveys to monitor AI adoption and challenges.
Regulated entities (banks, NBFCs, fintechs) to embed board‑level AI governance policies, audit frameworks, and glass‑box transparency disclosures.
Fintechs to adopt human‑in‑the‑loop designs and ensure compliance with data‑privacy (DPDP) guardrails.
Industry bodies (self‑regulatory organisations) to develop toolkits, benchmarking services and standards for bias‑free, transparent AI models.
JPMorgan Chase to leverage its existing risk‑management culture to scale trusted AI deployments and share best practices with the sector.
Unresolved issues
How to reliably mitigate LLM hallucinations and ensure zero‑tolerance for incorrect outputs in high‑risk financial decisions.
Clarification of liability and accountability when AI models produce erroneous results, especially for third‑party model developers.
Scalable solutions for data residency and compute constraints that prevent use of cutting‑edge foreign models in India.
Defining concrete metrics and standards for AI explainability and auditability that satisfy both regulators and innovators.
Pathways to rapidly build deep, multi‑cyclical credit data in India to match the richness of western datasets for advanced underwriting.
Suggested compromises
RBI’s principle‑based, tech‑neutral framework, encouraging innovation while nudging firms toward responsible AI (innovation versus restraint).
Allowing experimentation within a regulated sandbox environment rather than imposing prescriptive rules upfront.
Requiring glass‑box transparency and human‑in‑the‑loop controls as a middle ground between full automation and manual processes.
Balancing data‑residency requirements with the development of domestic AI models and compute resources to reduce reliance on foreign LLMs.
Thought Provoking Comments
We’re moving from an era of frontier AI to an era of institutional AI – the hard problem is not capability but legitimacy and trust. In finance, trust is the business model, and institutions will only absorb systems they can trust, audit, and govern.
Sets a paradigm shift from focusing on model performance to emphasizing legitimacy, framing the entire discussion around trust as the scarce resource in financial AI adoption.
Established the central theme, prompting all subsequent speakers to address trust, governance, and regulatory frameworks rather than just technical breakthroughs.
Speaker: John Tass-Parker
Our approach is to enable responsible adoption of AI – we are tech‑neutral, focus on innovation versus restraint, and place the responsibility on the model deployers (the regulated entities) rather than the developers.
Introduces a nuanced regulatory stance that balances encouragement of innovation with risk mitigation, and clarifies the locus of accountability.
Shifted the conversation toward practical governance, leading panelists to discuss audit frameworks, liability, and the need for ‘glass‑box’ transparency.
Speaker: Suvendu K. Pati
The principles‑based, technology‑neutral regulatory approach has allowed banks to experiment widely while managing proportional risk for each use case.
Highlights how a flexible regulatory philosophy can foster innovation without stifling it, reinforcing the regulator’s earlier point and providing a concrete example from JPMorgan.
Validated the regulator’s stance, encouraging other participants to discuss how similar frameworks can be exported globally and applied to AI governance.
Speaker: Terah Lyons
AI can turn thin‑file credit data into thick files by leveraging unstructured data, dramatically improving underwriting for the large unformalized segment of India’s economy.
Identifies a specific, high‑impact application of AI that addresses a core financial inclusion challenge in India, linking technology to socioeconomic outcomes.
Steered the discussion toward concrete use‑cases in credit risk, prompting further dialogue on unit economics and the strategic value of AI for fintechs.
Speaker: Ashutosh Sharma
Agentic commerce – voice‑first, multilingual, conversational interfaces – is the next wave that will unlock online commerce for the 300‑400 million Indian UPI users who currently don’t shop online.
Introduces a transformative vision for payments that goes beyond traditional UI/UX, tying AI to mass adoption and cultural buying habits.
Shifted the focus from backend data processing to customer‑facing experiences, leading to discussions on accessibility, language diversity, and the role of AI in bridging the digital divide.
Speaker: Harshil Mathur
We need a ‘glass‑box’ approach: customers must be told they are interacting with an AI system and should have the option to opt for a non‑AI interaction; institutions must embed auditability, bias mitigation, and model‑drift monitoring into board policies.
Moves the abstract notion of trust into concrete operational requirements, emphasizing transparency and continuous governance.
Prompted panelists to discuss practical implementation steps such as human‑in‑the‑loop, audit frameworks, and the challenges of LLM hallucinations.
Speaker: Suvendu K. Pati
Regulatory sandbox and the upcoming AI sandbox will democratize access to data and compute for smaller fintechs, enabling them to innovate without prohibitive resource constraints.
Proposes a concrete mechanism to level the playing field, addressing a major barrier for fintech innovation in the Global South.
Generated interest from other participants about infrastructure challenges and led to Harshil’s remarks on data residency and compute availability.
Speaker: Suvendu K. Pati
AI can act as a personal financial assistant that reduces fraud and mis‑selling for everyday consumers – even my 70‑year‑old father could get real‑time advice before making a purchase.
Personalizes the broader societal benefit of AI, illustrating a tangible consumer‑level impact beyond institutional efficiency.
Humanized the discussion, reinforcing the earlier points about trust and prompting further reflection on user‑centric design and risk of hallucinations.
Speaker: Harshil Mathur
Financial inclusion can be accelerated by AI‑driven language and voice‑based banking, making services accessible to illiterate or differently‑abled users and bridging the digital divide.
Links AI capabilities directly to inclusive policy goals, expanding the conversation from technical adoption to societal transformation.
Served as a concluding thematic thread, influencing the final “big bet” round where multiple panelists echoed the vision of AI‑enabled inclusive finance.
Speaker: Suvendu K. Pati (later reiterated)
My bet: AI‑led financial services will create a ‘Viksit Bharat’ (Developed India) – a fully AI‑driven financial ecosystem that reaches every citizen.
Summarizes the aspirational potential of AI for the nation, tying together earlier points on inclusion, language, and accessibility.
Provided a rallying statement that encapsulated the discussion’s optimism, reinforcing the forward‑looking tone of the closing remarks.
Speaker: Ashutosh Sharma
Overall Assessment

The discussion was anchored by John Tass‑Parker’s framing of legitimacy over capability, which set a trust‑centric agenda. The regulator’s tech‑neutral, innovation‑first stance (Suvendu) and its concrete proposals (glass‑box transparency, AI sandbox) acted as pivotal turning points, steering the conversation from abstract policy to actionable governance. Panelists then layered depth by presenting high‑impact use‑cases—credit underwriting for thin‑file borrowers, agentic commerce for mass‑market payments, and personal AI assistants—to illustrate how trust translates into real‑world value. Each of these insights sparked new sub‑threads (risk management, infrastructure constraints, inclusion) and collectively shaped a narrative that moved from regulatory philosophy to concrete pathways for AI‑driven financial inclusion in the Global South.

Follow-up Questions
How can regulators ensure AI model transparency to customers (glass‑box approach) and what mechanisms are needed?
Suvendu emphasized the need for customers to know they are interacting with AI and to have the option for non‑AI engagement, highlighting a gap in current practice.
Speaker: Suvendu K. Pati
What frameworks are required to audit AI model drift, bias, and degradation over time within financial institutions?
He mentioned the importance of periodic checks on model performance and incremental risks, indicating a need for systematic audit processes.
Speaker: Suvendu K. Pati
How effective will the proposed AI sandbox be in democratizing access to data and compute resources for smaller fintechs?
Suvendu described plans for an AI sandbox to address compute and data access constraints, but its impact remains to be evaluated.
Speaker: Suvendu K. Pati
What standards, toolkits, or benchmarking services should industry bodies develop to assess AI model bias, transparency, and compliance?
He called on self‑regulatory organizations to create such tools, suggesting a research gap in practical implementation.
Speaker: Suvendu K. Pati
How can data residency requirements be reconciled with the deployment of cutting‑edge large language models that are hosted abroad?
Harshil highlighted regulatory data‑locality rules that block use of foreign LLMs, indicating a need for solutions or policy adjustments.
Speaker: Harshil Mathur
What techniques can mitigate hallucinations in LLMs to meet the financial sector’s low‑tolerance for erroneous outputs?
He expressed concern that even a 0.1% hallucination rate is unacceptable for finance, pointing to a research need for more reliable models or guardrails.
Speaker: Harshil Mathur
What AI‑driven methods can improve underwriting for thin‑file customers in India?
Ashutosh noted AI’s ability to use unstructured data to thicken thin files, but practical approaches and validation remain open questions.
Speaker: Ashutosh Sharma
How can AI enable language‑ and voice‑based conversational banking for illiterate, low‑literacy, or differently‑abled users?
He stressed the potential of assistive AI to bridge the digital divide, requiring research into inclusive interfaces.
Speaker: Suvendu K. Pati
How can lessons from financial sector AI governance be transferred to other industries?
Terah suggested that the sector’s strong risk‑management practices could be exported, but concrete pathways need exploration.
Speaker: Terah Lyons
How can the principle of ‘innovation versus restraint’ be operationalized in real‑world AI deployments?
He cited this principle as a regulatory nudge, yet practical implementation guidelines are lacking.
Speaker: Suvendu K. Pati
What metrics should be used to evaluate AI’s impact on fraud reduction and mis‑selling in finance?
Harshil discussed AI’s potential to curb fraud and mis‑selling but did not specify measurement frameworks.
Speaker: Harshil Mathur
Beyond regulatory sandboxes, how can compute and data access constraints for fintechs be addressed?
He identified compute scarcity as a major hurdle for fintech innovation, indicating a need for broader infrastructure solutions.
Speaker: Ashutosh Sharma
What public‑private partnership models could accelerate AI adoption in the financial sector?
He asked whether such collaborations could help overcome regulatory and technical challenges, suggesting a research avenue.
Speaker: Harshil Mathur
How can AI‑driven personalized ‘N of 1’ services be scaled cost‑effectively for rural and low‑income populations?
He highlighted the potential of AI to lower service costs and personalize experiences, but scalability and affordability need study.
Speaker: Harshil Mathur
How should accountability and liability be structured for AI models deployed by regulated entities?
He noted that responsibility rests with model deployers, raising questions about legal and governance frameworks.
Speaker: Suvendu K. Pati

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.