How the Global South Is Accelerating AI Adoption_ Finance Sector Insights
20 Feb 2026 15:00h - 16:00h
Summary
The panel opened with John Tass-Parker framing the shift from “frontier AI” to “institutional AI,” emphasizing that in finance the critical challenge is establishing legitimacy and trust rather than raw model capability [4-10][18-19]. He argued that trust is the business model of financial institutions, and that systems demonstrating reliability, auditability and resilience will be rewarded [5-8][10-13].
Bharat then introduced Suvendu K. Pati of the RBI to discuss India’s regulatory stance on AI in finance [24-27]. Pati explained that the RBI’s approach is to enable responsible AI adoption through a technology-neutral, principle-based framework that focuses on innovation, risk mitigation and enhancing trust [30-36][38-41]. He stressed that liability rests with the regulated entities deploying AI, requiring “glass-box” transparency in which customers are informed when they interact with AI and institutions must audit for bias, drift and degradation [46-52][184-188]. To support the ecosystem, the RBI runs regular FinQuery/Finteract engagements, conducts surveys, and is developing an AI sandbox to provide data and compute resources to smaller fintechs [280-292].
Terah Lyons highlighted JPMorgan’s long-standing AI deployment across fraud detection, payments, markets and compliance, noting that the sector’s principle-based, technology-agnostic regulation has enabled extensive experimentation [73-80][82-84]. Ashutosh Sharma described AI’s strategic value for India’s fintechs through improved unit economics, the ability to thicken thin credit files using unstructured data, and expanding reach via conversational interfaces, while recommending human-in-the-loop controls and strict data-privacy safeguards [106-118][119-126][127-133]. Harshil Mathur added that AI accelerates processing of large data volumes and that “agentic commerce” – voice-first, multilingual conversational purchasing – can unlock the billions of Indians currently excluded from online shopping [134-141][143-166].
Both regulators and industry agreed that deployers must act as custodians of trust, ensuring transparency, auditability and governance, with the RBI focusing guidance on regulated entities rather than model developers [171-188][196-199]. Participants identified key regulatory challenges such as data-residency requirements, limited access to cutting-edge models in Indian data centres, and the risk of LLM hallucinations, which constrain deployment in financial services [300-319][322-329]. The discussion concluded with a shared vision that AI can dramatically enhance financial inclusion by providing language- and voice-based banking, assistive technologies for the disabled, and personalized advisory services to all citizens [340-347][368-371][373-376].
Overall, the panel underscored that building trustworthy, transparent AI infrastructures and collaborative regulator-industry engagement are essential to realizing AI’s potential for inclusive finance in the Global South [11-13][20-21][376-380].
Keypoints
Major discussion points
– Trust and legitimacy are the core “currency” for institutional AI in finance.
John highlighted that while model performance is important, the scarce attribute in the sector is legitimacy – the ability to demonstrate reliability, auditability and resilience, which determines whether regulators, boards and customers will adopt AI systems [4-10][12-19].
– The RBI’s regulatory philosophy is to enable responsible AI adoption rather than prescribe technology-specific rules.
Suvendu explained a tech-neutral, principles-based approach that encourages innovation, mandates lifecycle governance, and introduces new tools such as an AI sandbox and the “seven sutras” to guide banks and fintechs [28-40][45-48][56-63][65-68][171-188][280-295].
– Key AI use-cases in finance are already delivering value: fraud & scams remediation, payments, compliance, underwriting and new “agentic” commerce.
Terah listed fraud detection, payments and compliance as high-impact areas, while Ashutosh and Harshil described how AI can thicken thin credit files, enable voice-first conversational commerce, and automate large-scale data analysis [73-80][84-86][108-118][124-166][260-268][269-274].
– Best-practice challenges for fintechs revolve around human-in-the-loop oversight, data privacy, residency requirements and model reliability (e.g., hallucinations).
Ashutosh stressed keeping a human in the loop and adhering to data-privacy guardrails [127-133]; Harshil added that Indian data-residency rules, lack of local compute, and the risk of erroneous LLM outputs are major operational hurdles [300-311][314-322].
– A shared vision for the next five years is AI-driven financial inclusion through conversational, multilingual and assistive technologies.
Panelists repeatedly spoke of AI lowering service costs, delivering “voice-first” banking, expanding credit to the un-banked, and embedding language-aware assistants in every person’s pocket [338-346][350-357][368-374].
Overall purpose / goal of the discussion
The session was convened to explore how the financial sector, particularly in the Global South, can transition from “frontier AI” to “institutional AI” by building trustworthy, auditable systems, aligning regulatory frameworks (exemplified by the RBI’s approach), sharing practical use-cases, and charting a collaborative roadmap that enables responsible, scalable AI adoption across banks, fintechs and regulators.
Overall tone and its evolution
The conversation began with a formal, measured tone focused on the challenges of legitimacy and governance. As regulators and industry leaders presented concrete initiatives (AI sandbox, principles, use-cases), the tone shifted to optimistic and forward-looking, emphasizing innovation, partnership and the societal benefits of AI. Throughout, the dialogue remained collaborative and constructive, with occasional reiteration for emphasis but no overt conflict.
Speakers
– John Tass-Parker
– Role/Title: Lead, Policy Partnerships at JPMorgan Chase
– Area of Expertise: Policy partnerships, AI governance in finance [S1]
– Bharat
– Role/Title: Moderator (affiliated with JPMorgan Chase)
– Area of Expertise: Finance and AI moderation (not explicitly stated)
– Harshil Mathur
– Role/Title: Co-founder and CEO, Razorpay
– Area of Expertise: Fintech product development, AI-driven payments
– Terah Lyons
– Role/Title: JPMorgan Chase representative (speaker on trusted AI)
– Area of Expertise: Trusted AI deployment, risk management in financial services
– Ashutosh Sharma
– Role/Title: Investor in India’s fintech ecosystem
– Area of Expertise: Fintech investment, AI applications in finance [S9]
– Suvendu K. Pati
– Role/Title: Chief General Manager and Head of FinTech, Reserve Bank of India [S10][S11]
– Area of Expertise: FinTech regulation, AI policy and governance in banking
Additional speakers:
– None identified beyond the listed speakers.
John Tass-Parker opened the session by noting that public conversation on artificial intelligence often centres on model breakthroughs, speed and capability, but that finance is now moving from a “frontier AI” era to an “institutional AI” era where the hard problem is not raw capability but legitimacy and trust [4-10]. He argued that in financial services trust is not merely a feature but the very business model [5-7]; institutions will only adopt systems they can govern, boards can scale and regulators can supervise [8-9]. Consequently, the sector rewards AI that is reliable, auditable and resilient, and the emerging focus is on the infrastructure that enables model risk management, oversight, explainability, cyber-security and regulatory engagement [12-13][14-19].
Moderator Bharat then introduced the panel and turned to Suvendu K. Pati of the Reserve Bank of India (RBI), asking how India is approaching AI regulation in the highly regulated financial sector [21-27].
Suvendu K. Pati explained that the RBI’s stance is to enable responsible AI adoption rather than impose prescriptive, technology-specific rules. The regulator adopts a technology-neutral, principle-based framework that encourages innovation while managing risk, emphasising “innovation versus restraint” as a regulatory nudge [30-36][38-41][56-63]. A lifecycle-management mindset is required, with liability and accountability placed on the regulated entities that deploy AI, not on the model developers [46-52]. The RBI calls for a “glass-box” approach: customers must be informed when they are interacting with an AI system and must be able to opt for a non-AI alternative [181-188]; institutions must embed audit mechanisms to monitor bias, model drift and degradation [190-193].
To operationalise this philosophy, the RBI has instituted regular industry-engagement programmes such as FinQuery/Finteract, which convene over 2,000 entities and include a survey covering close to 600 entities, complemented by one-hour deep-dive engagements with about 75 entities [280-284][285-286]. Building on the “seven sutras” that have been adopted nationally [56-59] and on its sandbox framework in operation since 2019, the RBI is developing an AI sandbox that will provide access to curated data sets and affordable compute resources for smaller fintechs, thereby democratising AI capability [287-295][295-298]. The RBI also highlighted its own AI model, MuleHunter.ai, which is already deployed in 26 banks and is being rolled out to other entities [300-303]. It encourages self-regulatory organisations to create toolkits and benchmarking services for bias-free, transparent models [294-295].
Terah Lyons, representing JPMorgan Chase, highlighted that the bank has been deploying AI for over a decade across fraud and scams remediation, payments, market analytics and compliance [73-80]. She credited the sector’s principle-based, technology-agnostic regulation for allowing extensive experimentation while maintaining proportionate risk management [82-86]. This risk-aware governance, she argued, can serve as a template for other industries seeking trustworthy AI lifecycle oversight [82-86].
Ashutosh Sharma, a leading fintech investor, described AI’s strategic importance for India’s financial ecosystem. He noted that the Indian credit market, valued at US$2 trillion, incurs 3-5% OPEX (≈US$60-100 billion annually), and AI can dramatically improve productivity and unit economics [106-112]. By leveraging unstructured data, AI can “thicken” thin credit files for the large un-formalised segment, enabling more inclusive underwriting [115-118]. He also stressed that conversational interfaces can broaden reach, but best practice demands a human-in-the-loop and strict adherence to data-privacy guardrails [127-136]. AI-led collection agents were cited as a way to enhance outreach while preserving oversight [260-265], and biometric payments in UPI were highlighted as an emerging use case [250-255].
Harshil Mathur expanded on the data-processing advantage, explaining that AI can analyse data at a thousand-fold speed compared with traditional tools, facilitating underwriting, risk management and fraud detection [134-141]. He introduced the concept of agentic commerce – voice-first, multilingual, conversational purchasing – as the next wave that could unlock the 300-400 million Indian UPI users who currently do not shop online [143-166]. Harshil emphasized that AI can dramatically lower the cost of servicing and enable voice-first, conversational experiences for villagers, while still recognising a role for human oversight in many processes [260-275].
Across the discussion, there was strong agreement that AI deployers must act as custodians of trust, providing glass-box transparency, auditability and board-level governance [171-188][196-199][10-13]. All speakers underscored that legitimacy, not merely model performance, is the scarce attribute for AI adoption in finance [4-10][12-19].
Disagreements were mild. Ashutosh stressed keeping a human in the loop and strict data-privacy compliance [127-136], whereas Harshil highlighted the potential of highly automated, voice-first agents to serve underserved villagers, while still acknowledging the need for safeguards [260-275]. A second tension concerned regulatory scope versus practical constraints: Suvendu stressed a tech-neutral, deployer-focused guidance [190-196][184-188], while Harshil pointed to India’s stringent data-residency rules and the limited availability of cutting-edge compute in domestic data centres, which impede the use of foreign large language models [300-307][310-311].
Both regulators and industry converged on a vision for financial inclusion. Suvendu envisaged AI-driven alternate-data underwriting and language-aware conversational banking to bring the unbanked into the formal credit system [340-347]; Ashutosh spoke of “Viksit Bharat”, an AI-led financial ecosystem reaching every citizen [364]; and Terah reiterated that AI could place a personal financial advisor in every pocket, extending services to the poorest and to people with disabilities [368-374][373-376].
The panel identified several action items. The RBI will operationalise the AI sandbox and continue regular FinQuery/Finteract sessions [280-295][295-298]; regulated entities are urged to embed board-level AI policies, audit frameworks and glass-box disclosures [190-196][181-188]; fintechs should adopt human-in-the-loop designs and comply with DPDP data-privacy guardrails [127-136]; industry bodies are called upon to develop bias-assessment toolkits [294-295]; and JPMorgan’s risk-aware governance was highlighted as a model for scaling trusted AI deployments [82-86].
Unresolved challenges remain. Mitigating LLM hallucinations to meet the sector’s near-zero tolerance for erroneous outputs is an open research problem [317-329]; clarifying liability when AI models produce faulty decisions, especially for third-party developers, requires further legal framing [190-193]; reconciling data-residency requirements with the need for cutting-edge models calls for domestic model development or policy adjustments [300-307][310-311]; and defining concrete metrics for fraud-reduction, mis-selling and inclusion impact is still pending [317-329][260-268]. Suvendu also noted that AI is a probabilistic technology, so regulators should adopt a tolerant and differentiated approach when embedding it in financial services [310-313].
In closing, Bharat thanked the distinguished panel, reaffirmed the focus on super-charging AI adoption in the Global South, and highlighted the consensus that trustworthy, transparent AI infrastructure, supported by collaborative regulator-industry engagement, will be pivotal for inclusive finance [376-380]. The discussion left participants optimistic that, within the next five years, AI will substantially lower service costs, deliver personalised “N-of-1” experiences, and unlock a multilingual, voice-first financial ecosystem for billions of underserved users [338-346][350-357][368-371].
Hello everyone, my name is… oh sorry, we’ve got a photographer here now, so we’re going to take our photo. False start, sorry, bear with us. Well, now that we’ve got the most important thing out of the way, we’ll get started. Hello everyone, my name is John Tass-Parker, I lead policy partnerships at JPMorgan Chase, and I just wanted to firstly thank everyone for being here for this very important conversation. When people talk about AI, the conversation tends to focus on model breakthroughs, speed, capability. But in finance, which our wonderful panellists here represent, that’s never been the real question. We’re really moving from this era of frontier AI, in our world certainly, to an era of institutional AI. And in this phase, the hard problem is not actually the capability itself, it’s legitimacy and trust. Financial services is one of the most regulated sectors in the global economy, and yet it’s consistently been one of the first adopters of AI and all…
technologies. Why? Because in finance, trust is not a feature. It’s actually the business model. Institutions only absorb systems they trust. The C-suite can only scale what their boards can govern. Regulators can only enable what they can supervise. And increasingly, those that can demonstrate reliability, auditability, resilience, not just model performance, will be the ones that are rewarded. The more important story is coming into focus in rooms like this. It’s the infrastructure enabling institutional AI: model risk management, oversight, explainability, cyber security, regulatory engagement. Finance has had to learn how to deploy these incredibly powerful systems inside real-world guardrails. And that’s why this conversation, frankly, matters not only for our financial and banking sectors, but also beyond that.
If we want AI to drive productivity for small business, for farmers, for teachers, for local government, for state government, for international, across the global south, then trusted deployment is what unlocks it. Capability is increasingly being commoditized. It’s the legitimacy that is the scarce attribute here. Today’s discussion is about how we build systems that institutions will actually absorb and how finance can help shape a framework for responsible, scalable adoption. With that, I’m delighted to hand it over to Bharat to set the broader context for how we think about safe and trusted AI globally.
Thank you, John. It is my honor to moderate this discussion with a truly distinguished panel. So without further ado, let me just jump straight into it. Capitalizing the artificial intelligence moment for finance. The financial sector, as we all know, is one of the most regulated sectors in our country, India, and in most parts of the globe. So I think it’s appropriate to turn to the regulator from India, Mr. Suvendu Pati from RBI, who’s to my right. Suvendu ji, the financial sector has been one of the earliest adopters of AI, despite being one of the most regulated sectors, as I mentioned. Given this dichotomy, how is India approaching AI regulation in finance?
Yeah, thank you, Bharat, and thank you, everyone, for having me here. I would begin by saying that “regulating AI” is not phrasing I would entirely agree with; I would say that we are here to enable responsible adoption of AI in the financial sector. That would be the overall approach to this technology, I would say, as the Reserve Bank of India understands it. And why would I say that? Clearly, we recognize the potential of this new technology. Although it’s not very new in that sense, it has really come to the limelight over the past five years. And that’s because, you know, data is one of the key ingredients which it thrives on.
And we had constituted an external expert committee, of which I was a member, to look at this sector and at how this technology can be embedded into the financial services segment. So in our approach, when we looked at it, we wanted to nudge slightly more towards enabling innovation in some sense, because unless we play around with this technology and experiment enough, you would never utilize its full potential. So basically it is concentrated towards, you know, innovation enablement as well as risk mitigation. The risks that have been talked about, bias, accountability, auditability, explainability, are pretty well known. And these need to be managed in a way such that we ultimately come out with the principle of enhancing trust, which was also a fundamental attribute of the financial sector.
And in terms of regulation, Reserve Bank’s approach has been largely tech-neutral. It’s tech-agnostic in some sense, because most of the time, you know, new technologies, new things would keep evolving. But, for example, safety or consumer protection, not doing consumer harm, is a good stated objective to pursue irrespective of what technology you adopt. Similarly, on IT services outsourcing guidelines, on, you know, managing concentration risk, there are already existing guidelines which do provide the guidance to the regulated entities like banks and NBFCs on how they manage their affairs. So in some sense, the consumer protection guidelines also do cover some of the safety aspects that we would generally talk about. So in some sense, there is a regulation which is in place.
There is guidance which is already in place. It’s only that, because of this transformational technology, there is a need to look at it from a new-technology lens and to provide any additional, incremental guidance that may be needed. And that’s the precise point we have come out with in this report. One of the things that we expect institutions to go forward with is that the entire lifecycle management of AI should be a thought process. The institutions need to look at the liability and accountability framework in a much different way. Our expectation is that customers need to be protected in all cases. So it’s not a question of the model developers; it’s about the model deployed by the entities.
The responsibility should rest with the model deployers, which are the regulated entities in this case. And therefore, there are three or four additional dimensions which need to be looked at in terms of supervision and in terms of the internal audit assurance framework. How do you audit, how do you validate, or how do you improve your product approval process to capture the additional incremental risks on account of AI? So these are some of the additional things that we are looking at to provide some nudge. And within the report, there are seven principles, or sutras, that the report talks about, and these have been adopted. I’m happy to report that these have been adopted by the Government of India for implementation across sectors.
So these are generic principles and they have found acceptance. One of the principles that we have talked about there is innovation versus restraint. Everything else remaining constant, entities should prioritize innovation rather than restraint. So that is a nudge, an innovation enablement, that we are trying to give to the sector. They should feel comfortable with this. So our whole approach is optimistic. We want people to experiment, adopt it responsibly, but think creatively in terms of the liability framework, revisit the accountability framework, have a board governance policy in place, and improve their internal systems and processes to give comfort not only to their own set of employees, but to other stakeholders
about this new technology. All said and done, this is a probabilistic technology. There are bound to be some mistakes here and there. So we need to have a very tolerant and differentiated approach when we embed this into the financial services where people’s money is involved. I will stop here, but we’ll talk something more later.
Thank you, Suvenduji, for that insight. If I could now turn towards the global view, our employer J.P. Morgan Chase is one of the world’s largest deployers of artificial intelligence. Terah, in terms of trust, what are some of the most impactful use cases trusted AI is being leveraged for in finance in your purview?
We joke that we shouldn’t worry about AI until we figure out AV. So I guess this is a perfect example of that. Thanks for the question, Bharat. I think maybe the first thing to say about this, and this probably isn’t news to this room especially, is that AI has been used in finance in deployed settings for over a decade. And at JPMorgan Chase, we’ve been using it across use cases spanning our bank, starting first with the era of analytic tools, moving into machine learning capabilities, now in the direction of large language model deployment, and looking directionally towards the era of agentic capabilities and beyond. And spanning all of those, I think the most impactful use cases that we have seen are certainly in fraud and scams remediation, which is just a huge priority for the entire sector.
Payments, there’s some really exciting applications there, and in markets as well. And honestly, in compliance use cases for us too, just given the focus that we have on ensuring that we’re being compliant with our regulatory requirements. I also want to pick up on a couple of things that were previously mentioned that I think are worth underscoring. One of those points was the point that you made, Mr. Pati, about one of the strengths of the financial sector regulatory approach being the principles-based, technology-neutral approach that our regulators have taken. And I think it has allowed banks to experiment to a wide degree with the types of techniques that I just talked about.
While thinking about the proportionate risk of each one of those use cases as we are deploying. So I think that’s been really key. And the second point to underscore, which you had mentioned previously and which I think is a really good one for us to address as well, is that, because of the strength of the financial sector’s approach to AI governance, there are really useful lessons that can be exported from this sector in considering questions of oversight and regulatory control.
That speaks to the sutras that you mentioned being adopted more widely across the economy in the RBI report, which I think are really well aligned to wider consideration just beyond, you know, the banking
Thank you, Terah. And I think now we move to the more important issue of putting money into this particular industry. Ashutosh is one of the leading deployers of capital in India’s fintech ecosystem. What makes AI so strategic, in your view, for the sector? And what are some of the best practices you see being adopted by fintechs, particularly to build trust in AI?
Super. Thank you so much for having me here. I think over the last two, three, four days, folks in the room have probably attended 5, 10, 15 such sessions, maybe more. And I think if there’s one takeaway that you have taken with you, it is that AI is going to change almost everything. And so it will the financial services sector. I think this is equally, in fact more, applicable to India, in a bigger measure than anywhere in the world. And the reason, I would say, is threefold. The first is unit economics. Let’s take an example. The Indian credit market is $2 trillion in value. We spend anywhere from 3% to 5% on OPEX. Just on OPEX, we invest $60 to $100 billion a year.
And AI can do a lot strategically to improve productivity and therefore make these businesses much healthier. It’s only the beginning of the journey we are taking. I think the second point of strategic importance is risk. A large section of our economy in India is unformalized. What I mean is that, in credit parlance, it’s called a thin-file issue: for a large section of society, we don’t have enough data points, enough metrics, to make the file thick enough for you to underwrite them. Now with AI, because of the technology’s ability to use unstructured data, you can actually very quickly, and in a very cost-effective way, make that thin file a thick file.
So again, I think underwriting risk for a large section of society in India will be possible now with this. I think one of the other more important points is reach. Buying a financial product is not like buying a shirt on Myntra, or ordering food on Swiggy, or ordering a saree on Meesho. This is a complex product. It needs engagement. The app or whatever platform you’re using asks you a bunch of questions before you even decide. Today, again, for a large section of Indian society, it’s very hard to engage with that app. It’s complex. Now imagine a world tomorrow where you can speak to that app. That enables reach of financial products and financial services to, again, a very large section of society.
So I think it’s extremely, extremely strategic from that standpoint. As for best practices, look, we are too early. I mean, we can only talk about practices; whether they are best or not, only time will tell. And because we are early, and because of what sir said, a financial services transaction is a high-impact transaction for anyone. And therefore having a bot run a bank, I think, is not advisable. So one of the practices that good fintech companies are using is keeping a human in the loop: the technology can prepare a file, but in the end it’s a human who decides. The second thing, again, is data. While data is of primary importance in the AI world, this is a lot of sensitive data that you, as a fintech or financial services product provider, hold. So ensuring at all times that you are following the DPDP guardrails, I think, is again important. This is just a start, we’ll evolve, but I think it’s a good thing that we’re following the
Thank you, Ashutosh. Turning now to the person who’s actually deploying the money, which is Harshil. That’s a pointy edge. Do you really believe that this is AI’s big moment in finance? I gather at Razorpay, you are rolling out AI-based payment solution models. How do you think this will transform the payments landscape?
First of all, just from a back-end usage perspective, like my colleague spoke about, I think finance typically deals with large volumes of data. Large volumes of data are generally harder for humans to really skim through; we always have to use machines and software to run through them. AI makes that job much, much easier. Anywhere large volumes of data have to be interpreted and inferences have to be drawn, I think you need systems to do that. AI is a system that allows you to do it over far more data points than was possible in older systems. You can do only so much analysis on Excel sheets or other software, but with AI you can do 1000x more.
So I think just this advantage, for things like underwriting and risk management and identifying fraud and the multiple things that the finance ecosystem has to do, becomes increasingly important. And I think that’s why finance has been one of the earliest adopters: it’s just natural, because the system is so much better than the previous systems. Coming to payments, I think one of the things that we’ve done is we’ve taken a very early bet on agentic commerce, and the reason is fairly simple: there are 300 to 400 million Indian consumers who are on UPI, on digital payments, today. Less than 200 million of those actually do shopping online. But if you peel it even further, and this is based on data that we see at Razorpay, less than 10 million of those users do 70% of all commerce in India.
Just 10 million, in a country of a billion and a half, do 70% of all commerce online. And that’s because, like he said, the commerce systems that we have built so far are not natural to most people in India. So we’ve built apps, we’ve made all the access available, but while the access is there, the accessibility is missing. Because Indians don’t buy stuff the way Americans do. The way we have built our apps is the way Americans shop. It’s like a supermarket. Everything is available; you pick and choose yourself. Indians shop with retailers, where you go and talk. You say, hey, I want to buy this. He tells you, hey, why don’t you buy this, and so on.
We are conversational in commerce. And that's why the app ecosystem we have so far has only penetrated 10 million, maybe 15 million. The rest of India needs conversations. Take travel as an example. There are OTAs available everywhere, yet $50 billion of travel is purchased through agents on the ground, because people want to talk before they make a booking. 95% of insurance in India is sold through offline brokers. There's Policybazaar, and there are so many platforms available which will give you far cheaper insurance and will not mis-sell to you, but people still trust their local insurance broker. Because Indians want to converse before they buy. They want to ask 20 questions about what they're buying, and that's hard to do in the apps we have so far.
And I think agentic commerce is the next wave, which will unlock commerce for the next billion people who, in spite of all the apps being available, are not really shopping or consuming online. They may be paying their bills online, but that's it, just because they don't want to stand in line. Everything else they're still doing through offline channels. If we can bridge that gap through agentic commerce, which is voice-first, multilingual, and conversational, I think we can unlock commerce for a large number of Indians who have not properly come online.
Thank you, Harshil. I think the next angle I'd like to touch upon is elevating deployers as key custodians of trust. So, Suvenduji, the RBI has traditionally been ahead of the curve compared with some other sectors, thanks to key initiatives you've promulgated such as the FREE-AI Committee and its very progressive policy recommendations. If I may ask, is there a distinction in your approach to regulating AI developers versus AI deployers?
See, under the mandate given to the Reserve Bank of India, under the Reserve Bank of India Act or the Banking Regulation Act, our remit extends only to the regulated entities: banks, non-banking financial companies, fintechs, and so forth. Model developers would strictly fit into the IT or technology companies, so within our official mandate we really cannot regulate or prescribe rules for them. What we are looking at is the deployment point of view. So our guidance, and I would refrain from using the word regulation in this context, is directed towards the deployers, which are the regulated entities.
And some of these are already in place through various guidelines; we have talked about IT outsourcing, third-party dependencies, and also customer engagement and things like that. So what we are looking at is how the regulated entity remains accountable. Once the regulated entity is providing a service to a customer, it is the complete responsibility of the entity to ensure transparency and accountability in the way the customer engages with an AI system or service. If I may loosely put it, a black box is typically what is associated with AI systems: you really do not know what happens inside, and the result is produced.
But as far as the regulated entity's dealings with customers are concerned, we would like this to be not a black box but a glass box. Customers should know completely what they are getting. When they are engaging, they should be clearly told upfront that they are engaging with an AI system, and if they choose, they should have the freedom to opt for a non-AI-based engagement. Similarly, for accountability, the institution should devise its audit systems to capture the incremental risks arising out of AI. How does bias get removed? Is there model drift? Is there model degradation? Does it get addressed periodically?
Those kinds of checks and balances regulated entities need to put in place as part of their board policy, along with things like understandability by design, so that the process itself ensures implementation. These are some of the things we have talked about. And over a period of time, we would like this to get addressed, refined, and embedded across their processes.
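The periodic checks described here — bias, model drift, degradation — are commonly operationalized with distribution-shift metrics. A minimal illustrative sketch follows, using the Population Stability Index; note the 0.1/0.25 thresholds are conventional industry rules of thumb, not RBI prescriptions, and the baseline data is a made-up example.

```python
# Illustrative model-drift check using the Population Stability Index (PSI).
# Assumes model scores are normalized to [0, 1); thresholds are rules of thumb.
import math

def psi(expected, actual, bins=10):
    """PSI between a training-time score sample and a production sample."""
    edges = [i / bins for i in range(bins + 1)]
    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        # Share of scores in this bucket, floored to avoid log(0).
        e = max(sum(1 for s in expected if lo <= s < hi) / len(expected), 1e-4)
        a = max(sum(1 for s in actual if lo <= s < hi) / len(actual), 1e-4)
        total += (a - e) * math.log(a / e)
    return total

def drift_status(value):
    if value < 0.1:
        return "stable"
    if value < 0.25:
        return "moderate drift - investigate"
    return "significant drift - retrain/review"

# Identical distributions show no drift; a shifted one trips the alarm.
baseline = [i / 100 for i in range(100)]
shifted = [min(0.5 + s / 2, 0.99) for s in baseline]
assert drift_status(psi(baseline, baseline)) == "stable"
assert drift_status(psi(baseline, shifted)) == "significant drift - retrain/review"
```

In practice a deployer would run such a check on a schedule, per model, and log results for the audit trail the RBI guidance anticipates.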
Thank you, Suvenduji. Deployers are also fast emerging as key custodians of trust in the AI ecosystem, and frankly, for large financial services firms such as J.P. Morgan, there is a responsibility to the global economy to get AI integration right. So how is J.P. Morgan positioning itself in this debate?
Well, I think AI is not made useful unless it's deployed, and it can't be deployed at scale without trust and transparency. So the way we're thinking about these questions really rests on the strengths of the culture of risk management and oversight that we have grown into in financial services, deploying technology of all sorts, not just AI, but certainly AI more recently, as I mentioned, and having a use-case focus on the risks entailed in every single one of our deployments. A lot of the lessons learned in financial services risk management are applicable widely to other sectors, as we've talked a little bit about this afternoon, including AI lifecycle oversight and management, model risk management guidelines and principles, and the principles and practices of real transparency and auditability that we've spoken to up here, and many, many others.
And so, as we've spoken to, banks and financial service organizations are uniquely positioned in many ways, given the nature of the data estates we sit on top of, the necessity of the business model, customer demand, market demand, and a host of other issues that surround the innovation envelope here. But I think the risk management practices we have are a huge strength there, too. So, yes, I would say all of that is really key to engendering trust with customers and making sure we're doing right by the products and services we're delivering to them.
providing what information they need to fill in while opening an account, those kinds of summarization effects may not be subjected to a very elaborate degree of scrutiny or risk testing. That is what I would feel personally, but yes. And just to make this more of a conversation, I'll add one additional point, which is that it's not just about the data; it's about the information that we're getting from the data.
If you're a small retailer competing with a large company, say a new supermarket has opened opposite you, it's hard to compete, because you don't have the intelligence available to the large supermarket about what products to stock, what to deploy, what marketing ideas to use, and so on. But now you can really open the ChatGPT app, ask it to prepare a business plan for you, ask it how do I fight this, and it can really help you compete. So the advantage of having intelligence on demand balances the scales compared with what was available before, because it reduces the cost of intelligence.
A large company could always afford that intelligence, but now it's available to everyone. Another example: let's say it's a farmer on the ground who is unable to figure out which crop he should purchase this season. I met companies recently who are essentially doing that: deploying AI models to be available to farmers on the ground, so you can ask it, and it can tell you information that is generally not available to you. So that's the general side of things. Now, coming to the finance side, one of the biggest problems in finance is mis-selling, or fraud. For example, my dad is 70 years old, and I've told him, hey, if you're making any expensive purchase decision, just give me a call.
I don't know if you're being defrauded, if you're facing a digital arrest scam, if you're being sold insurance you don't need, or a financial instrument you don't need. But AI allows me to put something smarter than me in his own pocket, so he doesn't need to call me now. I taught my dad how to use ChatGPT, and now he opens it up and asks it, in his voice, I'm going to buy this; should I buy this or should I not? I can imagine that a year or two from now, all of us will have an AI agent who is essentially your assistant.
So when you're shopping for something, it's searching for the best prices online. When you're buying something, it's checking the features: is this the best product, or is something else? It's doing research on Reddit, doing research on Twitter, telling you, hey, don't buy this, buy that. Or: you're on a website which is clearly mis-selling, which is fraudulent, the price looks too good to be true, so don't buy from here. Having that intelligence available to every person on demand is a massive advantage, and I think its impact on society will be fairly positive. People are worried about frauds happening because of AI.
And I think that's true in the short term, while the ecosystem is getting prepared. But in the longer term, frauds and mis-selling and all of that will go down significantly, because everyone will have an intelligent agent who is extremely smart and can tell things far better than a human can. So I think that can really bring a massive change.
Just in case, Harshil, do ask your dad to be aware of hallucinations…
Thank you, Harshil. So innovation and commitment are key in any new technology, as we all know. Ashutosh, what are some of the promising business models you are excited about in fintech and the AI space? Which ones do you see gaining more traction in the Global South? And in your view as an investor, do some key gaps still exist which are currently unaddressed and could benefit from an AI solution?
I'm always excited about interesting ideas, Bharat. That said, adoption is happening all across the subsectors of financial services, and in subsectors where India has naturally been at the forefront of innovation, payments come to mind; UPI is a very good example. I think India is leading the innovation wave even with the advent of AI. Right about the time the Indian e-commerce platforms were getting connected with the large foundational models, at about the same time, Indian payments companies were launching products, as Harshil said, that could enable you to buy from within the model, or even within the chat experience you are having in the e-commerce app, Swiggy or Flipkart or whichever.
So within payments, we are at the forefront. Talking generally, the most use of AI I see today is in two areas. One is productivity; this is related to the unit economics point I made previously, and I think that's happening. But more importantly, also in customer experience, and I'll give you two examples. One is in UPI, where we are now moving from the OTP world to a biometric world: you don't need an OTP; just using your biometrics, you can make a payment. In part, that is enabled by AI. And imagine how much nicer the customer experience will be, rather than waiting for something to come to you.
In lending, almost 60 to 70% of collections for the first 30 days have now moved to an AI-led agent. As humans, we get irritated calling 20 people all day; by the end of the day the human agent is upset, the customer is upset, and the collection is not happening. Whereas an AI agent can be empathetic. The agent will call you and can remember: this is the time when Ashutosh is free, let me call him then. So we are seeing a lot of movement in the customer experience domain as well. As for gaps, one thing where I feel India is slightly behind is that the West has probably 50 or 60 years of customer data, whereas in India, UPI, credit cards, all of that is a new phenomenon. So there is no easy way for us to get to the levels of underwriting that the West may enable with AI; that availability of multi-cyclical, deep data may be missing. We have a lot of data, hundreds of millions of customers, but the depth of that data is something that I think we need to consider.
Well, as they say, data is the new gold, so you need to keep it with you as much as possible, and I think that is going to be challenging for a country of a billion and 400 million. So, Suvenduji, are there any engagement pathways the RBI is using to engage and partner with the industry to promote AI adoption in finance? And in the Indian startup ecosystem, are there any specific initiatives you've seen to promote AI adoption that the banking sector can support?
Yes, good point. First of all, during the last couple of years, we have had multiple engagements. In fact, we have a scheduled monthly engagement with fintechs, titled FinQuery and Finteract. These events take place at very regular intervals, across cities and through a hybrid channel as well, and roughly 2,000-plus entities have engaged with us in the last one and a half years. Specifically on AI, we did a dipstick survey across close to 600 entities, including banks and NBFCs, and a deep engagement of about one hour each with more than 75 entities, to understand their adoption, the areas where they see potential implementation, and the challenges they are witnessing.
So there is constant engagement. After the FREE-AI Committee report was released on our website in August, we have had around three rounds of consultations with various stakeholders, including fintechs, to take their inputs on board. It's a continuous process. I would also like to draw attention to the regulatory sandbox framework, which has been in place since 2019; entities are welcome to partner with us and experiment under the regulatory sandbox whenever they require any regulatory dispensation or relaxation. And as articulated in our recommendations, one of the key constraints we see, especially for the smaller fintechs, is the lack of access to affordable compute infrastructure, as well as the lack of access to data on which they can innovate and build models. So this is top of mind, and we are committed to designing and operationalizing what we would call an AI sandbox. That's not exactly a regulatory sandbox, but it will provide access to data and compute, with the overall aim of democratizing AI across smaller institutions. A bank like J.P. Morgan or State Bank or HDFC may have enough data, bandwidth, and resources to build their models, but what about the smaller fintechs and other entities?
So with that vision, we would be operationalizing the AI sandbox, which would ensure these entities have access to those resources to innovate. On top of that, we ourselves are building models like MuleHunter.AI, which is already implemented across 26 banks and is getting implemented across other entities as well. This engagement is a continuous process, and we would like them to partner with us, submit proposals, and work with us. We also expect the industry bodies, like the self-regulatory organizations, one of which has already been recognized, to come up with toolkits or benchmarking services against which AI models can test themselves, to see whether they are bias-free and meet the expected transparency standards.
So it is expected that the fintech industry itself comes up with those kinds of standards, benchmarks, and toolkits to support innovation.
Thank you. As we all know, regulatory engagement is critical to promoting innovation. So, Harshil, for a company such as yours, what are the key regulatory challenges you are facing in the deployment of AI in finance? How does your engagement with government and regulatory bodies address these? And do you see any public-private partnership model which could help take the industry to the next level?
See, I think the core aspects of regulation, as sir said, generally don't go into technology, or which technology to use. There are general principles of regulation, and you can use any technology while applying the same principles. In most cases we have been fairly successful in deploying AI models while meeting the requirements of regulators. One of the few areas where it sometimes becomes a challenge is that we have a very strong data residency requirement in India, which is rightly so, and a lot of AI models coming from the West don't meet India's data residency requirements today, while the open-source models are largely coming from China, which makes them harder to deploy.
So we don't have enough deployment of the cutting-edge models in India data centers today, and that sometimes delays deployments, because as a regulated company we can't really use them. The good part is that three language models from India were announced at the AI summit today. That can be a good starting point for financial companies in India who want to deploy models within Indian data centers and within Indian boundaries, and then we are hoping the global companies will bring some of the cutting-edge models to Indian data centers as well, so they can be deployed. That's one challenge, on the infrastructure itself: the cutting-edge model infrastructure is not available. We can use those models for coding and for multiple internal purposes, but we can't use them for anything that touches customer data or PII until they are deployed in Indian data centers, and hopefully that is going to change. The second aspect, related to it, is that for a financial company, as she said, the biggest challenge is controlling where the data goes and where it flows out. AI models, as somebody said earlier, are a black box.
Once the data enters, you don't know where it comes out, or when, and drawing clear boundaries on that is hard. That is one big challenge with LLMs specifically; other forms of AI work fine, because there are specific, targeted models you can apply where those guardrails are available. LLMs just don't have guardrails on where data goes in and where it comes out. The third challenge is hallucinations. Anything to do with financial data, trust is very, very critical. I'm okay if the system fails 10% of the time, but it should not be wrong 10% of the time. It's okay if the system says, hey, I can't do this analysis.
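The data-boundary concern raised here is often handled with a pre-LLM guardrail that strips identifiers before any text crosses the regulated perimeter. Below is a minimal, hypothetical sketch — the regex patterns, placeholder names, and example message are illustrative assumptions, not any company's actual data-loss-prevention system, and real deployments need far broader coverage.

```python
# Hypothetical pre-LLM guardrail: redact common Indian PII patterns so raw
# identifiers never leave the regulated perimeter. Illustrative only.
import re

PII_PATTERNS = {
    "AADHAAR": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),  # 12-digit Aadhaar number
    "PAN": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),         # PAN card format
    "PHONE": re.compile(r"\b[6-9]\d{9}\b"),               # Indian mobile number
}

def redact(text):
    """Replace recognised PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

msg = "Customer ABCDE1234F (Aadhaar 1234 5678 9012, mobile 9876543210) disputes a charge."
safe = redact(msg)
assert "ABCDE1234F" not in safe and "9876543210" not in safe and "5678" not in safe
```

Only the redacted string would then be sent to an externally hosted model; the mapping back to real identifiers stays inside the institution's boundary.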
But if it gives a wrong analysis, and you use it as a source of truth and act on it, and you deliver that information to the customer, saying a commitment is successful when it actually isn't, even if it happens 1% of the time, it creates a massive issue for you. So that's the third piece, and it's less to do with regulation; it's just what is expected of financial players, that you can't be saying something that is not true. And LLMs by default can say things which are not true, and even if that happens in 1% or 2% of cases, it can become a massive liability risk for financial companies.
So those are the three big aspects, and solutions are available for some of them. The first is fairly easily solvable; global companies will probably solve it, or Indian sovereign models will get there. The second is partly solvable, because you can put guardrails around it and use the right kind of AI models where that is possible. The third is a fundamental problem of how LLMs work, so that part is going to be harder to solve. Yes, there are newer models which hallucinate less, but as I said, even if a model hallucinates less than 0.1% of the time, I still can't deploy it until I'm certain about it.
And I think that part will require us to either use alternate means or wait for LLMs that can solve that problem.
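The "okay to fail, not okay to be wrong" principle above maps naturally to a verify-or-abstain guardrail: an LLM's claim is surfaced to the customer only if it matches an authoritative system of record. A hedged sketch follows; the `ledger` dictionary and the extracted claim are made-up stand-ins, not a real payments API.

```python
# Sketch of a verify-or-abstain guardrail: never assert a transaction status
# that the system of record does not confirm. Ledger data is illustrative.
ledger = {"TXN-1001": {"status": "pending", "amount": 2500}}

def answer_customer(txn_id, llm_claim):
    """Return a verified answer, or abstain and escalate to a human."""
    record = ledger.get(txn_id)
    if record is None or llm_claim.get("status") != record["status"]:
        # Abstaining is an acceptable failure; asserting a wrong status is not.
        return {"verified": False, "reply": "escalate_to_human"}
    return {"verified": True, "reply": f"Transaction {txn_id} is {record['status']}."}

# The model hallucinates a successful payment; the guardrail abstains.
assert answer_customer("TXN-1001", {"status": "successful"})["reply"] == "escalate_to_human"
# A claim matching the ledger is allowed through.
assert answer_customer("TXN-1001", {"status": "pending"})["verified"] is True
```

The design choice is that the failure mode is always a refusal routed to a person, never an unverified assertion, which keeps the residual hallucination rate out of the customer-facing path.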
Ashutosh, you're looking at investing in companies across the spectrum, not only in finance but in other areas using artificial intelligence. In your view, what are some of the key regulatory gaps highlighted by your portfolio companies in the fintech sector? And going forward, what progressive regulatory measures can the government consider to promote this more smoothly?
I think the RBI has in general been very progressive, not as a regulator but as a guide, in this situation. The seven sutras have been really helpful for people to understand at least what the direction of travel is. And one acceptance we need to make is that just as we are all learning about AI and its use cases, the regulators are also learning, and things are changing fast, so the end state of what the regulation looks like may be very different. I have a slightly different policy ask, adding to what Harshil was saying: compute for my companies is a bigger problem than regulation, and so are researchers. I don't think we can solve this through regulation or policy, but since you were asking what companies struggle with, those two things are the bigger problems at this time.
Thanks. I'm conscious of the time, so I will use my moderator's prerogative to ask one final round of questions to all our distinguished panelists. What is one big bet you would like to take on how AI will transform finance in the next five years? We start with Suvenduji.
Okay. I know time is really up, but yes. It's not a bet, it's a wish list rather, and I'm glad Ashutosh has already covered some of it in a very elaborate way. One thing I would like to see is how AI can bring about a substantive improvement in financial inclusion: bringing people to formal institutional credit through alternate data analytics and new underwriting models. That will be a big unlock for a country like India. The second aspect I would like to emphasize, which Harshil has also touched upon, is that all our fintech apps are now designed for very, very digitally savvy people.
How do we use AI to bring language-based, voice-based banking, conversational banking and payments? I shouldn't have to fill a form; I just need to instruct, and that translates. People who are not formally educated but are comfortable using WhatsApp voice messages should be able to come on board and, using AI, come into the financial fold. We should also focus more research on assistive technologies: for example, how do we use AI to provide information to a disabled person, a person who can't see or can't hear, and let them access financial services in a more efficient manner? These are the areas where this technology is going to play a role, and we would like to see it reach the point where it really bridges the so-called digital divide, which is otherwise at risk of widening.
We should bring it back to that, and AI can prove pivotal there. I am very, very optimistic about this, but a lot of work needs to be done in these areas.
Thank you. Harshil?
I completely agree. The ability to bring the cost of servicing down significantly, so that you can deliver personalization at an N of 1, at an individual level, can have varied impacts. Typically in India, for example, when HNIs open a bank account, you don't fill a form; a person comes to you, fills the form for you, just asks for the five documents and your signature, and it's done. But the one who needs this most is the villager, because he really can't fill a form, and yet he's the one asked to stand in line and fill one. AI can allow us to deliver that experience to the villager on the ground. I think that is going to be the one biggest change in finance: the cost of servicing coming down drastically, personalization happening at an individual level, and voice-based interactions driving it.
And as somebody said earlier, that's what's natural to us, that's what's natural to Indians: making it all voice-based.
Ashutosh?
I think AI-led financial services leading us to Viksit Bharat would be my bet.
I think that's an aspiration for all of us. And Tara, as the one lady on the panel, the last word is yours.
I would underscore all the answers already provided. The financial inclusion potential, the accessibility potential here is massive. Imagine a world in which we can not just expand the credit envelope but put a financial advisor in every single person's pocket, something only the wealthiest in society today can afford. So I look forward to that world coming into being.
Thank you.
And if I may slip in a last word: language. India is a country with diverse languages, and we can leverage AI to play to our languages.
Well, I'd like to thank our distinguished panel for a truly enlightening discussion. The topic was supercharging AI adoption in the Global South, and I think many of the thoughts of this panel will go a long way towards achieving that goal. Thank you very much once again.
Event“John Tass‑Parker leads policy partnerships at JPMorgan Chase and is a key voice in discussions about AI trust in finance.”
The knowledge base lists John Tass-Parker as leading policy partnerships at JPMorgan Chase, confirming his role in AI-related finance discussions [S1].
“In financial services, trust is becoming a measurable and central component of business models.”
A World Economic Forum source notes that trust is now measurable through provenance, authenticity and verification, emphasizing that “it’s going to be about trust” in finance [S84].
“The RBI adopts a technology‑neutral, principle‑based framework for AI regulation, emphasizing responsible adoption over prescriptive rules.”
A session summary highlights that regulatory approaches are leaning toward a technology-neutral legislative stance, matching the RBI’s principle-based framework description [S88].
“The RBI warns that widespread AI use in banking could create financial stability risks, prompting a responsible‑AI stance.”
The RBI Governor has publicly highlighted AI-related risks to financial stability in the banking and private-credit markets, supporting the claim of a risk-aware, responsible-AI approach [S87].
The panel shows strong convergence on the need for trustworthy, auditable AI governed by principle‑based, tech‑neutral regulation; on AI’s role in expanding financial inclusion through conversational and voice‑first interfaces; on productivity gains; and on the importance of regulator‑industry collaboration via sandboxes and ongoing engagement.
High consensus – the speakers from regulators, fintechs, and large banks largely agree on the same strategic priorities, suggesting that coordinated policy actions and industry initiatives are feasible and likely to accelerate responsible AI adoption in the Global South finance sector.
The panel largely agrees that trust, governance and regulator‑industry collaboration are essential for AI in finance. However, clear fault lines emerge around (i) the extent of automation versus human oversight, (ii) how strictly data‑residency and infrastructure constraints should shape regulation, and (iii) the acceptable level of AI error. These disagreements reflect a tension between a rapid, innovation‑first agenda and a risk‑averse, compliance‑driven stance.
Moderate to high – while the shared goal of trustworthy AI is strong, divergent views on risk tolerance, regulatory scope and automation depth could slow consensus on concrete policy measures, potentially leading to fragmented implementation across the Global South.
The discussion was anchored by John Tass‑Parker’s framing of legitimacy over capability, which set a trust‑centric agenda. The regulator’s tech‑neutral, innovation‑first stance (Suvendu) and its concrete proposals (glass‑box transparency, AI sandbox) acted as pivotal turning points, steering the conversation from abstract policy to actionable governance. Panelists then layered depth by presenting high‑impact use‑cases—credit underwriting for thin‑file borrowers, agentic commerce for mass‑market payments, and personal AI assistants—to illustrate how trust translates into real‑world value. Each of these insights sparked new sub‑threads (risk management, infrastructure constraints, inclusion) and collectively shaped a narrative that moved from regulatory philosophy to concrete pathways for AI‑driven financial inclusion in the Global South.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.