Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion
20 Feb 2026 13:00h - 14:00h
Summary
The summit’s “Adoption and Acceleration of Artificial Intelligence” panel brought together leaders from philanthropy, finance, and government to discuss how AI can be deployed equitably and responsibly worldwide [5-8][9-14]. Moderator Rudra Chaudhry framed the discussion around the tension between policy and large-scale adoption, citing recent calls from India’s prime minister and France’s president for responsible diffusion [22-27].
Rwandan Minister Paula Ingabire explained that Rwanda adopts an adaptive, use-case-driven regulatory posture, building rules only after concrete AI applications reveal specific risks, rather than imposing abstract frameworks [35-44]. She emphasized that partnerships must include capacity-building, co-development, and data-sovereignty safeguards such as a national data hub and a pre-existing data protection law [45-53][54].
When asked about a global AI compact, Ingabire affirmed its feasibility but stressed that standards must be contextualized to diverse cultural and linguistic settings and tied to the concrete problems nations aim to solve [61-66]. Terah Lyons noted that while the fundamental policy questions raised during the Obama administration (fairness, transparency, interoperability) remain unchanged, the field has shifted from theoretical debate to the applied challenges faced by deploying organizations [71-80]. She argued that the hardest issues are now human and institutional, requiring trustworthy, responsibly scaled AI that delivers real value to users rather than purely technical breakthroughs [82-88].
John Palfrey, representing the MacArthur Foundation, reiterated that AI should serve humanity, calling for stable regulatory regimes that keep humans at the centre and for philanthropy to fund civil-society voices that can shape those rules [95-99][101-108]. He highlighted that the foundation has mobilised over a billion dollars in AI-focused philanthropy, underscoring the sector’s role in supporting research, governance, and inclusive innovation [110-121].
Terah added that the finance sector’s long-standing risk-management experience offers a model for use-case-level governance, and she advocated for greater regulatory harmonisation to enable consistent global deployment [156-169][170-174]. Both speakers called for broader multi-stakeholder participation, urging that future panels include representatives from retail, energy, and manufacturing to showcase concrete value creation for citizens [179-183].
Ingabire described Rwanda’s concrete AI benefits in health, education, and agriculture (improving diagnosis, lesson planning, and farmer data services), while noting that financial sustainability will be measured through service quality rather than direct OPEX returns [129-146]. She also stressed the importance of measuring impact, expanding South-South cooperation, and hosting future summit activities in Kigali to ensure African voices shape the emerging AI governance landscape [205-216].
The discussion concluded that coordinated, use-case-specific regulation, inclusive partnerships, and sustained philanthropic and financial support are essential to realise AI’s promise while managing its risks on a global scale [55-60][156-174][179-186].
Keypoints
Major discussion points
– Rwanda’s adaptive, use-case-driven regulatory model and capacity-building partnerships – The minister explained that Rwanda prioritises identifying high-impact AI use cases first and then crafts specific regulations, rather than imposing abstract rules, and that partnerships are structured to transfer skills and ensure local ownership [35-44][45-53].
– Feasibility of a global AI compact that respects diverse contexts – Rudra asked whether a worldwide agreement on AI risks is possible, and Paula responded that a compact can exist but must embed non-negotiable shared standards while allowing contextual adaptation for each nation’s problems [59-66].
– Human-centred AI, trust and responsible diffusion – Both John and Terah stressed that AI should serve people, not the opposite, and that the hardest challenges are not technical but about making AI trustworthy and useful in everyday life, which is essential for broad adoption [95-98][71-80][82-88].
– Philanthropy and multi-stakeholder collaboration as a catalyst for responsible AI – John highlighted the need for sustained philanthropic funding to give civil society a voice, to support research and implementation, and to bridge gaps between innovation and regulation [101-110][120-121].
– Regulatory harmonisation and risk-management at scale – Terah described how a global financial institution manages AI risk at the use-case level, stressed the importance of sector-specific oversight, and called for cross-border regulatory alignment to enable safe, large-scale deployment [156-169][172-174].
Overall purpose / goal of the discussion
The panel was convened to explore how AI can be adopted and accelerated responsibly worldwide, sharing concrete experiences (e.g., Rwanda’s approach, financial-sector risk management) and debating the need for common governance frameworks, funding mechanisms, and multi-stakeholder collaboration that can translate policy into real-world impact.
Overall tone
The conversation began with a formal, courteous opening and an optimistic framing of AI’s potential. As the dialogue progressed, the tone became more probing and analytical, with speakers questioning feasibility (global compact) and highlighting challenges (trust, regulation, financing). Throughout, the tone remained constructive and forward-looking, ending on a collaborative note that invited continued cooperation and concrete next steps (e.g., South-South cooperation, future summit venues).
Speakers
– John Palfrey – President of the John D. and Catherine T. MacArthur Foundation; law professor; expertise in philanthropy, AI policy, and law.
– Rudra Chaudhry – Vice President of Observer Research Foundation; moderator of the panel; expertise in AI policy and governance.
– Speaker 1 – Opening host/moderator of the summit; specific role or title not provided.
– Terah Lyons – Managing Director and Global Head of AI and Data Policy at JPMorgan Chase; expertise in AI policy, finance, and risk management.
– Paula Ingabire – Minister of ICT and Innovation, Rwanda; expertise in digital governance, AI adoption, and data sovereignty.
Additional speakers:
– Stephen Byrd – Global Head of Thematic Research at Morgan Stanley; expertise in investment research and AI market assessment.
Opening & Panel Introduction – Speaker 1 thanked the host, invoked the “AI for all” vision, announced a found RuPay card, and introduced the panel: John Palfrey (MacArthur Foundation), Terah Lyons (JPMorgan Chase), Her Excellency Paula Ingabire (Rwanda), and moderator Rudra Chaudhry (Observer Research Foundation). Stephen Byrd was named in the introduction but did not speak. [1-18]
Framing the Discussion – Rudra opened his 25-minute segment by describing the tension between policy and large-scale adoption and citing recent calls from India’s prime minister and France’s president for responsible AI diffusion in the Global South. He then asked how Rwanda balances governance with population-scale deployment. [19-34]
Rwanda’s Adaptive Regulatory Approach – Paula explained that Rwanda follows an “adaptive” strategy built around concrete use-cases. The government first identifies applications that can deliver the greatest societal benefit, then crafts use-case-specific regulations that evolve as evidence accumulates. Partnerships are required to co-develop solutions and train Rwandan staff, creating a closed loop between capacity-building and regulation. Rwanda is also establishing a national data hub and has enacted a data-protection and privacy law to safeguard data sovereignty. [35-54]
Possibility of a Global AI Compact – Rudra asked whether a global AI compact is realistic. Paula affirmed its feasibility, stressing that any agreement must contain non-negotiable shared standards while allowing cultural, linguistic and contextual adaptation for each nation’s specific problems. [55-66]
Historical Perspective on AI Policy (Obama Era) – Terah traced the origins of modern AI policy to the Obama administration, which first raised issues of fairness, transparency, bias mitigation and interoperability. She noted that the field has moved from theoretical debate to applied challenges. [67-78]
Current Hard Problems – Human & Institutional – Terah argued that the toughest challenges now are human and institutional: building trust, ensuring responsible scaling, and delivering real value to organisations and end-users rather than pursuing purely technical breakthroughs. [79-88]
Philanthropy & Human-Centred Regulation – John stressed that AI must serve humanity and called for a stable, human-centred regulatory regime to prevent the technology from being treated as “magical” or ungovernable. [89-99]
Funding the Ecosystem – John outlined the philanthropic sector’s contribution, citing the $500 million “Humanity AI” fund and a comparable commitment to the AI Collaborative, together amounting to over $1 billion for AI-for-humanity projects that support governance, research and inclusive innovation. [100-121]
Finance-Sector Experience & Need for Harmonisation – Terah described JPMorgan’s roughly $20 billion annual technology spend and a decade-long AI deployment journey that has progressed from analytics to large-language and agentic models. She highlighted sector-specific risk-management expertise and the importance of regulatory harmonisation across jurisdictions for multinational operators seeking deployment at scale while maintaining consistent safeguards. [122-174]
Rwanda’s Value-Based Impact Metrics – Paula argued that AI value should be measured in health, education and agriculture outcomes rather than pure monetary ROI. She cited decision-support tools for community health workers, AI-enhanced lesson-planning for teachers, and data services for farmers that boost productivity and income. She emphasized that over 70% of Rwanda’s population are youth, who are being trained to develop and maintain these solutions, reinforcing local ownership and trust. [175-190]
Future Directions & Requests
– Terah expressed a desire to see more “real-economy” deployers (retail, energy, manufacturing) featured on future panels. [191-193]
– John suggested that collaborations between philanthropy and frontier AI labs would be “exciting,” but did not commit to a specific partnership. [194-198]
– Paula invited the summit organisers to consider hosting a future meeting in Kigali to deepen South-South cooperation and amplify African perspectives, and called for the development of impact-measurement metrics that quantify AI’s benefits across sectors. [199-207]
– Rudra closed the session by thanking the panelists and the organisers for the discussion. [208-210]
Key Consensus Points – All participants endorsed: (a) an adaptive, use-case-specific regulatory approach anchored in human-centred values; (b) partnership models that embed capacity-building and data-sovereignty; (c) a flexible global compact with core non-negotiable standards; (d) the need for clear, sustainable financing mechanisms, whether philanthropic, commercial or OPEX-based; and (e) the development of systematic impact-measurement metrics to inform evidence-based policy. [35-54][55-66][79-88][100-121][122-174][175-190]
In sum, the panel highlighted that responsible AI diffusion depends on adaptive regulation, locally grounded partnerships, a shared yet culturally sensitive global framework, sustainable financing, and robust impact measurement. The forward-looking agenda calls for continued multi-stakeholder engagement, regulatory harmonisation, and South-South collaboration to ensure AI delivers equitable, trustworthy benefits worldwide while avoiding a false binary between regulation and innovation. [208-210]
Thank you so much, Your Excellency, Eta Bush, for your valuable insights and for elevating the summit. And it’s really interesting to listen to the perspectives of countries like Sweden, because when we talk of AI for all and global cooperation, the role of each and every country becomes very, very important. Ladies and gentlemen, before I move on, I need to announce that there’s a RuPay card which we found. If somebody has lost this RuPay card, though I don’t know how much money is there, kindly come to me and collect it from me. Thank you. And ladies and gentlemen, now we move to the next panel discussion, which is on adoption and acceleration of artificial intelligence.
The panelists joining us represent some of the most thoughtful voices on how AI is being built and adopted around the world. Mr. John Palfrey is the president of the John D. and Catherine T. MacArthur Foundation, one of the world’s most influential philanthropies, where he has championed the idea that technology must serve the public interest. His perspective on how AI can be deployed equitably, not just efficiently, is essential to the conversation. Ms. Terah Lyons is the managing director and global head of AI and data policy at JPMorgan Chase. At one of the world’s largest financial institutions, she is navigating the frontier where AI meets regulation, risk and responsible deployment, ensuring that AI in finance is not just powerful, but trustworthy.
Her Excellency Paula Ingabire is the minister of ICT and innovation for the government of Rwanda. Under her leadership, Rwanda has emerged as one of Africa’s most ambitious digital economies, proving that visionary governance can leapfrog traditional development pathways. And we also have Mr. Stephen Byrd as a panelist, who is the global head of thematic research at Morgan Stanley, bringing the investor’s lens to the question of which AI bets are real and which are hype. And this discussion will be moderated by Mr. Rudra Chaudhry, Vice President of Observer Research Foundation. Ladies and gentlemen, please join me in welcoming Mr. John Palfrey, Ms. Terah Lyons, Her Excellency Paula Ingabire, and also Mr. Rudra Chaudhry. Please kindly come to the stage for this very interesting conversation, a panel on adoption and acceleration of AI.
Mr. Byrd will be joining us very soon. Thank you.
All right. Hi, everyone. There’s a good bit of distance between me and the panelists, which might be a good thing. We’ll see. We’ve got about 25 minutes, so I’m going to keep it quite swift. The general panel is about policy on the one side, adoption on the other. And I wonder if that’s actually the case. Yesterday in the inaugural, the prime minister made very clear that adoption is a huge opportunity for India and other parts of the global south. But we have to do it responsibly. President Macron made a very similar pitch in his inaugural speech. And I want to start with that framing. And I want to come to you, Minister. Rwanda is a fascinating country in general.
But you’re particularly fascinating on the African continent because you were way ahead of the AI curve in a sense. You invested in a startup ecosystem. You were looking at scale before many of us thought of use cases at scale. Give us a sense of how Rwanda manages these minefields between governance and policy on the one side and adoption at population scale on the other.
Thank you very much, Rudy, and great to see you all. I think for us, the decision has always been clear around how we leverage technology as a country to drive socioeconomic development. And so AI, like many other technologies that we’ve experimented with as a country, we took the same posture. And so the idea was figuring out how we leverage this particular technology to address societal challenges. And there were certain trade-offs that we had to make. When it comes to governance, it was a posture around, rather than try to focus more on regulating, we’d rather figure out where do we see AI creating the biggest benefits and gains for society. And then we’re able to build regulations according to the use cases that we’re implementing.
And so the regulatory posture that we take is more adaptive. And it’s one that is evidence-based, because we’re already building use cases and using them today. And so we’re able to determine what kind of regulations are needed, and they’re very specific to the problems that we are solving, as opposed to trying to create a very abstract regulatory framework, which may not necessarily address whatever risks and concerns we foresee. The second one has always been on partnerships, because that’s been key. The level of digital development that we’ve achieved as a country is thanks to the various partners that we’ve been able to attract into Rwanda. But partnerships, we also look at very closely, to determine how we make sure that these partnerships are helping us to build capacity.
So, for example, we’re not going to acquire a foreign solution, invite them to train on our data and just leave us with an application. We want them to be able to train our people, co-develop this with our people so that at least we have the skill set and the mastery of what we’re trying to deploy, which will then create that closed loop around the regulatory environment that we put in place. And last, again, I think it’s a conversation that we’ve had throughout this week around sovereignty, thinking about data sovereignty. By design, we’re building our national data hub. And we’re really making sure we understand, you know, what are the guardrails that we put in place.
We don’t want to wait for a crisis to start, you know, worrying about who is using our data, what are they accessing that for. And so we started with already putting in place the data protection and privacy law that governs how you collect, use, and process data. And that has been the foundation through which we can then start to ensure that everything that we do from a data sovereignty perspective, we’re doing it by design.
So I’m going to come back to the question on the benefits of AI for all of you, and for you, Minister, in a minute. You know, this entire summit process started with Bletchley, where I think the general philosophy was: can we come to some kind of a global compact when it comes to risk and risk aversion, when it comes to early warning systems? The institutional outcome was the AI safety institutes that were built out. Can I ask a challenging question? From your perspective, is a global compact on something like AI actually possible? Or are there norms that we should generally be thinking about and fitting into our national jurisdictions?
So I believe a global compact is possible. However, it has to reflect the different contexts, cultural, linguistic, everything. And so to a certain extent, what you’re looking at is what are some of those shared standards that we all subscribe to as countries, which are non-negotiables for everyone that is building and deploying AI products and solutions. And then obviously, you then get to contextualize it to whatever problems that you’re solving for. And so, again, it’s going to come back to what are nations deploying AI to solve for? And how do we make sure that these standards are reflective of what we’re looking to adopt through the global compact?
Terah, if I could come to you. You’re leading AI policy at J.P. Morgan, and you were in the Obama administration, in a very different office, the Office of Science and Technology Policy, way before the AI wave kind of hit us, although people have been working on AI for three decades now. Just before I come to the immediate, take us back to the second term of the Obama administration. Give us a sense of how you were thinking about AI.
Well, I would say that era was the first in which global governments started considering AI policy questions at all. And honestly, a lot of the same questions were being asked then as are being asked now. The question of global governance that the minister just spoke to, I think, was top of mind then as it is today. Questions of standards generation and interoperability were certainly part of the conversation. Issues of fairness, transparency, bias mitigation, sort of localization and other questions were all very much germane. So, you know, in many respects, the field has completely transformed, especially from a commercial perspective, given the level of investment that we’re seeing globally in the last five years, especially. But in many other respects, the foundational questions remain the same that policymakers were considering over 10 years ago.
And those questions, I think, are applicable in a lot of different directions. You know, I think one of the big differences in the current moment is that I really feel like we’ve moved from an era where these conversations have been more theoretical to an era in which they are much more applied and made much more real by the questions being asked by organizations like ours, for example, as AI deploying entities. Where the, you know, the issues of applied AI organizations are really where the rubber meets the road when it comes to these governance issues that we’re talking about from the stage and that policymakers have been considering for the last decade.
So I think if I talk to most people who’ve been to the first three summits, and I talk to them about this summit, there’s a lot of energy, there’s a lot of discussion on use cases, diffusion, getting this out to humanity, getting it out to people. And now we have to work downstream and upstream and figure out how best to do the diffusion piece. Let me ask you a question: you’ve been here for three or four days of the summit era. What’s really struck you in terms of the diffusion argument, the adoption argument? And then, if you put your policy and regulatory lens to it, what are you thinking right now?
Well, I actually don’t think the hardest questions in this field, maybe this is a controversial answer, but I’ll try it on for size here: I don’t think the hardest questions in this field are technical right now. I think they are questions of human issues and institutional issues. And I hear that no matter where I am, talking to clients and other large enterprises, speaking to governments globally, whether in New York, California, Brussels, or Delhi here this week. The hard problem really isn’t frontier advancement right now. It’s actually making this technology useful to real organizations and making it helpful to real people in their everyday lives. And core to that set of issues are the governance questions that have been so top of mind here at the summit, I think.
And questions of how we scale responsibly, how we engender trust in the technology, because in order for AI to be useful, it has to be applied. And in order for it to be applied and widely adopted, it needs to be trusted. And so these are, I think, are cornerstones of what we need to be thinking about when we’re actually thinking about the frontier of AI in many ways.
John, you run one of the most important organizations in the world, and one of the largest philanthropic organizations in the world. If there are students here, or professors in the audience, you should corner John afterwards for all sorts of things. But you’ve also got a very strong legal background. So, the same question I put to Terah: when you think of diffusion, when you think of impact use cases, and you think of what Paula said, which is we have to be adaptive about the regulatory architecture, where are you at?
Rudy, thank you. And first, let me please, on behalf of the MacArthur Foundation, congratulate our hosts in India. What a wonderful global stage to be on, to be having this important conversation. The point of view that I come from, as a law professor and as leader of a philanthropy, the MacArthur Foundation, is, of course, that we need to make the technology, the AI, work for humans and to put humans at the center. And I’ve been delighted on this main stage and throughout the summit to hear that as the focus here in India and, of course, around the world. And I think the way to do that is not to treat the AI as something magical and separate, but rather connected to all of the things that we’re trying to do.
So whether it’s lifting people out of poverty, or improving health care, or a bank providing capital as needed, we need a stable regulatory regime that makes that possible and puts humans at the center, rather than just seeking to advance the technology at all costs and then treating it as something magical, rather than as forms of mathematics and science that we have been able, through human history, to regulate so that they serve humans, not their own sake.
From your perspective in terms of philanthropy, but also from the perspective perhaps of peers that you talk to, is the current moment, with its push for adoption, for getting this out to people, changing the way you’re thinking about grantees and partners, and the philosophical way in which you’re thinking about releasing money?
Yes and no. I think there are some constants in philanthropy that are very important, and maybe more important than ever in this moment. You think about the amount of capital that is flowing towards AI and its development, mostly of course by the private sector, sometimes by sovereign wealth funds and so forth. What we need to ensure is that civil society has a voice. And of course, again, I credit our hosts for including civil society in this conversation and continuing to do that from Bletchley to today and onward. And the civil society world doesn’t come for free. Somebody has to pay for it, right?
And philanthropy has been historically the source of funding that. And I’m very impressed by the Indian philanthropic environment that is developing. We’re excited to partner with the Center for Exponential Change and others who are developing both homegrown philanthropy and ideas that are coming from India to the rest of the world. But if we don’t invest in civil society, there will be many, many fewer voices able to bring the kind of sensibility that we’re talking about to the world. It doesn’t come without actually thinking about it carefully. So we are thinking about long-term capital that is for academia, that is for organizations. And I think about, of course, the Observer Research Foundation, which you’re involved in, and the Partnership on AI, for which Terah was the founding ED.
These organizations, along with academia, are going to be able to bring the kind of sensibility that we’re talking about to the world. And they have to be supported in a stable, long-term way by philanthropy.
We’ve been able with colleagues to raise half a billion dollars for Humanity AI, an effort in the US, and close to that amount for Current AI, led by Martin Tisné and the AI Collaborative, for global efforts. So we’re over a billion dollars in commitments between these two efforts.
Minister, let me ask you a question on, you talked about the benefits of AI in Rwanda. Can you open that box up for us a little bit? You know, there are a lot of arguments about how this stuff is going to pay for itself. Use case and diffusion is all great, but is there an OPEX model or a revenue model for beneficial deployment? It needs to be sustainable over a period of time. And there’s another argument which says, when people actually start using things that are useful, and they see value in it, the rest will follow. What are your citizens in Rwanda feeling in terms of value?
So I’ll differ a little bit, because I think value cannot just be seen in monetary terms: how are we going to have the return on investment, how do we sustain this financially? It’s a good metric to use, for sure. But when I look at the use cases that we’ve already identified, one, it speaks to our government’s decision to make sure that we are delivering better services to our citizens. So whether it’s healthcare, whether it’s making sure that we’re giving quality education to our students in Rwanda, whether it’s making sure that a majority of our population, which is made up of farmers, have access to the right data and extension services that then ensure that they have growth and productivity, which will translate essentially into them being able to have more income, getting out of poverty, and building wealth for their families. But a starting point for us has always been: what problem are we trying to solve? And is AI the best way to solve for this? Or is it a combination of AI and many other technologies that can solve for that? We’re a country that has been on a journey of digital transformation for more than 20 years, and so we’ve already started to see the benefit of that. So when I look at the education use cases, we are ranging from being able to facilitate teachers with assessment tools that can help with faster and better assessment.
We’re looking at AI solutions that support better lesson planning. And so if you’re able to have better lesson planning, you’re able to deliver quality education and make sure that it’s similar across the country; those are benefits that one can easily quantify. For the health sector, we’re looking at our frontline health workers, the community health workers delivering primary health care, and giving them decision-support tools that enable them to make better diagnoses and, at the same time, reduce the burden on the health care system. We’re also looking at AI solutions that help reduce the backlog of health care referrals. Essentially, that’s also going to translate into less wastage, into better care, and even into bringing down the cost of care per person, if you look at it that way.
So for our people, they’re very optimistic. Obviously, like any other country, everyone has to wonder: okay, there’s lots of data that you’re going to be using, and a lot of it is going to be personal data. What guardrails are we putting in place? We have the data protection and privacy law that I talked about earlier. But the most important thing, even for our people, is how we are building capacity in-country, so that a lot of these things are not solutions we are acquiring from elsewhere. More than 70% of our population are in the youth bracket. It means these are already people that are very excited about technology, and if you train them the right way, they’ll also be part of building these solutions.
And so I think there’s a lot of optimism about what it can do. It doesn’t mean we’re shying away from the risks. That’s why we’re doing everything by design, use case by use case, trying to understand, for each use case that we are deploying, what risks could be unique to that particular application and how we are addressing them.
No, I think that’s fantastic. I think the way you’re thinking about disaggregated risk, rather than just one big banner sticker on top, is perhaps the way we all need to go: thinking about how each use case is risky, but also how it’s actually useful and adds value in different ways. So that’s fantastic. Keeping an eye on the clock, Terah, I just want to talk a little bit about deployment and scale. We all love diffusion; we want this stuff out to everybody. How do we get it right when it comes to deployment and scale? Because none of this is going to be easy. It’s going to require some kind of a sustainable financial model, and it’s going to require a lot of time and a lot of work, across the board and across borders. So, as someone who works on scale and deployment, give us a viewpoint.
Sure. And maybe just a few words to give a sense of scale in our context at JPMorgan Chase. We operate in over 100 countries globally. We spend close to $20 billion a year on technology. And we are investing really, really deeply in AI. So, to answer your question, one of the paradigms from which we come to this issue is certainly the unique risk management capability of finance, and of regulated banks specifically. We’ve been using AI technologies at the use case level for over 10 years, starting first with more traditional analytic techniques, moving into the era of machine learning models, now introducing large language models, and looking in the direction of agentic capabilities and beyond. And that underscores one of the points John raised earlier, which I think is important here.
You know, the sort of risk management posture, and considering what effective governance and controls look like in order to scale in the way you’re describing, is something we have built the muscles to do before. We know how to do this pretty well. And one of the superpowers we have, I think, is a sector-specific lens on regulation and oversight. That also speaks to some of the great points the minister just made with respect to really evaluating risk at the use case level: make this conversation about risk management grounded and practical, in ways that address the real ways in which AI is getting deployed at the level of individual use cases.
And then making rules of the road that are applicable to that specific context. I think that’s really crucial. The other piece of the equation, and this speaks to the point I made at the top about our global operations, is that we really need regulatory harmonization, to the extent possible, in order to allow for consistency of rules across borders. There’s been a lot of really rich conversation this week at the summit about sovereign AI as part of the global governance conversation. I think that has its own unique and important goals, but it needs to be held alongside the realization that we also need to consider what a global baseline looks like, and what clarity enables for global operators so that they can really get responsibility at scale right.
I’m going to ask you one question before I come back. What would you like to see going ahead? From this summit, the baton has been handed to Switzerland, and from Switzerland, there’s possibly another likely candidate. But what would you like this summit process to do in an institutional setting, perhaps, to keep these conversations going?
Well, I think that John’s earlier point about the need for multi-stakeholder diversity is really key. Looking across sectors, government, civil society, and industry is deeply important, and making sure all those voices are at the table is critical. A sub-point there, from my perspective, is that I would like to see more deployers sitting in seats like this one. We are one of the largest financial institutions in the world, and we use AI in really deep ways, as I mentioned before. But I want to see folks from retail and energy, I want to see people from manufacturing, I want to see folks who really represent the real economy sitting on stages like this one next year in Switzerland and speaking to how we deliver real value into the hands of customers and citizens every day using these technologies.
And John, very quickly to you, I’m going to ask you a cheeky question. The kind of philanthropy I think we require now in AI is for MacArthur to be working with a frontier lab that is in turn working with a local lab that’s deploying. Is that in your imagination?
Sure, Rudra, thank you. It’s an exciting idea, going from here to Switzerland and imagining what could come next, and what comes next for philanthropy is absolutely an important piece of the story. If you think about the way technology works, it often begets innovation in other sectors. So what’s exciting is that the technology itself can inform the way we practice philanthropy in the ways you suggest, but it can also help us figure out how to regulate better. And it turns out, of course, that regulation is not simply opposed to innovation. In fact, regulation sometimes prompts further innovation, and then this wonderful cycle can continue. So my key point would be: let’s not have a false binary.
Either you regulate or you innovate. Let’s figure out the way that the regulation and the governance drives innovation. And I think that’s an exciting idea, not just for governments, as the minister said, or for banks. It’s true for philanthropy, too, which can improve its work a little bit along the way, too.
No, bang on. And Minister, last word to you. We would love to see the summit hosted in Kigali. From your vantage point, a lot of this is about South-South cooperation, and a lot of it has been about global cooperation. What would you like to see between now and Switzerland? What can we all actively do to make this more tangible by the time we get to Zurich or Geneva or Davos or wherever it is?
I think it’s great that, since we started with the Bletchley Park convenings, we’ve gone from looking at safety and governance to impact, execution, and implementation. It would be great to start to quantify what that impact has looked like, and to create a way for these exchanges to truly happen. And I couldn’t agree more: if we have more of the people who are building and deploying some of these solutions here, we could also have some of the communities that have benefited, positively or negatively, here so we can hear their voices. As large-scale adoption of this technology happens across the world, taking this conversation into consideration is going to be very, very important.
And the last one for me is to make sure we have more voices coming from the African continent and elsewhere, so that we can balance: where are we seeing the biggest impact? Is it in emerging economies, in the middle economies, or in the big ones? And what could the nuances be as we continue to deploy massively? I think to do that, we need to take this to the African continent sooner rather than later. And we’re happy to host you.
There you are. Good offer there. Minister, John, Tara, thank you so much. Thank you for being with us at the Impact Summit. And back to the organizers. Thank you.
“John Palfrey is the president of the John D. and Catherine T. MacArthur Foundation.”
The panel description identifies Mr. John Palfrey as the president of the MacArthur Foundation, confirming his role as stated in the report [S19].
“Rwanda’s adaptive regulatory approach is built around concrete use‑cases, partnerships, and co‑creation with stakeholders.”
Additional details describe Rwanda’s emphasis on co-creation with beneficiaries and experts, and its flexible stance on emerging measures, providing nuance to the adaptive strategy mentioned [S108] and [S109].
“The moderator opened a 25‑minute segment to frame the discussion.”
The opening remarks note a 25-minute timeframe for the panel, confirming the report’s timing detail [S1].
“The moderator cited recent calls from India’s prime minister and France’s president for responsible AI diffusion in the Global South.”
The knowledge base references discussions on how France and India are building AI-related industrial and innovation bridges, supporting the claim that leaders from those countries have made recent calls on responsible AI [S102].
The panel displayed a strong convergence around four core themes: (1) the need for adaptive, use‑case‑driven regulation anchored in human‑centred values; (2) the importance of partnership models that build local capacity and involve civil‑society; (3) the requirement for global cooperation that respects national contexts; and (4) the necessity of sustainable financial mechanisms and impact measurement to drive responsible diffusion.
High consensus – most speakers reiterated similar points from different angles, indicating a shared understanding that responsible AI deployment hinges on coordinated governance, capacity building, inclusive global frameworks, and financially sustainable models. This broad agreement suggests that future policy initiatives are likely to prioritize adaptive regulation, multi‑stakeholder partnerships, and measurable impact tracking.
The discussion revealed three main axes of disagreement: (1) the design of regulatory frameworks – whether they should be adaptive and use‑case‑specific or stable and uniform; (2) the financing of AI diffusion – societal‑value‑driven models versus explicit OPEX/revenue or philanthropic funding; and (3) the measurement of AI’s value – non‑monetary societal benefits versus monetary sustainability metrics. While participants share a common vision of responsible, inclusive AI, they diverge on the mechanisms to achieve it.
Moderate to high disagreement. The divergent regulatory philosophies and funding expectations could impede coordinated action unless a hybrid model is negotiated that blends adaptive oversight with baseline stability and aligns philanthropic, governmental, and private financing while respecting both societal impact and financial viability. These tensions have significant implications for the implementation of AI policies, cross‑border cooperation, and the ability to sustain large‑scale AI deployments across diverse economies.
The discussion was shaped by a series of pivotal remarks that moved the conversation from high‑level aspirations to concrete, actionable ideas. Paula Ingabire’s advocacy for adaptive, use‑case‑driven regulation and proactive data sovereignty set the tone for a pragmatic governance narrative. Terah Lyons’ emphasis on human and institutional challenges, together with her calls for regulatory harmonisation and broader stakeholder representation, redirected the focus toward trust, scalability, and inclusive policymaking. John Palfrey’s human‑centric framing reinforced the ethical underpinnings of these arguments, while the dialogue on a flexible global compact highlighted the tension between universal standards and national contexts. Collectively, these comments created turning points that deepened the analysis, introduced new dimensions (trust, cross‑border consistency, South‑South cooperation), and shaped a forward‑looking agenda for future summits.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.