Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Panel Discussion

20 Feb 2026 13:00h - 14:00h


Session at a glance – Summary, keypoints, and speakers overview

Summary

The summit’s “Adoption and Acceleration of Artificial Intelligence” panel brought together leaders from philanthropy, finance, and government to discuss how AI can be deployed equitably and responsibly worldwide [5-8][9-14]. Moderator Rudra Chaudhry framed the discussion around the tension between policy and large-scale adoption, citing recent calls from India’s prime minister and France’s president for responsible diffusion [22-27].


Rwandan Minister Paula Ingabire explained that Rwanda adopts an adaptive, use-case-driven regulatory posture, building rules only after concrete AI applications reveal specific risks, rather than imposing abstract frameworks [35-44]. She emphasized that partnerships must include capacity-building, co-development, and data-sovereignty safeguards such as a national data hub and a pre-existing data protection law [45-53][54].


When asked about a global AI compact, Ingabire affirmed its feasibility but stressed that standards must be contextualized to diverse cultural and linguistic settings and tied to the concrete problems nations aim to solve [61-66]. Terah Lyons noted that while the fundamental policy questions raised during the Obama administration (fairness, transparency, interoperability) remain unchanged, the field has shifted from theoretical debate to applied challenges faced by deploying organizations [71-80]. She argued that the hardest issues are now human and institutional, requiring trustworthy, responsibly scaled AI that delivers real value to users rather than purely technical breakthroughs [82-88].


John Palfrey, representing the MacArthur Foundation, reiterated that AI should serve humanity, calling for stable regulatory regimes that keep humans at the centre and for philanthropy to fund civil-society voices that can shape those rules [95-99][101-108]. He highlighted that the foundation has mobilised over a billion dollars in AI-focused philanthropy, underscoring the sector’s role in supporting research, governance, and inclusive innovation [110-121].


Terah Lyons added that the finance sector’s long-standing risk-management experience offers a model for use-case-level governance, and she advocated greater regulatory harmonisation to enable consistent global deployment [156-169][170-174]. Both speakers called for broader multi-stakeholder participation, urging that future panels include representatives from retail, energy, and manufacturing to showcase concrete value creation for citizens [179-183].


Ingabire described Rwanda’s concrete AI benefits in health, education, and agriculture (improving diagnosis, lesson planning, and farmer data services), while noting that financial sustainability will be measured through service quality rather than direct OPEX returns [129-146]. She also stressed the importance of measuring impact, expanding South-South cooperation, and hosting future summit activities in Kigali to ensure African voices shape the emerging AI governance landscape [205-216].


The discussion concluded that coordinated, use-case-specific regulation, inclusive partnerships, and sustained philanthropic and financial support are essential to realise AI’s promise while managing its risks on a global scale [55-60][156-174][179-186].


Keypoints


Major discussion points


Rwanda’s adaptive, use-case-driven regulatory model and capacity-building partnerships – The minister explained that Rwanda prioritises identifying high-impact AI use cases first and then crafts specific regulations, rather than imposing abstract rules, and that partnerships are structured to transfer skills and ensure local ownership [35-44][45-53].


Feasibility of a global AI compact that respects diverse contexts – Rudra asked whether a worldwide agreement on AI risks is possible, and Paula responded that a compact can exist but must embed non-negotiable shared standards while allowing contextual adaptation for each nation’s problems [59-66].


Human-centred AI, trust and responsible diffusion – Both John and Terah stressed that AI should serve people, not the opposite, and that the hardest challenges are not technical but about making AI trustworthy and useful in everyday life, which is essential for broad adoption [95-98][71-80][82-88].


Philanthropy and multi-stakeholder collaboration as a catalyst for responsible AI – John highlighted the need for sustained philanthropic funding to give civil society a voice, to support research and implementation, and to bridge gaps between innovation and regulation [101-110][120-121].


Regulatory harmonisation and risk-management at scale – Terah described how a global financial institution manages AI risk at the use-case level, stressed the importance of sector-specific oversight, and called for cross-border regulatory alignment to enable safe, large-scale deployment [156-169][172-174].


Overall purpose / goal of the discussion


The panel was convened to explore how AI can be adopted and accelerated responsibly worldwide, sharing concrete experiences (e.g., Rwanda’s approach, financial-sector risk management) and debating the need for common governance frameworks, funding mechanisms, and multi-stakeholder collaboration that can translate policy into real-world impact.


Overall tone


The conversation began with a formal, courteous opening and an optimistic framing of AI’s potential. As the dialogue progressed, the tone became more probing and analytical, with speakers questioning feasibility (global compact) and highlighting challenges (trust, regulation, financing). Throughout, the tone remained constructive and forward-looking, ending on a collaborative note that invited continued cooperation and concrete next steps (e.g., South-South cooperation, future summit venues).


Speakers

John Palfrey – President of the John D. and Catherine T. MacArthur Foundation; law professor; expertise in philanthropy, AI policy, and law.


Rudra Chaudhry – Vice President of Observer Research Foundation; moderator of the panel; expertise in AI policy and governance.


Speaker 1 – Opening host/moderator of the summit; specific role or title not provided.


Terah Lyons – Managing Director and Global Head of AI and Data Policy at JPMorgan Chase; expertise in AI policy, finance, and risk management.


Paula Ingabire – Minister of ICT and Innovation, Rwanda; expertise in digital governance, AI adoption, and data sovereignty.


Additional speakers:


Stephen Bird – Global Head of Thematic Research at Morgan Stanley; expertise in investment research and AI market assessment.


Full session report – Comprehensive analysis and detailed insights

Opening & Panel Introduction – Speaker 1 thanked the host, invoked the “AI for all” vision, announced that a lost rupee card had been found, and introduced the panel: John Palfrey (MacArthur Foundation), Terah Lyons (JPMorgan Chase), Her Excellency Paula Ingabire (Rwanda), and moderator Rudra Chaudhry (Observer Research Foundation). Stephen Bird was named in the introduction but did not speak. [1-18]


Framing the Discussion – Rudra opened his 25-minute segment by describing the tension between policy and large-scale adoption and citing recent calls from India’s prime minister and France’s president for responsible AI diffusion in the Global South. He then asked how Rwanda balances governance with population-scale deployment. [19-34]


Rwanda’s Adaptive Regulatory Approach – Paula explained that Rwanda follows an “adaptive” strategy built around concrete use-cases. The government first identifies applications that can deliver the greatest societal benefit, then crafts use-case-specific regulations that evolve as evidence accumulates. Partnerships are required to co-develop solutions and train Rwandan staff, creating a closed loop between capacity-building and regulation. Rwanda is also establishing a national data hub and has enacted a data-protection and privacy law to safeguard data sovereignty. [35-54]


Possibility of a Global AI Compact – Rudra asked whether a global AI compact is realistic. Paula affirmed its feasibility, stressing that any agreement must contain non-negotiable shared standards while allowing cultural, linguistic and contextual adaptation for each nation’s specific problems. [55-66]


Historical Perspective on AI Policy (Obama Era) – Terah traced the origins of modern AI policy to the Obama administration, which first raised issues of fairness, transparency, bias mitigation and interoperability. She noted that the field has moved from theoretical debate to applied challenges. [67-78]


Current Hard Problems – Human & Institutional – Terah argued that the toughest challenges now are human and institutional: building trust, ensuring responsible scaling, and delivering real value to organisations and end-users rather than pursuing purely technical breakthroughs. [79-88]


Philanthropy & Human-Centred Regulation – John stressed that AI must serve humanity and called for a stable, human-centred regulatory regime to prevent the technology from being treated as “magical” or ungovernable. [89-99]


Funding the Ecosystem – John outlined the philanthropic sector’s contribution, citing the $500 million “Humanity AI” fund and a comparable commitment to the AI Collaborative, together amounting to over $1 billion for AI-for-humanity projects that support governance, research and inclusive innovation. [100-121]


Finance-Sector Experience & Need for Harmonisation – Terah described JPMorgan’s roughly $20 billion annual technology spend and a decade-long AI deployment journey that has progressed from analytics to large-language and agentic models. She highlighted sector-specific risk-management expertise and the importance of regulatory harmonisation across jurisdictions for multinational operators seeking “census-scale” deployment while maintaining consistent safeguards. [122-174]


Rwanda’s Value-Based Impact Metrics – Paula argued that AI value should be measured in health, education and agriculture outcomes rather than pure monetary ROI. She cited decision-support tools for community health workers, AI-enhanced lesson-planning for teachers, and data services for farmers that boost productivity and income. She emphasized that over 70% of Rwanda’s population are youth, who are being trained to develop and maintain these solutions, reinforcing local ownership and trust. [175-190]


Future Directions & Requests


Terah expressed a desire to see more “real-economy” deployers (retail, energy, manufacturing) featured on future panels. [191-193]


John suggested that collaborations between philanthropy and frontier AI labs would be “exciting,” but did not commit to a specific partnership. [194-198]


Paula invited the summit organisers to consider hosting a future meeting in Kigali to deepen South-South cooperation and amplify African perspectives, and called for the development of impact-measurement metrics that quantify AI’s benefits across sectors. [199-207]


Rudra closed the session by thanking the panelists and the organisers for the discussion. [208-210]


Key Consensus Points – All participants endorsed: (a) an adaptive, use-case-specific regulatory approach anchored in human-centred values; (b) partnership models that embed capacity-building and data-sovereignty; (c) a flexible global compact with core non-negotiable standards; (d) the need for clear, sustainable financing mechanisms, whether philanthropic, commercial or OPEX-based; and (e) the development of systematic impact-measurement metrics to inform evidence-based policy. [35-54][55-66][79-88][100-121][122-174][175-190]


In sum, the panel highlighted that responsible AI diffusion depends on adaptive regulation, locally grounded partnerships, a shared yet culturally sensitive global framework, sustainable financing, and robust impact measurement. The forward-looking agenda calls for continued multi-stakeholder engagement, regulatory harmonisation, and South-South collaboration to ensure AI delivers equitable, trustworthy benefits worldwide while avoiding a false binary between regulation and innovation. [208-210]


Session transcript – Complete transcript of the session
Speaker 1

Thank you so much, Your Excellency, Eta Bush, for your valuable insights and for elevating the summit. And it’s really interesting to listen to the perspectives of countries like Sweden, because when we talk of AI for all and global cooperation, the role of each and every country becomes very, very important. Ladies and gentlemen, before I move on, I need to announce that there’s a rupee card which we found. If somebody has lost this rupee card, though I don’t know how much money is there, but if you’ve lost this rupee card, kindly come to me and collect it from me. Thank you. And ladies and gentlemen, now we move to the next panel discussion, which is on adoption and acceleration of artificial intelligence.

The panelists joining us represent some of the most thoughtful voices on how AI is being built and adopted around the world. Mr. John Palfrey is the president of the John D. and Catherine T. MacArthur Foundation, one of the world’s most influential philanthropies, where he has championed the idea that technology must serve the public interest. His perspective on how AI can be deployed equitably, not just efficiently, is essential to the conversation. Ms. Terah Lyons is the managing director and global head of AI and data policy at JPMorgan Chase. Leading AI policy at one of the world’s largest financial institutions, she is navigating the frontier where AI meets regulation, risk and responsible deployment, ensuring that AI in finance is not just powerful, but trustworthy.

Her Excellency Paula Ingabire is the minister of ICT and innovation for the government of Rwanda. Under her leadership, Rwanda has emerged as one of Africa’s most ambitious digital economies, proving that visionary governance can leapfrog traditional development pathways. And we also have Mr. Stephen Bird as a panelist, who is the global head of thematic research at Morgan Stanley, bringing the investor’s lens to the question of which AI bets are real and which are hype. And this discussion will be moderated by Mr. Rudra Chaudhry, Vice President of Observer Research Foundation. Ladies and gentlemen, please join me in welcoming Mr. John Palfrey, Ms. Terah Lyons, Her Excellency Paula Ingabire, and also Mr. Rudra Chaudhry. Please kindly come to the stage for this very interesting conversation, a panel on adoption and acceleration of AI.

Mr. Bird will be joining us very soon. Thank you.

Rudra Chaudhry

All right. Hi, everyone. There’s a good bit of distance between me and the panelists, which might be a good thing. We’ll see. We’ve got about 25 minutes, so I’m going to keep it quite swift. The general panel is about policy on the one side, adoption on the other. And I wonder if that’s actually the case. Yesterday in the inaugural, the prime minister made very clear that adoption is a huge opportunity for India and other parts of the global south. But we have to do it responsibly. President Macron made a very similar pitch in his inaugural speech. And I want to start with that framing. And I want to come to you, Minister. Rwanda is a fascinating country in general.

But you’re particularly fascinating on the African continent because you were way ahead of the AI curve in a sense. You invested in a startup ecosystem. You were looking at scale before many of us thought of use case scales. Give us a sense of how Rwanda manages these minefields between governance and policy on the one side and adoption at population scale on the other.

Paula Ingabire

Thank you very much, Rudy, and great to see you all. I think for us, the decision has always been clear around how we leverage technology as a country to drive socioeconomic development. And so AI, like many other technologies that we’ve experimented with as a country, we took the same posture. And so the idea was figuring out how we leverage this particular technology to address societal challenges. And there were certain trade-offs that we had to make. When it comes to governance, it was a posture around, rather than try to focus more on regulating, we’d rather figure out where do we see AI creating the biggest benefits and gains for society. And then we’re able to build regulations according to the use cases that we’re implementing.

And so the regulatory posture that we take then is more adaptive. And it’s evidence-based, because we’re already building use cases today. And so we’re able to determine what kind of regulations are needed, and they’re very specific to the problems that we are solving, as opposed to trying to create a very abstract regulatory framework, which may not necessarily address whatever risks and concerns we foresee. The second one has always been on partnerships, because that’s been key. The level of digital development that we’ve achieved as a country is thanks to the various partners that we’ve been able to attract into Rwanda. But partnerships, we also look at very closely to determine how do we make sure that these partnerships are helping us to build capacity.

So, for example, we’re not going to acquire a foreign solution, invite them to train on our data and just leave us with an application. We want them to be able to train our people, co-develop this with our people so that at least we have the skill set and the mastery of what we’re trying to deploy, which will then create that closed loop around the regulatory environment that we put in place. And last, again, I think it’s a conversation that we’ve had throughout this week around sovereignty, thinking about data sovereignty. By design, we’re building our national data hub, and we’re really making sure we understand, you know, what are the guardrails that we put in place.

We don’t want to wait for a crisis to start, you know, worrying about who is using our data and what they are accessing it for. And so we started by already putting in place the data protection and privacy law that governs how you collect, use, and process data. And that has been the foundation through which we can then start to ensure that everything we do from a data sovereignty perspective, we’re doing by design.

Rudra Chaudhry

So I’m going to come back to the question on the benefits of AI for all of you, and for you, Minister, in a minute. You know, this entire summit process started with Bletchley, where I think the general philosophy was: can we come to some kind of a global compact when it comes to risk and risk aversion, when it comes to early warning systems? The institutional outcome was the AI safety institutes that were built out. Can I ask a challenging question? From your perspective, is a global compact on something like AI actually possible? Or are there norms that we should generally be thinking about and fitting into our national jurisdictions?

Paula Ingabire

So I believe a global compact is possible. However, it has to reflect the different contexts, cultural, linguistic, everything. And so to a certain extent, what you’re looking at is what are some of those shared standards that we all subscribe to as countries, which are non-negotiables for everyone that is building and deploying AI products and solutions. And then obviously, you then get to contextualize it to whatever problems you’re solving for. And so, again, it’s going to come back to what are nations deploying AI to solve for? And how do we make sure that these standards are reflective of what we’re looking to adopt through the global compact?

Rudra Chaudhry

Terah, if I could come to you. You’re leading AI at J.P. Morgan, and you were in the Obama administration, in the Office of Science and Technology Policy, way before the AI wave hit us, although people have been working on AI for three decades now. Just give us a sense, before I come to the immediate: take us back to the second term of the Obama administration. How were you thinking about AI?

Terah Lyons

Well, I would say that era was the first in which global governments started considering AI policy questions at all. And honestly, a lot of the same questions were being asked then as are being asked now. The question of global governance that the minister just spoke to, I think, was top of mind then as it is today. Questions of standards generation and interoperability were certainly part of the conversation. Issues of fairness, transparency, bias mitigation, sort of localization, and other questions were all very much germane. So, you know, in many respects, the field has completely transformed, especially from a commercial perspective, given the level of investment that we’re seeing globally in the last five years, especially. But in many other respects, the foundational questions remain the same that policymakers were considering over 10 years ago.

And those questions, I think, are applicable in a lot of different directions. You know, I think one of the big differences in the current moment is that I really feel like we’ve moved from an era where these conversations have been more theoretical to an era in which they are much more applied and made much more real by the questions being asked by organizations like ours, for example, as AI deploying entities. Where the, you know, the issues of applied AI organizations are really where the rubber meets the road when it comes to these governance issues that we’re talking about from the stage and that policymakers have been considering for the last decade.

Rudra Chaudhry

So I think if I talk to most people who’ve been to the first three summits, and I talk to them about this summit, there’s a lot of energy, there’s a lot of discussion on use cases, diffusion, getting this out to humanity, getting it out to people. And now we have to work downstream and upstream and figure out how best to do the diffusion piece. Let me ask you a question: you’ve been here for three, four days for the summit. What’s really struck you in terms of the diffusion argument, the adoption argument? And then, if you put your policy and regulatory lens to it, what are you thinking right now?

Terah Lyons

Well, I actually don’t think the hardest questions in this field are technical right now. Maybe this is a controversial answer, but I’ll try it on for size here. I think they are questions of human issues and institutional issues. And I hear that no matter where I am, talking to clients and other large enterprises, speaking to governments globally, whether in New York, California, Brussels, or Delhi here this week: the hard problem really isn’t frontier advancement right now. It’s actually making this technology useful to real organizations and making it helpful to real people in their everyday lives. And core to that set of issues are the governance questions that have been so top of mind here at the summit, I think.

And questions of how we scale responsibly, how we engender trust in the technology, because in order for AI to be useful, it has to be applied. And in order for it to be applied and widely adopted, it needs to be trusted. And so these are, I think, are cornerstones of what we need to be thinking about when we’re actually thinking about the frontier of AI in many ways.

Rudra Chaudhry

John, you run one of the most important organizations in the world, and one of the largest philanthropic organizations in the world. If there are students or professors in the audience, you should corner John afterwards for all sorts of things. But you’ve also got a very strong legal background. So the same question I put to Terah: when you think of diffusion, when you think of impact use cases, and you think of what Paula said, which is that we have to be adaptive about the regulatory architecture, where are you at?

John Palfrey

Rudy, thank you. And first, let me please, on behalf of MacArthur Foundation, congratulate our hosts in India. What a wonderful global stage to be on, to be having this important conversation. The point of view that I come from as a law professor and as leader of a philanthropy, the MacArthur Foundation, is, of course, that we need to make the technology, the AI, work for humans and to put humans at the center. And I’ve been delighted on this main stage and throughout the summit to hear that as the focus here in India and, of course, around the world. And I think the way to do that is not to treat the AI as something magical and separate, but rather connected to all of the things that we’re trying to do.

So whether it’s lifting people out of poverty, or improving health care, or a bank providing capital as needed, we need a stable regulatory regime that makes that possible and puts humans at the center, rather than just seeking to advance the technology at all costs and then treating it as something magical, rather than forms of mathematics, forms of science that we have been able through human history to regulate so that it serves humans, not its own sake.

Rudra Chaudhry

From your perspective in terms of philanthropy, but also from the perspective perhaps of peers that you talk to, is the current moment, with the verve for adoption, for getting this out to people, changing the way you’re thinking about grantees, partners, and the philosophical way in which you’re thinking about releasing money?

John Palfrey

Yes and no. I think there are some constants in philanthropy that are very important, and maybe more important than ever in this moment. You think about the amount of capital that is flowing towards AI and its development, mostly of course from the private sector, sometimes from sovereign wealth funds and so forth. What we need to ensure is that civil society has a voice. And of course, again, I credit our hosts for including civil society in this conversation and continuing to do that from Bletchley to today and onward. And the civil society world doesn’t come for free. Somebody has to pay for it, right?

And philanthropy has historically been the source of funding for that. And I’m very impressed by the Indian philanthropic environment that is developing. We’re excited to partner with the Center for Exponential Change and others who are developing both homegrown philanthropy as well as ideas that are coming from India to the rest of the world. But if we don’t invest in civil society, there will be many, many fewer voices able to bring the kind of sensibility that we’re talking about to the world. It doesn’t come without actually thinking about it carefully. So no, we are thinking about long-term capital that is for academia, that is for organizations. And I think about, of course, the Observer Research Foundation, which you’re involved in, and the Partnership on AI, for which Terah was the founding ED.

These organizations, along with academia, are going to be able to bring the kind of sensibility that we’re talking about to the world, and they have to be supported in a stable, long-term way by philanthropy.

We’ve been able with colleagues to raise half a billion dollars for Humanity AI, an effort in the US, and close to that amount for Current AI, led by Martin Tisné, and the AI Collaborative, for global efforts. So we’re over a billion dollars in commitments between these two efforts, but we have to be

Rudra Chaudhry

Minister, let me ask you a question. You talked about the benefits of AI in Rwanda. Can you open that box up for us a little bit? You know, one of the arguments, and there are a lot of arguments, is about how is this stuff going to pay for itself? Use case and diffusion is all great, but is there an OPEX model or a revenue model for beneficial deployment? It needs to be sustainable over a period of time. And there’s another argument which says, when people actually start using things that are useful, and they see value in it, the rest will follow. What are your citizens in Rwanda feeling in terms of value?

Paula Ingabire

So I’ll differ a little bit, because I think value cannot just be seen in monetary terms: how are we going to have the return on investment, how do we sustain this financially? It’s a good metric to use, for sure. But when I look at the use cases that we’ve already identified, one, it speaks to our government’s decision to make sure that we are delivering better services to our citizens. So whether it’s healthcare, whether it’s making sure that we’re giving quality education to our students in Rwanda, whether it’s making sure that the majority of our population, which is made up of farmers, have access to the right data and extension services that then ensure growth and productivity, which will translate essentially into them being able to have more income, getting out of poverty, and building wealth for their families.

But a starting point for us has always been: what problem are we trying to solve? And is AI the best way to solve for this? Or is it a combination of AI and many other technologies that can solve for that? We’re a country that has been on a journey of digital transformation for more than 20 years, and so we’ve already started to see the benefit of that. So when I look at the education use cases, they range from being able to facilitate teachers with assessment tools that can help with faster and better assessment.

We’re looking at AI solutions that support better lesson planning. And so if you’re able to have better lesson planning, you’re able to deliver quality education and make sure that it’s similar across the country; those are benefits that one can easily quantify. For the health sector, we’re looking at our frontline health workers, the community health workers delivering primary health care, giving them decision support tools that enable them to have better diagnosis, and at the same time to reduce the burden on the health care system. We’re also looking at AI solutions that help reduce the backlog of referrals in the health care system. Essentially, that’s also going to translate into less wastage, into better care, but also even bringing down the cost of care per person, if you look at it that way.

So for our people, they’re very optimistic. Obviously, like in any other country, everyone has to wonder: okay, there’s lots of data that you’re going to be using, and some of it, a lot of it, is going to be personal data. What guardrails are we putting in place? We have the data protection and privacy law that I talked about earlier. But the most important thing, even for our people, is how we are building capacity in-country, so that a lot of these things are not solutions we are acquiring from elsewhere. We also have more than 70% of our population in the youth bracket. It means these are already people that are very excited about technology, and if you train them the right way, they’ll also be part of building these solutions.

And so I think there’s a lot of optimism on what it can do. It doesn’t mean we’re shying away from what the risks are. That’s why we’re doing everything by design, use case by use case, trying to understand, for each use case that we are deploying, what risks could be unique to that particular application and how we are addressing them.

Rudra Chaudhry

No, I think that's fantastic. I think the way you're thinking about disaggregated risk, rather than just one big banner sticker on top, is perhaps the way we all need to go: asking how is this use case risky, but also how is it actually useful and adding value in different ways. So that's fantastic. Keeping an eye on the clock, Tara, I just want to talk a little bit about deployment and scale. We all love diffusion; we want this stuff out to everybody. How do we get it right when it comes to deployment and scale? Because none of this is going to be easy. It's going to require some kind of a sustainable financial model, a lot of time, and a lot of work across the board and across borders. So, as someone who works on scale and deployment, give us a viewpoint.

Terah Lyons

Sure. And maybe just a few words to give a sense of scale in our context at JPMorgan Chase. We operate in over 100 countries globally. We spend close to $20 billion a year on technology. And we are investing really, really deeply in AI. So, you know, I think to answer your question, one of the paradigms from which we come to this issue is certainly the unique risk management capabilities of finance, and of regulated banks specifically. We've been using AI technologies at the use case level for over 10 years, starting first with more traditional analytic techniques, moving into the era of machine learning models, now introducing large language models, and looking in the direction of agentic capabilities and beyond. And I think that underscores one of the points John raised earlier, which is important here.

You know, the sort of risk management posture, and considering what effective governance and controls look like in order to scale in the way you're describing, is something we have built muscles to do before. We know how to do this pretty well. And one of the superpowers, I think, that we have is a sector-specific lens on regulation and oversight. I think that also speaks to some of the great points the minister just made with respect to really evaluating risk at the use case level: make the conversation about risk management grounded and practical in ways that address the real ways in which AI is getting deployed at the level of individual use cases.

And then making rules of the road that are applicable to that specific context. I think that's really crucial. The other piece of the equation, and this speaks to the point I made at the top about our global operations, is that we really need regulatory harmonization, to the extent possible, to allow for consistency of rules across borders. And I think there's been a lot of really rich conversation this week at the summit about sovereign AI as a part of the global governance conversation. That has its own unique and important goals, and I think it needs to be held in the same space as a realization that we also need to be considering what a global baseline looks like, and what clarity enables for global operators so that they can really get responsibility at scale right.

Rudra Chaudhry

I’m going to ask you one question before I come back. What would you like to see going ahead? From this summit, the baton has been handed to Switzerland, and from Switzerland, there’s possibly another likely candidate. But what would you like this summit process to do in an institutional setting, perhaps, to keep these conversations going?

Terah Lyons

Well, I think that John's earlier point about the need for multi-stakeholder diversity is really key. I think that looking across sectors, government, civil society, and industry is deeply important, and making sure all those voices are at the table is critical. A sub-point there, from my perspective, is that I would like to see more deployers sitting in seats like this one. We are one of the largest financial institutions in the world, and we use AI in really, really deep ways, as I mentioned before. But I want to see folks from retail, from energy, from manufacturing, people who really represent the real economy, sitting on stages like this one next year in Switzerland and speaking to how we deliver real value into the hands of customers and citizens every day using these technologies.

Rudra Chaudhry

And John, very quickly to you, I'm going to ask you a cheeky question. The kind of philanthropy I think we require now in AI is for MacArthur to be working with a frontier lab that's working with a local lab that's deploying. Is that in your imagination?

John Palfrey

Sure, Rudra, thank you. And I think it's an exciting idea, going from here to Switzerland and imagining what could come next. And what could come next for philanthropy is absolutely an important piece of the story. If you think about the way technology works, it often begets innovation in other sectors. So what's exciting is that the technology itself can inform the way we practice philanthropy in the ways you suggest, but it can also help us figure out how to regulate better. And it turns out, of course, that regulation is not simply opposed to innovation. In fact, regulation sometimes prompts further innovation, and then this wonderful cycle can continue. So my key point here would be: let's not have a false binary.

Either you regulate or you innovate. Let’s figure out the way that the regulation and the governance drives innovation. And I think that’s an exciting idea, not just for governments, as the minister said, or for banks. It’s true for philanthropy, too, which can improve its work a little bit along the way, too.

Rudra Chaudhry

No, bang on. And Minister, last word to you. We would love to see the summit hosted in Kigali. From your vantage point, and a lot of this is about South-South cooperation, a lot of it has been about global cooperation, what would you like to see between now and Switzerland? What can we all actively do to make this more palpable by the time we get to Zurich or Geneva or Davos or wherever it is?

Paula Ingabire

I think it's great that, since we started with the Bletchley Park convenings, we've moved from safety to governance, and now it's about impact, execution, implementation. It would be great to start quantifying what that impact has looked like, and to create a way for these exchanges to truly happen. And I couldn't agree more: if we have more of the people who are building and deploying these solutions here, we could also have some of the communities that have been affected, positively or negatively, so we can hear their voices. As large-scale adoption of this technology happens across the world, taking that conversation into consideration is going to be very, very important.

And I think the last one for me is to make sure we have more voices coming from the African continent and elsewhere, so that we can balance between where we are seeing the biggest impact. Is it in emerging economies? Is it in the middle economies or the big ones? And what could be the nuances as we continue to deploy massively? To do that, I think we need to take this to the African continent sooner rather than later. And we're happy to host you.

Rudra Chaudhry

There you are. Good offer there. Minister, John, Tara, thank you so much. Thank you for being with us at the Impact Summit. And back to the organizers. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (23)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“John Palfrey is the president of the John D. and Catherine T. MacArthur Foundation.”

The panel description identifies Mr. John Palfrey as the president of the MacArthur Foundation, confirming his role as stated in the report [S19].

Confirmed (high)

“Rwanda has enacted a data‑protection and privacy law to safeguard data sovereignty.”

Rwanda’s implementation of a data-protection and privacy law is documented in the knowledge base, confirming the report’s statement [S23] and [S22].

Additional Context (medium)

“Rwanda’s adaptive regulatory approach is built around concrete use‑cases, partnerships, and co‑creation with stakeholders.”

Additional details describe Rwanda’s emphasis on co-creation with beneficiaries and experts, and its flexible stance on emerging measures, providing nuance to the adaptive strategy mentioned [S108] and [S109].

Confirmed (medium)

“The moderator opened a 25‑minute segment to frame the discussion.”

The opening remarks note a 25-minute timeframe for the panel, confirming the report’s timing detail [S1].

Confirmed (medium)

“The moderator cited recent calls from India’s prime minister and France’s president for responsible AI diffusion in the Global South.”

The knowledge base references discussions on how France and India are building AI-related industrial and innovation bridges, supporting the claim that leaders from those countries have made recent calls on responsible AI [S102].

External Sources (114)
S1
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion — The panelists joining us represent some of the most thoughtful voices on how AI is being built and adopted around the wo…
S2
Building Trusted AI at Scale – Keynote Anne Bouverot — -John Palfrey: Representative from the MacArthur Foundation (mentioned by Anne Bouverot but did not speak in this transc…
S3
FOSTERING FREEDOM ONLINE — – Deibert, Ronald, John Palfrey, Rafal Rohozinski and Jonathan Zittrain (eds). April 2010. Access Controlled: The Shapin…
S4
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion — Her Excellency Paula Ingabire is the minister of ICT and innovation for the government of Rwanda. Under her leadership, Rw…
S5
The reality of science fiction: Behind the scenes of race and technology — ‘Every desireis an endand every endis a desirethenthe end of the worldis a desire of the worldwhat type of end do you de…
S6
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — <strong>Naveen GV:</strong> out a long, lengthy form of information for that to be processed much later by another human…
S7
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S8
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S9
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S10
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — – John Tass-Parker- Terah Lyons – Terah Lyons- Harshil Mathur
S11
The Power of Satellites in Emergency Alerting and Protecting Lives — Alexandre Vallet: Thank you very much Dr. Zavazava. Thank you very much both of you for this introductory remark. I will…
S12
Reinventing Digital Inclusion / DAVOS 2025 — – Paula Ingabire: Minister of Innovation, Technology and Innovation of Rwanda A major theme of the discussion was the l…
S13
AI: Lifting All Boats / DAVOS 2025 — – Paula Ingabire: Minister of Information, Communication Technology and Innovation of Rwanda Paula Ingabire: Maybe Vij…
S14
UNECA Role in the Internet Ecosystem in Africa | IGF 2023 Open Forum #110 — Hon. Paula Ingabire, Minister of Information and Communications Technology (ICT)
S15
Artificial intelligence (AI) – UN Security Council — The global focus on Artificial Intelligence (AI) capacity-building efforts has been a significant topic of discussion am…
S16
Democratizing AI: Open foundations and shared resources for global impact — Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists. Nina, thank you for giving me the floor. In the globa…
S17
Open Forum #33 Building an International AI Cooperation Ecosystem — International Cooperation and Multi-stakeholder Approach Klauweiter argues that since AI governance is a global problem…
S18
AI: The Great Equaliser? — Rwanda has been digitising various functions and services for nearly two decades, and most government services are now a…
S19
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion — “And so the regulatory posture that we take then is more adaptive”[1]. “And then we’re able to build regulations accordi…
S20
Global AI Policy Framework: International Cooperation and Historical Perspectives — I think that’s my understanding. I think now we need to see as a human civilization. Obviously, cultures are very differ…
S21
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — **Local Capacity Building**: The priority of developing local expertise over simply importing advanced equipment. 3. **…
S22
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — Policies play a crucial role in creating a conducive data ecosystem. Rwanda’s implementation of a data protection and pr…
S23
Thinking Big on Digital Inclusion — Data protection and privacy are essential considerations in the digitisation process. Rwanda has implemented data protec…
S24
Keeping AI in check — Societies should not be forgetful of the fact that technology is a product of the human mind and that the most intellige…
S25
Building Public Interest AI Catalytic Funding for Equitable Compute Access — India is proving that you can design AI ecosystems that are both globally competitive and globally competitive. And loca…
S26
Press Conference: Closing the AI Access Gap — An important aspect of the alliance’s work is the creation of relevant international frameworks and public-private partn…
S27
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — The SADC region’s cross-border financial inclusion project demonstrates this principle, focusing on solving real problem…
S28
Agenda item 5 : Day 4 Afternoon session — A central point of discussion was the “Needs-Based Capacity Building Catalogue,” proposed by the Philippines. This propo…
S29
Agenda item 6 — Chair:Thank you, UNIDIR, for your statement and also for all the work that you do. Friends, it’s ten minutes to one, and…
S30
Panel Discussion Data Sovereignty India AI Impact Summit — This comment introduces a powerful paradigm shift from a deficit mindset to an asset-based approach. Instead of focusing…
S31
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S32
UNSC meeting: Artificial intelligence, peace and security — Malta:Thank you, President. And I thank the UK Presidency for holding today’s briefing on this highly topical issue. I a…
S33
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Hiya, how are you doing? Check, check. Is that better? Cool. Again, hello. Welcome. My name is Chri…
S34
Democratizing AI Building Trustworthy Systems for Everyone — And I think that’s critical to ensure that if you want to democratize and ensure that GlobalSoft is integral to that, an…
S35
Local, Everywhere: The blueprint for a Humanitarian AI transformation — Trust:AI developed and governed by humanitarian organisations, rather than opaque commercial platforms, can be aligned w…
S36
How to make AI governance fit for purpose? — This comment elevated the discussion to a more philosophical level, moving beyond technical regulatory approaches to con…
S37
Bridging the AI innovation gap — The speaker stressed that all stakeholders—government, industry, academia, and civil society—have important roles in sha…
S38
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Forming alliances in global digital governance is crucial. Initiatives such as the Coalition for Digital Environmental S…
S39
AI/Gen AI for the Global Goals — Need for multi-stakeholder collaboration including governments, private sector, and civil society
S40
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsib…
S41
WS #98 Towards a global, risk-adaptive AI governance framework — 4. The potential need for sector-specific and use case-specific governance rather than one-size-fits-all approaches. Ti…
S42
Technology Rewiring Global Finance: A Panel Discussion Summary — – Jayee Koffey- Changpeng Zhao ING operates in 35 countries and faces different regulations. Examples include MiCA cryp…
S43
WS #283 AI Agents: Ensuring Responsible Deployment — Government Perspectives and Regulatory Approaches Lazanski points out that regulatory frameworks are emerging different…
S44
Building Population-Scale Digital Public Infrastructure for AI — To address this challenge, the Gates Foundation is investing in “scaling hubs” in Rwanda, Nigeria, Senegal, and soon Ken…
S45
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Introduction and Context Setting ## Military and Dual-Use Applications Virginia Dignam: Thank you very much, Isador…
S46
The Foundation of AI Democratizing Compute Data Infrastructure — “So we are identifying agriculture, education, healthcare, and some more.”[83]. “So inspire them that they can really do…
S47
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion — “The general panel is about policy on the one side, adoption on the other”[52]. “…we have to work downstream and upstr…
S48
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S49
Setting the Rules_ Global AI Standards for Growth and Governance — So consensus around the need to do it, consensus around the fact that it’s hard, but it’s important for consumers and bu…
S50
Why science metters in global AI governance — The discussion maintained a consistently serious, collaborative, and optimistic tone throughout. Speakers emphasized urg…
S51
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S52
AI as critical infrastructure for continuity in public services — So the participation of the community into that, in ensuring that the innovation and the policy level align with the nee…
S53
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Participants emphasized the importance of involving diverse stakeholders in policy development, including marginalized g…
S54
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — The speakers demonstrated remarkable consensus on the need for alternative approaches to AI development that prioritize …
S55
Indias AI Leap Policy to Practice with AIP2 — The speakers demonstrated strong consensus on fundamental prerequisites for AI diffusion: skills development, clear gove…
S56
Closing remarks – Charting the path forward — Mentioned key sectors such as health, education, and agriculture as areas where communities should be empowered to innov…
S57
AI for agriculture Scaling Intelegence for food and climate resiliance — All these have been put in the one platform. You can just make a – presently it is working in English and Hindi, but in …
S58
Scaling Innovation Building a Robust AI Startup Ecosystem — Very high level of consensus with unanimous praise for STPI’s multifaceted support and shared recognition of technology’…
S59
Conversational AI in low income &amp; resource settings | IGF 2023 — Rajendra Pratap Gupta supports using voice-based data through Conversational AI to increase the accuracy and volume of h…
S60
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — And in terms of regulation, Reserve Bank’s approach has been largely tech neutral. It’s tech agnostic in some sense, bec…
S61
Secure Finance Risk-Based AI Policy for the Banking Sector — And these systems are fueled by vast data sets drawn from public and proprietary sources. On this foundation operate lar…
S62
WS #98 Towards a global, risk-adaptive AI governance framework — Sulafah Jabarti: OK, so I guess we all agree that AI has been reshaping the economy and the society all over the world…
S63
Building Sovereign and Responsible AI Beyond Proof of Concepts — Valuable AIextends beyond financial metrics to consider real-world benefits and measurable improvements in people’s live…
S64
Comprehensive Report: Preventing Jobless Growth in the Age of AI — -Sharing Productivity Benefits: Labor representative Liz Shuler raised concerns about ensuring workers receive fair shar…
S65
Swiss AI Initiatives and Policy Implementation Discussion — Risk quantification should be done in monetary terms to enable data-driven investment decisions and compare potential be…
S66
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — AI is well-positioned to improve businesses and banking by automating processes. This is enabled by enhancing the capaci…
S67
Technology Regulation and AI Governance Panel Discussion — All three speakers acknowledge that regulatory reform approaches must be adapted to each country’s specific context, ins…
S68
Wrap up — These key comments fundamentally reframed the discussion from typical technology policy debates to deeper philosophical …
S69
State of Play: AI Governance / DAVOS 2025 — While all speakers advocate for some form of regulation, they differ in their specific approaches. Krishna proposes a ri…
S70
Main Session 2: The governance of artificial intelligence — Mashologu advocates for context-aware regulatory innovation that includes regulatory sandboxes, human interlock mechanis…
S71
Global AI Policy Framework: International Cooperation and Historical Perspectives — High level of consensus on fundamental principles and approaches, with differences mainly in emphasis and specific imple…
S72
State of play of major global AI Governance processes — Its flexibility and adaptability are praised for bridging institutional, cultural, and regional practices. A cooperative…
S73
Agenda item 6 — Chair:Thank you, UNIDIR, for your statement and also for all the work that you do. Friends, it’s ten minutes to one, and…
S74
WS #98 Towards a global, risk-adaptive AI governance framework — Focus on use case and sector-specific governance rather than blanket regulations
S75
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:Yes thank you moderator once again let me take the opportunity to greet everyone whatever you are in …
S76
Agenda item 5 : Day 4 Afternoon session — A central point of discussion was the “Needs-Based Capacity Building Catalogue,” proposed by the Philippines. This propo…
S77
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — From the African perspective, Desire Kachenje highlighted that DPI development is government-driven but ecosystem-enable…
S78
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion — So I believe a global compact is possible. However, it has to reflect the different contexts, cultural, linguistic, ever…
S79
Setting the Rules_ Global AI Standards for Growth and Governance — Esther Tetruashvily responded by describing OpenAI’s efforts to evaluate model performance across various languages and …
S80
Global AI Policy Framework: International Cooperation and Historical Perspectives — Baumann argues for a balanced approach that establishes shared global norms while allowing flexibility for countries to …
S81
UNSC meeting: Artificial intelligence, peace and security — Malta:Thank you, President. And I thank the UK Presidency for holding today’s briefing on this highly topical issue. I a…
S82
Local, Everywhere: The blueprint for a Humanitarian AI transformation — Trust:AI developed and governed by humanitarian organisations, rather than opaque commercial platforms, can be aligned w…
S83
Closing remarks — This is a profound philosophical insight that reframes the entire trust discussion around AI. Rather than focusing on ma…
S84
Shaping AI’s Story Trust Responsibility &amp; Real-World Outcomes — But to me, there’s no question that if you are, and when you are introducing agentic technology, you need to take the re…
S85
Welcome Address — “How to make AI machine -centric and human -centric?”[33]. “Friends, the future of work will be inclusive, trusted, and …
S86
AI in Action: When technology serves humanity — Across these domains (conservation, disaster response, language preservation, small business, and agriculture), technolo…
S87
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — By utilizing a mix of tools and methods, it is possible to effectively address identified issues. Stakeholder cooperatio…
S88
Bridging the AI innovation gap — The speaker stressed that all stakeholders—government, industry, academia, and civil society—have important roles in sha…
S89
AI/Gen AI for the Global Goals — Need for multi-stakeholder collaboration including governments, private sector, and civil society
S90
Workshop 1: AI &amp; non-discrimination in digital spaces: from prevention to redress — Multi-stakeholder collaboration involving equality bodies, civil society, affected communities, and regulators is essent…
S91
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Forming alliances in global digital governance is crucial. Initiatives such as the Coalition for Digital Environmental S…
S92
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsib…
S93
Technology Rewiring Global Finance: A Panel Discussion Summary — – Jayee Koffey- Changpeng Zhao ING operates in 35 countries and faces different regulations. Examples include MiCA cryp…
S94
Lightning Talk #107 Irish Regulator Builds a Safe and Trusted Online Environment — Importance of cross-border regulatory coordination
S95
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S96
AI for food systems — LJ Rich: Thank you so much, Seizo Onoe, for your opening remarks. And now we’ll turn to our fabulous panelists. Ladies a…
S97
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 2 — – Australia (mentioned but did not speak) – Brazil (mentioned but did not speak) – China (mentioned but did not speak)…
S98
Opening of the session — – Albania (mentioned but did not speak in this transcript) – Brazil (mentioned but did not speak in this transcript) -…
S99
The Global Economic Outlook — – Borge: World Economic Forum executive (mentioned but did not speak)
S100
Launch / Award Event #223 Affordable Access for Education and Health Aa4edu — – **Jonathan Moringani** – Basic Internet Foundation, rapporteur (mentioned in introduction but did not speak)
S101
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued)/ part 6 — Chair (Ambassador Gafoor) This comment is insightful because it frames the entire discussion around the delicate nature…
S102
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — The discussion maintained a consistently optimistic and collaborative tone throughout, characterized by mutual respect b…
S103
Closure of the session — Echoing France’s input, they agreed on the need for a consistent institutional dialogue structure to address crucial cyb…
S104
(Day 6) General Debate – General Assembly, 79th session: morning session — Ernest Rwamucyo – Rwanda: At the outset, I would like to congratulate Ambassador Philemon Young on assuming the presid…
S105
Open Forum #26 High-level review of AI governance from Inter-governmental P — Thelma Quaye: Thank you very much. Good evening, everybody. So I’d like to clarify, Smart Africa is not a multinatio…
S106
Multistakeholder Dialogue on National Digital Health Transformation — Sean Blaschke: Thanks, Leah. I’m going to try to apply the same architecture framework to legislation, policy, complia…
S107
WSIS Action Line C7 E-environment — Anita Batamuliza from the Rwanda Utilities Regulatory Authority, who chairs an East African collaboration working group,…
S108
Ad Hoc Consultation: Thursday 1st February, Afternoon session — During a recent conference, the Rwandan representative took the stage to address a topic which, although unspecified, se…
S109
Fixing Healthcare, Digitally — Co-creation is another key aspect highlighted in the analysis. In order to ensure effective implementation and regulatio…
S110
How Trust and Safety Drive Innovation and Sustainable Growth — You always ask me the tough questions. I think, first of all, the harms question, because I think that’s relevant to the…
S111
Summary — Stakeholders follow the following principles when dealing with issues relating to protection against cyber risks. The go…
S112
How AI Drives Innovation and Economic Growth — Artificial intelligence | Financial mechanisms | Social and economic development Kremer explains that while private com…
S113
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/4/OEWG 2025 — Israel: Good morning and thank you, Chair. We will present in brief, for the sake of time, some main points of our nat…
S114
Regional perspectives on digital governance | IGF 2023 Open Forum #138 — Luis Barbosa:Yeah. I’m thinking again about what Nibal was saying. I think there is a path that international organizati…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument · 142 words per minute · 396 words · 167 seconds
Argument 1
AI for all requires every nation’s active participation; global cooperation is the cornerstone of equitable AI development
EXPLANATION
The speaker emphasizes that achieving AI that benefits everyone depends on the involvement of all countries. Global cooperation is presented as essential to ensure AI development is fair and inclusive.
EVIDENCE
In the opening remarks the speaker thanks the audience and highlights the importance of hearing perspectives from countries like Sweden, noting that when discussing “AI for all” and global cooperation, the role of each country becomes “very, very important” [1-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for worldwide cooperation is echoed in discussions about AI governance as a global problem that requires an inclusive UN platform [S17] and in calls for coordinated capacity-building across nations [S15]; cultural and contextual differences that must be reconciled are highlighted in analyses of global AI policy frameworks [S20].
MAJOR DISCUSSION POINT
Opening Remarks on Global Cooperation and “AI for All”
Paula Ingabire
6 arguments · 188 words per minute · 1412 words · 449 seconds
Argument 1
Adaptive, use‑case‑specific regulation is more effective than abstract rules (Paula Ingabire)
EXPLANATION
Paula argues that Rwanda prefers regulations that are shaped by concrete AI use‑cases rather than broad, abstract rules. This adaptive approach allows the government to tailor rules to the specific risks and benefits of each application.
EVIDENCE
She explains that Rwanda focuses on identifying where AI creates the biggest societal benefits and then builds regulations specific to those use cases, describing the regulatory posture as “adaptive” and evidence-based because it is informed by ongoing pilots [40-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel remarks describe Rwanda’s regulatory posture as “more adaptive” and built around concrete use-cases, matching the description in the Building Trusted AI at Scale discussion [S19].
MAJOR DISCUSSION POINT
Governance vs. Adoption and the Feasibility of a Global AI Compact
AGREED WITH
John Palfrey, Terah Lyons
DISAGREED WITH
John Palfrey
Argument 2
A global AI compact is possible but must accommodate cultural, linguistic, and contextual differences (Paula Ingabire)
EXPLANATION
Paula states that a worldwide AI agreement can work, provided it respects the diverse cultural, linguistic and contextual realities of each nation. Shared standards would exist, but they would be adapted to local problem‑solving needs.
EVIDENCE
She says a global compact is possible but must reflect different contexts, and that shared non-negotiable standards should be contextualised to the specific problems each nation tackles with AI [61-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of global AI policy stress the importance of respecting cultural and linguistic diversity when shaping international agreements [S20], and emphasize that inclusive platforms like the UN are needed to accommodate all nations [S17].
MAJOR DISCUSSION POINT
Governance vs. Adoption and the Feasibility of a Global AI Compact
AGREED WITH
Speaker 1, Rudra Chaudhry, John Palfrey
Argument 3
Partnerships should prioritize co‑development and local skill‑building rather than simply importing foreign solutions (Paula Ingabire)
EXPLANATION
Paula stresses that Rwanda seeks partnerships that involve joint development and capacity building, not just the acquisition of ready‑made foreign tools. The goal is to ensure Rwandan staff acquire the expertise to own and maintain AI solutions.
EVIDENCE
She gives the example that Rwanda will not simply acquire a foreign solution and have it trained on local data; instead, partners must train Rwandan people and co-develop the technology so that local capacity is built [48-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies on digital inclusion in developing countries highlight that building local expertise should take priority over importing ready-made technology, reinforcing the co-development approach [S21].
MAJOR DISCUSSION POINT
Partnerships, Capacity Building, and Data Sovereignty
AGREED WITH
John Palfrey, Rudra Chaudhry
Argument 4
Rwanda is establishing a national data hub and robust data‑protection law to ensure data sovereignty by design (Paula Ingabire)
EXPLANATION
Paula describes Rwanda’s proactive steps to safeguard data sovereignty, including the creation of a national data hub and the enactment of a data protection and privacy law. These measures are intended to set guardrails before any crisis emerges.
EVIDENCE
She notes that Rwanda is building a national data hub and has already put in place a data protection and privacy law that governs data collection, use, and processing, forming the foundation for data-sovereignty-by-design [51-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rwanda’s implementation of a data-protection and privacy law is cited as a key step toward data sovereignty and responsible digitisation [S22][S23].
MAJOR DISCUSSION POINT
Partnerships, Capacity Building, and Data Sovereignty
AGREED WITH
John Palfrey, Terah Lyons
Argument 5
Sustainable diffusion requires clear OPEX/revenue models and demonstrable citizen value in sectors like health, education, and agriculture (Paula Ingabire)
EXPLANATION
Paula argues that AI deployments must be financially sustainable and show tangible benefits to citizens in key sectors. She links value creation in health, education and agriculture to improved livelihoods and poverty reduction.
EVIDENCE
She outlines use-cases such as AI-enabled teacher assessment tools, lesson-planning support, and decision-support for community health workers, explaining how these improve service quality, reduce costs and increase farmer productivity, thereby creating measurable benefits [129-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a sustainable financial model to support AI diffusion across sectors is highlighted in the Building Trusted AI at Scale discussion [S19].
MAJOR DISCUSSION POINT
Diffusion, Scale, Trust, and Sustainable Business Models
AGREED WITH
Rudra Chaudhry, Terah Lyons, John Palfrey
DISAGREED WITH
John Palfrey, Rudra Chaudhry
Argument 6
South‑South cooperation and greater African representation are essential; hosting future meetings in Kigali would amplify emerging‑economy perspectives (Paula Ingabire)
EXPLANATION
Paula calls for more African participation and suggests that future summits be hosted in Kigali to showcase South‑South collaboration. She believes this would bring African voices to the forefront of AI impact discussions.
EVIDENCE
She remarks that it would be great to quantify impact and to involve the communities that have (or have not) benefited, stresses the need for more African representation, and offers Rwanda as a host for upcoming gatherings [205-216].
MAJOR DISCUSSION POINT
Future Institutional Cooperation and Summit Continuity
John Palfrey
6 arguments · 240 words per minute · 844 words · 210 seconds
Argument 1
AI must be governed to serve humans; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force (John Palfrey)
EXPLANATION
John stresses that AI should be regulated like any other technology, with humans at the centre, rather than being viewed as a mysterious, untouchable force. A stable regulatory framework is needed to align AI development with human welfare.
EVIDENCE
He states that AI should not be treated as “magical”, but connected to human goals such as lifting people out of poverty, improving health care, and providing capital, and calls for a stable regulatory regime that puts humans first [95-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists call for a stable regulatory regime that puts humans at the centre rather than treating AI as a mysterious force [S19].
MAJOR DISCUSSION POINT
Governance vs. Adoption and the Feasibility of a Global AI Compact
AGREED WITH
Terah Lyons, Paula Ingabire
DISAGREED WITH
Paula Ingabire
Argument 2
Philanthropic collaborations with local labs and civil‑society organisations are crucial for building capacity and ensuring inclusive AI deployment (John Palfrey)
EXPLANATION
John proposes that philanthropy should partner with frontier labs and civil‑society groups to build local capacity and promote inclusive AI. Such collaborations can also inform better regulation and innovation cycles.
EVIDENCE
He responds positively to the idea of working with a frontier lab, noting that technology can inform philanthropy practice and regulation, and that regulation can spur further innovation, rejecting a false binary between the two [188-198].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Philanthropy’s role in partnering with frontier labs and civil-society groups to build local capacity is discussed in reports on catalytic funding and collaborative models in India [S25] and in the summit dialogue on philanthropy-driven innovation [S19].
MAJOR DISCUSSION POINT
Partnerships, Capacity Building, and Data Sovereignty
AGREED WITH
Paula Ingabire, Rudra Chaudhry
Argument 3
Philanthropy must provide long‑term capital to empower civil‑society voices and ensure AI serves the public interest (John Palfrey)
EXPLANATION
John argues that civil society needs sustained funding, which philanthropy historically provides, to keep a human‑centric perspective in AI development. Without such support, fewer voices would be able to influence AI governance.
EVIDENCE
He notes that civil society does not come for free, that philanthropy has historically funded it, and that the Indian philanthropic environment is promising, highlighting partnerships with local organisations [104-109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Long-term, catalytic funding from philanthropic foundations is presented as essential for sustaining civil-society engagement in AI governance [S25] and is reinforced by observations of philanthropy’s historic support for civil society [S19].
MAJOR DISCUSSION POINT
Philanthropy, Funding, and the Role of Civil Society
AGREED WITH
Rudra Chaudhry, Paula Ingabire
DISAGREED WITH
Paula Ingabire, Rudra Chaudhry
Argument 4
Over a billion dollars have been pledged by philanthropic initiatives to support AI for humanity, underscoring the sector’s commitment (John Palfrey)
EXPLANATION
John cites the scale of philanthropic funding dedicated to AI for humanity, indicating a strong commitment from the sector. He mentions specific fundraising achievements that together exceed a billion dollars.
EVIDENCE
He reports that colleagues have raised half a billion dollars for Humanity AI in the US and a similar amount for a global AI effort, totaling over a billion dollars in commitments [120-121].
MAJOR DISCUSSION POINT
Philanthropy, Funding, and the Role of Civil Society
AGREED WITH
Paula Ingabire, Rudra Chaudhry, Terah Lyons
Argument 5
Engaging civil‑society organisations, think‑tanks, and academic partners creates the sensibility needed for responsible AI governance (John Palfrey)
EXPLANATION
John emphasizes that involving civil society, think‑tanks and academia brings the necessary sensibility to AI governance. These actors help ensure that AI development aligns with broader societal values.
EVIDENCE
He credits the summit for including civil society, mentions partnerships with the Center for Exponential Change and other Indian initiatives, and highlights the role of organizations like the Observer Research Foundation and Partnership for AI in shaping responsible AI [105-108][114-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The inclusion of civil-society, think-tanks and academic partners in AI governance discussions is highlighted as a way to bring necessary sensibility and diverse perspectives [S19].
MAJOR DISCUSSION POINT
Philanthropy, Funding, and the Role of Civil Society
Argument 6
Avoid a false binary between regulation and innovation; instead, let governance frameworks stimulate further AI breakthroughs (John Palfrey)
EXPLANATION
John argues that regulation and innovation are not opposing forces; instead, well‑designed governance can drive further AI advances. He calls for moving beyond a simplistic dichotomy.
EVIDENCE
He states “let’s not have a false binary. Either you regulate or you innovate. Let’s figure out the way that the regulation and the governance drives innovation” [195-198].
MAJOR DISCUSSION POINT
Future Institutional Cooperation and Summit Continuity
Rudra Chaudhry
3 arguments · 197 words per minute · 1177 words · 357 seconds
Argument 1
Global norms are needed, but they must be integrated into national jurisdictions rather than imposed top‑down (Rudra Chaudhry)
EXPLANATION
Rudra questions whether a global AI compact can work and suggests that any global norms should be adapted within national legal frameworks rather than being forced from above.
EVIDENCE
He asks a challenging question about the feasibility of a global compact and whether norms should be thought of as fitting into national jurisdictions [59-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Commentary on AI governance stresses that global norms should be adapted within national legal frameworks rather than imposed from above, echoing concerns raised about top-down approaches [S17] and the need to reconcile cultural differences [S20].
MAJOR DISCUSSION POINT
Governance vs. Adoption and the Feasibility of a Global AI Compact
AGREED WITH
Speaker 1, Paula Ingabire, John Palfrey
Argument 2
Moderators stress the need for a realistic, financially viable deployment strategy that balances speed with responsibility (Rudra Chaudhry)
EXPLANATION
Rudra emphasizes that scaling AI requires sustainable financial models and careful pacing, warning that deployment must be both responsible and economically feasible.
EVIDENCE
He remarks that diffusion will need a sustainable financial model, time, and cross-border work, and calls for a viewpoint on scale and deployment [155-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions on AI diffusion emphasize the requirement for sustainable financial models that balance rapid deployment with responsible oversight [S19].
MAJOR DISCUSSION POINT
Diffusion, Scale, Trust, and Sustainable Business Models
AGREED WITH
Paula Ingabire, Terah Lyons, John Palfrey
DISAGREED WITH
Paula Ingabire, John Palfrey
Argument 3
Continuous engagement beyond the summit—through concrete impact metrics and ongoing exchanges—will keep momentum alive (Rudra Chaudhry)
EXPLANATION
Rudra calls for institutionalising follow‑up mechanisms, such as impact measurement and regular exchanges, to ensure the summit’s outcomes are sustained until the next meeting.
EVIDENCE
He asks what the summit process should do in an institutional setting to keep conversations going and mentions the need for concrete impact metrics and ongoing exchanges [175-178].
MAJOR DISCUSSION POINT
Future Institutional Cooperation and Summit Continuity
AGREED WITH
Paula Ingabire, John Palfrey
Terah Lyons
4 arguments · 165 words per minute · 1020 words · 369 seconds
Argument 1
Foundational policy questions (fairness, transparency, bias, standards) raised a decade ago remain central today (Terah Lyons)
EXPLANATION
Terah notes that many of the AI policy concerns first raised ten years ago—fairness, transparency, bias mitigation, standards—are still the core issues being debated today.
EVIDENCE
She recounts that the early Obama-era discussions already covered fairness, transparency, bias, standards, and that these foundational questions remain central after a decade [71-78].
MAJOR DISCUSSION POINT
Governance vs. Adoption and the Feasibility of a Global AI Compact
Argument 2
The hardest challenges are human and institutional, not technical; building trust is prerequisite for wide adoption (Terah Lyons)
EXPLANATION
Terah argues that the most difficult problems in AI are not technical but relate to human and institutional factors, especially trust. Trust is essential for scaling AI responsibly.
EVIDENCE
She states that the hardest questions are human and institutional, that AI must be useful to real organisations, and that trust and responsible scaling are cornerstones for adoption [82-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI’s societal impact underline that technology is a product of human design and that trust and human responsibility are central to responsible deployment [S24].
MAJOR DISCUSSION POINT
Diffusion, Scale, Trust, and Sustainable Business Models
AGREED WITH
John Palfrey, Paula Ingabire
Argument 3
Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and regulatory harmonisation across borders (Terah Lyons)
EXPLANATION
Terah explains that JPMorgan Chase’s experience in risk management and regulated finance equips it to help scale AI responsibly, and she stresses the need for regulatory harmonisation for global operators.
EVIDENCE
She describes JPMorgan’s decade of use-case-level AI experience, its risk-management posture, its sector-specific regulatory insight, and the importance of regulatory harmonisation across borders [156-174].
MAJOR DISCUSSION POINT
Diffusion, Scale, Trust, and Sustainable Business Models
AGREED WITH
Paula Ingabire, Rudra Chaudhry, John Palfrey
Argument 4
The next summit should institutionalise multi‑stakeholder dialogue, bringing more deployers from industry, energy, manufacturing, etc., to the table (Terah Lyons)
EXPLANATION
Terah calls for the next summit to broaden participation beyond finance, inviting representatives from retail, energy, manufacturing and other real‑economy sectors to share deployment experiences.
EVIDENCE
She says she would like to see more deployers from sectors like retail, energy, and manufacturing sitting on future panels, noting JPMorgan’s deep AI use and the need for diverse industry voices [179-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for broader, multi-stakeholder participation in AI governance, including industry sectors beyond finance, are reflected in discussions about inclusive UN platforms and global cooperation frameworks [S17].
MAJOR DISCUSSION POINT
Future Institutional Cooperation and Summit Continuity
Agreements
Agreement Points
All speakers emphasized the necessity of a regulatory framework that is adaptive, use‑case‑specific and grounded in human‑centred values rather than treating AI as a magical, ungovernable force.
Speakers: Paula Ingabire, John Palfrey, Terah Lyons
Adaptive, use‑case‑specific regulation is more effective than abstract rules (Paula Ingabire)
AI must be governed to serve humans; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force (John Palfrey)
Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and regulatory harmonisation across borders (Terah Lyons)
Paula described Rwanda’s adaptive, evidence-based regulatory posture built around concrete AI pilots [40-44]; John called for a stable regulatory regime that keeps humans at the centre and rejects the notion of AI as magical [95-99]; Terah highlighted JPMorgan’s sector-specific risk-management experience and the need for regulatory harmonisation to scale responsibly [161-169].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the risk-adaptive and context-aware regulatory approaches advocated in recent global AI governance reports, such as the risk-adaptive framework discussed at the AI Governance Forum [S62] and the emphasis on context-specific sandboxes in OECD/UN discussions [S70][S67][S72].
Broad consensus that partnerships must prioritize co‑development, local capacity building and engagement of civil‑society to ensure inclusive and sustainable AI deployment.
Speakers: Paula Ingabire, John Palfrey, Rudra Chaudhry
Partnerships should prioritize co‑development and local skill‑building rather than simply importing foreign solutions (Paula Ingabire)
Philanthropic collaborations with local labs and civil‑society organisations are crucial for building capacity and ensuring inclusive AI deployment (John Palfrey)
Continuous engagement beyond the summit—through concrete impact metrics and ongoing exchanges—will keep momentum alive (Rudra Chaudhry)
Paula stressed that partners must train Rwandan staff and co-develop solutions instead of just delivering ready-made tools [48-49]; John welcomed the idea of philanthropy working with frontier labs and civil-society to build capacity and inform regulation [188-198]; Rudra called for institutional follow-up and impact measurement to sustain collaboration [175-178].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on co-development and local capacity aligns with the Gates Foundation’s “scaling hubs” model for digital public infrastructure in Africa, which foregrounds partnership with governments and civil society [S44], and with broader calls for inclusive policy-making in digital public infrastructure initiatives [S53][S46].
All participants recognized the importance of global cooperation and a worldwide AI compact that respects cultural and contextual diversity while establishing shared non‑negotiable standards.
Speakers: Speaker 1, Paula Ingabire, Rudra Chaudhry, John Palfrey
AI for all requires every nation’s active participation; global cooperation is the cornerstone of equitable AI development (Speaker 1)
A global AI compact is possible but must accommodate cultural, linguistic, and contextual differences (Paula Ingabire)
Global norms are needed, but they must be integrated into national jurisdictions rather than imposed top‑down (Rudra Chaudhry)
AI must be governed to serve humans; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force (John Palfrey)
Speaker 1 highlighted the need for every country’s participation in AI for all [2]; Paula affirmed that a global compact can work if it reflects diverse contexts [61-65]; Rudra questioned whether global norms should fit national jurisdictions [59-60]; John reinforced the need for governance that serves humanity, aligning with a globally shared but locally adapted approach [95-99].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for a worldwide AI compact reflects the growing consensus on international standards seen in the “Setting the Rules” report and UN-led multilateral cooperation, which stress shared non-negotiable standards while respecting cultural diversity [S49][S50][S71][S72].
There was unanimous agreement that sustainable financial models and clear value‑demonstration are essential for scaling AI diffusion across sectors such as health, education and agriculture.
Speakers: Paula Ingabire, Rudra Chaudhry, Terah Lyons, John Palfrey
Sustainable diffusion requires clear OPEX/revenue models and demonstrable citizen value in sectors like health, education, and agriculture (Paula Ingabire)
Moderators stress the need for a realistic, financially viable deployment strategy that balances speed with responsibility (Rudra Chaudhry)
Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and regulatory harmonisation across borders (Terah Lyons)
Over a billion dollars have been pledged by philanthropic initiatives to support AI for humanity, underscoring the sector’s commitment (John Palfrey)
Paula outlined use-cases delivering measurable benefits and the need for OPEX models [129-146]; Rudra warned that diffusion must be backed by sustainable financial models [122-127]; Terah cited JPMorgan’s $20 billion annual tech spend and risk-management experience as a basis for scaling responsibly [158-160][161-169]; John reported more than a billion dollars of philanthropic commitments to AI for humanity [120-121].
POLICY CONTEXT (KNOWLEDGE BASE)
Sustainable financial models and value demonstration echo the financing strategies outlined in the Gates scaling hubs [S44], the need for sustainable diffusion models discussed in the “Building Trusted AI at Scale” panel [S47], and sector-specific pilots in health, education and agriculture documented in WHO roundtables and India’s AI Leap policy [S51][S55][S56][S57][S58].
All speakers agreed that building trust and ensuring AI serves human needs are prerequisites for widespread adoption.
Speakers: John Palfrey, Terah Lyons, Paula Ingabire
AI must be governed to serve humans; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force (John Palfrey)
The hardest challenges are human and institutional, not technical; building trust is prerequisite for wide adoption (Terah Lyons)
Rwanda is establishing a national data hub and robust data‑protection law to ensure data sovereignty by design (Paula Ingabire)
John argued that AI should be regulated to keep humans at the centre and avoid mystifying the technology [95-99]; Terah emphasized that trust and human-institutional issues are the biggest hurdles and essential for scaling [82-88]; Paula described Rwanda’s data-protection law and national data hub as foundations for trust and sovereignty [51-54].
POLICY CONTEXT (KNOWLEDGE BASE)
Building trust and human-centred AI is a core principle in the “From principles to practice” consensus on safety-by-design and transparency [S48], WHO’s push for “glass-box” AI in health [S51], and community-centric approaches in public-service continuity [S52][S59].
Consensus on the need for ongoing monitoring, impact measurement and institutional mechanisms to keep the momentum of the summit alive.
Speakers: Rudra Chaudhry, Paula Ingabire, John Palfrey
Continuous engagement beyond the summit—through concrete impact metrics and ongoing exchanges—will keep momentum alive (Rudra Chaudhry)
It would be great that we start to quantify what that impact has looked like and also to create a way where these exchanges are truly happening (Paula Ingabire)
Philanthropy must provide long‑term capital to empower civil‑society voices and ensure AI serves the public interest (John Palfrey)
Rudra called for institutional follow-up and impact metrics to sustain dialogue [175-178]; Paula echoed the need to quantify impact and maintain exchanges [205-207]; John highlighted the role of long-term philanthropic capital in supporting civil-society and sustained effort [104-109].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for monitoring and institutional oversight is highlighted in multi-stakeholder governance recommendations that call for impact-measurement frameworks and long-term oversight, as seen in AI governance roadmaps and the Digital Public Infrastructure policy-harmonisation work [S48][S53][S71].
Similar Viewpoints
Both stress that effective AI deployment requires partnerships that build local capacity and involve civil‑society, rather than merely importing external solutions [48-49][188-198].
Speakers: Paula Ingabire, John Palfrey
Partnerships should prioritize co‑development and local skill‑building rather than simply importing foreign solutions (Paula Ingabire)
Philanthropic collaborations with local labs and civil‑society organisations are crucial for building capacity and ensuring inclusive AI deployment (John Palfrey)
Both link financial sustainability with sector‑specific risk management, arguing that responsible scaling depends on clear economic models and robust risk controls [161-169][129-146].
Speakers: Terah Lyons, Paula Ingabire
Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and regulatory harmonisation across borders (Terah Lyons)
Sustainable diffusion requires clear OPEX/revenue models and demonstrable citizen value in sectors like health, education, and agriculture (Paula Ingabire)
All three acknowledge the need for a global framework on AI that respects national contexts and is built through cooperative, multilateral effort [2][61-65][59-60].
Speakers: Speaker 1, Paula Ingabire, Rudra Chaudhry
AI for all requires every nation’s active participation; global cooperation is the cornerstone of equitable AI development (Speaker 1)
A global AI compact is possible but must accommodate cultural, linguistic, and contextual differences (Paula Ingabire)
Global norms are needed, but they must be integrated into national jurisdictions rather than imposed top‑down (Rudra Chaudhry)
Unexpected Consensus
Finance sector and government aligning on sector‑specific regulatory risk‑management as a cornerstone for AI scaling.
Speakers: Terah Lyons, Paula Ingabire
Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and regulatory harmonisation across borders (Terah Lyons)
Adaptive, use‑case‑specific regulation is more effective than abstract rules (Paula Ingabire)
It is notable that a senior banking executive and a government minister independently converged on the idea that regulation should be tailored to concrete use-cases and managed through sector-specific risk frameworks, highlighting a rare cross-sectoral alignment [161-169][40-44].
POLICY CONTEXT (KNOWLEDGE BASE)
Alignment of finance and government on risk-management mirrors the tech-neutral, consumer-protection-focused regulatory stance of the Reserve Bank in the Global South and the risk-based AI policy frameworks for banking outlined by the Financial Stability Board [S60][S61].
Philanthropy and government both emphasizing the need for measurable impact and long‑term funding rather than one‑off projects.
Speakers: John Palfrey, Paula Ingabire
Over a billion dollars have been pledged by philanthropic initiatives to support AI for humanity, underscoring the sector’s commitment (John Palfrey)
It would be great that we start to quantify what that impact has looked like and also to create a way where these exchanges are truly happening (Paula Ingabire)
While philanthropy traditionally focuses on grant-making, John’s emphasis on large-scale, long-term capital aligns with Paula’s call for impact quantification, revealing an unexpected shared focus on measurable, sustained outcomes [120-121][205-207].
POLICY CONTEXT (KNOWLEDGE BASE)
The focus on measurable impact and long-term funding resonates with the Gates Foundation’s multi-year investment model for AI hubs [S44], the call for sustainable financing in diffusion discussions [S47], and the emphasis on quantifiable societal benefits in responsible AI initiatives [S63].
Overall Assessment

The panel displayed a strong convergence around four core themes: (1) the need for adaptive, use‑case‑driven regulation anchored in human‑centred values; (2) the importance of partnership models that build local capacity and involve civil‑society; (3) the requirement for global cooperation that respects national contexts; and (4) the necessity of sustainable financial mechanisms and impact measurement to drive responsible diffusion.

High consensus – most speakers reiterated similar points from different angles, indicating a shared understanding that responsible AI deployment hinges on coordinated governance, capacity building, inclusive global frameworks, and financially sustainable models. This broad agreement suggests that future policy initiatives are likely to prioritize adaptive regulation, multi‑stakeholder partnerships, and measurable impact tracking.

Differences
Different Viewpoints
Regulatory approach: adaptive, use‑case‑specific regulation vs. a stable, overarching regulatory regime
Speakers: Paula Ingabire, John Palfrey
Adaptive, use‑case‑specific regulation is more effective than abstract rules (Paula Ingabire)
AI must be governed to serve humans; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force (John Palfrey)
Paula argues that Rwanda prefers an adaptive, evidence-based regulatory posture that is built around concrete AI use cases, whereas John stresses that AI should be governed by a stable, predictable regulatory framework that treats the technology like any other and avoids the myth of it being “magical” [40-44][95-99].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between adaptive, use-case-specific regulation and a stable overarching regime is reflected in divergent regulatory philosophies presented at recent AI governance panels, including risk-based versus light-touch approaches and the advocacy for regulatory sandboxes [S67][S69][S70][S62].
Funding and diffusion models for AI deployment
Speakers: Paula Ingabire, John Palfrey, Rudra Chaudhry
Sustainable diffusion requires clear OPEX/revenue models and demonstrable citizen value in sectors like health, education, and agriculture (Paula Ingabire)
Philanthropy must provide long‑term capital to empower civil‑society voices and ensure AI serves the public interest (John Palfrey)
Moderators stress the need for a realistic, financially viable deployment strategy that balances speed with responsibility (Rudra Chaudhry)
Paula emphasizes that AI’s value should not be reduced to monetary ROI and points to societal benefits in health, education and agriculture, while John calls for long-term philanthropic capital to fund civil-society and sustain AI for humanity, and Rudra insists on clear OPEX/revenue models and sustainable financing for large-scale diffusion [129-146][104-109][120-121][155-158].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over funding and diffusion models echo the various financing experiments described in the Gates scaling hubs, the sustainable financial model discourse in trusted AI panels, and startup-ecosystem support mechanisms aimed at scaling AI across sectors [S44][S47][S58][S63].
Unexpected Differences
Monetary vs. non‑monetary valuation of AI benefits
Speakers: Paula Ingabire, Rudra Chaudhry, John Palfrey
Sustainable diffusion requires clear OPEX/revenue models and demonstrable citizen value in sectors like health, education, and agriculture (Paula Ingabire)
The moderator stresses the need for a realistic, financially viable deployment strategy that balances speed with responsibility (Rudra Chaudhry)
Philanthropy must provide long‑term capital to empower civil‑society voices and ensure AI serves the public interest (John Palfrey)
Paula explicitly states that AI value cannot be judged solely in monetary terms and stresses societal impact, whereas both Rudra and John foreground financial sustainability and the need for concrete revenue or funding models. The tension between a primarily societal-value narrative and a financially driven sustainability narrative was not anticipated given the common focus on development outcomes [129-146][155-158][104-109].
POLICY CONTEXT (KNOWLEDGE BASE)
The monetary versus non-monetary valuation debate is illustrated by the Swiss AI initiative’s call for monetary risk quantification [S65] and contrasting perspectives that stress broader societal impact beyond financial metrics, as discussed in responsible AI value frameworks [S63][S64].
Overall Assessment

The discussion revealed three main axes of disagreement: (1) the design of regulatory frameworks – whether they should be adaptive and use‑case‑specific or stable and uniform; (2) the financing of AI diffusion – societal‑value‑driven models versus explicit OPEX/revenue or philanthropic funding; and (3) the measurement of AI’s value – non‑monetary societal benefits versus monetary sustainability metrics. While participants share a common vision of responsible, inclusive AI, they diverge on the mechanisms to achieve it.

Moderate to high disagreement. The divergent regulatory philosophies and funding expectations could impede coordinated action unless a hybrid model is negotiated that blends adaptive oversight with baseline stability and aligns philanthropic, governmental, and private financing while respecting both societal impact and financial viability. These tensions have significant implications for the implementation of AI policies, cross‑border cooperation, and the ability to sustain large‑scale AI deployments across diverse economies.

Partial Agreements
All three agree that some form of global AI governance framework is necessary, but Paula stresses contextual flexibility, Rudra warns against top‑down imposition, and Terah notes that the same core policy questions persist over time, indicating differing views on how the compact should be shaped and operationalised [61-65][59-60][71-78].
Speakers: Paula Ingabire, Rudra Chaudhry, Terah Lyons
A global AI compact is possible but must accommodate cultural, linguistic, and contextual differences (Paula Ingabire)
Global norms are needed, but they must be integrated into national jurisdictions rather than imposed top‑down (Rudra Chaudhry)
Foundational policy questions (fairness, transparency, bias, standards) raised a decade ago remain central today (Terah Lyons)
All agree that broader, multi‑stakeholder participation is essential for responsible AI, but John focuses on civil‑society and philanthropy, Terah on industry deployers across sectors, and Paula on African and South‑South voices, showing convergence on the goal but divergence on the composition of the stakeholder pool [105-108][179-183][205-216].
Speakers: John Palfrey, Terah Lyons, Paula Ingabire
Philanthropic collaborations with local labs and civil‑society organisations are crucial for building capacity and ensuring inclusive AI deployment (John Palfrey)
The next summit should institutionalise multi‑stakeholder dialogue, bringing more deployers from industry, energy, manufacturing, etc., to the table (Terah Lyons)
South‑South cooperation and greater African representation are essential; hosting future meetings in Kigali would amplify emerging‑economy perspectives (Paula Ingabire)
Takeaways
Key takeaways
Adaptive, use‑case‑specific regulation is preferred over abstract, one‑size‑fits‑all rules.
A global AI compact is feasible but must accommodate cultural, linguistic and contextual differences and be implemented through national jurisdictions.
AI governance should keep humans at the centre; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force.
Foundational policy concerns (fairness, transparency, bias, standards) raised a decade ago remain central today.
Partnerships must prioritize co‑development and local capacity building rather than merely importing foreign solutions.
Rwanda is building a national data hub and has enacted data‑protection and privacy legislation to ensure data sovereignty by design.
The hardest challenges are human and institutional – building trust, establishing sustainable business models, and aligning risk‑management practices.
Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and support regulatory harmonisation across borders.
Philanthropy plays a critical role in providing long‑term capital for civil‑society participation and for AI‑for‑humanity initiatives; over $1 billion has been pledged by philanthropic efforts.
Future summit processes should institutionalise multi‑stakeholder dialogue, include more deployers from diverse industries, and increase South‑South cooperation and African representation.
Regulation and innovation should not be seen as a false binary; governance frameworks can stimulate further AI breakthroughs.
Resolutions and action items
Rwanda will continue to develop its national data hub and enforce its data‑protection and privacy law for AI deployments.
The summit organisers are invited to consider hosting a future meeting in Kigali to amplify African and emerging‑economy perspectives.
Participants (especially from finance, industry and philanthropy) will seek to bring more real‑world deployers (retail, energy, manufacturing) into future summit panels.
Philanthropic bodies (e.g., MacArthur Foundation) will pursue collaborations with frontier labs and local research institutions to support responsible AI development.
A call for developing concrete impact‑measurement metrics for AI deployments was made, to be pursued before the next summit.
Unresolved issues
Specific mechanisms and enforcement structures for a global AI compact remain undefined.
Detailed sustainable OPEX/revenue models for AI deployments in sectors such as health, education and agriculture were discussed but not concretised.
How to achieve practical regulatory harmonisation across jurisdictions without stifling innovation remains an open question.
The exact process for scaling AI responsibly while ensuring trust and managing risk at the global level was not fully resolved.
Further clarification is needed on how civil‑society organisations will be funded and integrated into AI governance frameworks.
Suggested compromises
Adopt shared, non‑negotiable global standards while allowing nations to contextualise them for specific use‑cases.
Make regulation adaptive and evidence‑based, built around deployed use‑cases rather than imposed top‑down.
View regulation and innovation as complementary; use governance frameworks to drive further AI breakthroughs.
Balance the need for rapid diffusion with responsible, financially viable deployment models that include capacity‑building components.
Thought Provoking Comments
Rather than try to focus more on regulating, we’d rather figure out where we see AI creating the biggest benefits and gains for society, and then build regulations that are specific to those use‑cases. Our regulatory posture is adaptive and evidence‑based, not an abstract framework.
She reframes AI governance from a top‑down, one‑size‑fits‑all model to a use‑case driven, adaptive approach, highlighting how regulation can evolve alongside deployment.
This comment set the foundation for the discussion on how Rwanda balances policy and adoption. It prompted follow‑up questions about global standards versus national flexibility and influenced other speakers to stress context‑specific risk assessment and the need for adaptable frameworks.
Speaker: Paula Ingabire (Minister of ICT and Innovation, Rwanda)
I don’t think the hardest questions in this field are technical right now; they are human and institutional issues. The real challenge is making the technology useful to real organisations and engendering trust so it can be widely adopted.
Lyons shifts the focus from technical breakthroughs to societal, trust, and institutional challenges, arguing that the frontier is now about practical, trustworthy deployment.
Her point redirected the conversation from abstract policy to concrete adoption hurdles, leading the panel to explore trust, risk management, and the practicalities of scaling AI responsibly.
Speaker: Terah Lyons (Managing Director, Global Head of AI and Data Policy, JPMorgan Chase)
I believe a global compact is possible, but it has to reflect different cultural, linguistic and contextual realities. We need shared non‑negotiable standards, with room for nations to adapt them to the specific problems they are solving.
She acknowledges the desirability of a universal framework while emphasizing the necessity of flexibility, bridging the gap between global governance aspirations and national sovereignty.
This answer opened a nuanced debate on how universal norms can coexist with local adaptation, influencing later remarks about regulatory harmonisation and data sovereignty.
Speaker: Paula Ingabire (Minister of ICT and Innovation, Rwanda)
We need to make AI work for humans and put humans at the centre, not treat AI as something magical. A stable regulatory regime that serves people is essential, otherwise we advance technology for its own sake.
Palfrey foregrounds a human‑centric ethic for AI, linking philanthropy, policy and societal outcomes, and warning against technology‑driven hype.
His human‑first framing reinforced the earlier adaptive‑regulation theme and steered the discussion toward the role of civil society and philanthropy in shaping responsible AI deployment.
Speaker: John Palfrey (President, MacArthur Foundation)
We really need regulatory harmonisation to the extent possible so that there is consistency of rules across borders. A global baseline would give operators clarity and enable responsible scaling across jurisdictions.
Lyons highlights the practical necessity of cross‑border regulatory alignment for multinational AI deployment, connecting governance with operational scalability.
This comment linked Rwanda’s data‑sovereignty efforts with the broader need for international rule‑making, prompting the panel to consider how global standards can support large‑scale, cross‑jurisdictional AI use.
Speaker: Terah Lyons (JPMorgan Chase)
I would like to see more deployers from retail, energy, manufacturing and other parts of the real economy sitting on panels like this, so we hear how AI delivers value to everyday customers and citizens.
She calls for broader stakeholder representation beyond finance and government, emphasizing the importance of voices from the ‘real economy’ in shaping AI policy.
This suggestion broadened the agenda for future summits, resonating with Paula’s invitation to host the next meeting in Kigali and reinforcing the need for South‑South and multi‑sector participation.
Speaker: Terah Lyons (JPMorgan Chase)
By design we are building a national data hub and have already put in place a data protection and privacy law. We are tackling data‑sovereignty proactively, not waiting for a crisis.
She demonstrates a proactive, design‑by‑default approach to data governance, illustrating how Rwanda integrates sovereignty concerns into AI rollout.
This concrete example of pre‑emptive regulation gave weight to the earlier abstract discussion on adaptive policy, and it was referenced later when participants talked about risk‑by‑use‑case and the importance of guardrails.
Speaker: Paula Ingabire (Minister of ICT and Innovation, Rwanda)
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from high‑level aspirations to concrete, actionable ideas. Paula Ingabire’s advocacy for adaptive, use‑case‑driven regulation and proactive data sovereignty set the tone for a pragmatic governance narrative. Terah Lyons’ emphasis on human and institutional challenges, together with her calls for regulatory harmonisation and broader stakeholder representation, redirected the focus toward trust, scalability, and inclusive policymaking. John Palfrey’s human‑centric framing reinforced the ethical underpinnings of these arguments, while the dialogue on a flexible global compact highlighted the tension between universal standards and national contexts. Collectively, these comments created turning points that deepened the analysis, introduced new dimensions (trust, cross‑border consistency, South‑South cooperation), and shaped a forward‑looking agenda for future summits.

Follow-up Questions
Is a global AI compact feasible, and what shared standards and contextual adaptations are needed for national jurisdictions?
Understanding the possibility of a global compact is crucial for coordinated risk management and governance across countries.
Speaker: Rudra Chaudhry (asked to Paula Ingabire)
How can adaptive, use‑case‑specific regulatory frameworks be designed and implemented effectively?
Rwanda’s approach of building regulations around specific AI deployments suggests a need for research on best practices for adaptive regulation.
Speaker: Paula Ingabire
What concrete guardrails and technical measures are required to ensure data sovereignty and privacy in national data hubs?
While Rwanda has a data protection law, details on enforcement and technical safeguards remain unclear and need further study.
Speaker: Paula Ingabire
What sustainable OPEX or revenue models can support large‑scale, beneficial AI deployments in developing economies?
Identifying financially viable models is essential for long‑term AI adoption beyond pilot projects.
Speaker: Rudra Chaudhry (asked to Paula Ingabire)
How can the impact of AI adoption be quantitatively measured and reported across sectors and regions?
Quantifying impact would enable better assessment of AI’s benefits and guide future investments.
Speaker: Paula Ingabire
What steps are needed to achieve regulatory harmonization for AI across borders while respecting sovereign AI initiatives?
Cross‑border consistency can reduce compliance burdens for multinational operators and facilitate responsible scaling.
Speaker: Terah Lyons
How can future summit panels include a broader range of AI deployers (e.g., retail, energy, manufacturing) to reflect real‑economy perspectives?
Incorporating diverse industry voices will enrich policy discussions with practical deployment insights.
Speaker: Terah Lyons
What models of partnership between philanthropy and frontier AI labs can accelerate responsible AI deployment in low‑resource settings?
Exploring collaborative frameworks can leverage philanthropic resources to drive innovation while ensuring governance.
Speaker: John Palfrey
What mechanisms can strengthen South‑South cooperation for AI governance, capacity building, and deployment?
Facilitating collaboration among emerging economies can accelerate inclusive AI adoption and share best practices.
Speaker: Paula Ingabire
What effective capacity‑building strategies can equip Rwanda’s youth with AI skills to develop and maintain local solutions?
Building a skilled domestic workforce is critical for sustainable AI ecosystems and reduces reliance on external vendors.
Speaker: Paula Ingabire
How can risk assessment frameworks balance use‑case‑specific risks with overarching ethical standards?
Developing nuanced risk models can prevent over‑broad regulation while protecting against specific harms.
Speaker: Paula Ingabire & Terah Lyons
What are citizens’ perceptions of AI‑driven public services, and how do trust and perceived value influence adoption?
Understanding public sentiment is vital for designing AI solutions that are accepted and widely used.
Speaker: Paula Ingabire

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.