Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion
20 Feb 2026 13:00h - 14:00h
Session transcript
Thank you so much, Your Excellency, Ebba Busch, for your valuable insights and for elevating the summit. And it's really interesting to listen to the perspectives of countries like Sweden, because when we talk of AI for all and global cooperation, the role of each and every country becomes very, very important. Ladies and gentlemen, before I move on, I need to announce that there's a RuPay card which we found. I don't know how much money is on it, but if you've lost this card, kindly come to me and collect it. Thank you. And ladies and gentlemen, now we move to the next panel discussion, which is on adoption and acceleration of artificial intelligence.
The panelists joining us represent some of the most thoughtful voices on how AI is being built and adopted around the world. Mr. John Palfrey is the president of the John D. and Catherine T. MacArthur Foundation, one of the world's most influential philanthropies, where he has championed the idea that technology must serve the public interest. His perspective on how AI can be deployed equitably, not just efficiently, is essential to the conversation. Ms. Terah Lyons is the managing director and global head of AI and data policy at JPMorgan Chase. At one of the world's largest financial institutions, she is navigating the frontier where AI meets regulation, risk and responsible deployment, ensuring that AI in finance is not just powerful, but trustworthy.
Her Excellency Paula Ingabire is the minister of ICT and innovation for the government of Rwanda. Under her leadership, Rwanda has emerged as one of Africa's most ambitious digital economies, proving that visionary governance can leapfrog traditional development pathways. And we also have Mr. Stephen Bird as a panelist, the global head of thematic research at Morgan Stanley, bringing the investor's lens to the question of which AI bets are real and which are hype. This discussion will be moderated by Mr. Rudra Chaudhuri, Vice President of the Observer Research Foundation. Ladies and gentlemen, please join me in welcoming Mr. John Palfrey, Ms. Terah Lyons, Her Excellency Paula Ingabire, and Mr. Rudra Chaudhuri. Please kindly come to the stage for this very interesting conversation, a panel on adoption and acceleration of AI.
Mr. Bird will be joining us very soon. Thank you.
All right. Hi, everyone. There's a good bit of distance between me and the panelists, which might be a good thing. We'll see. We've got about 25 minutes, so I'm going to keep it quite swift. The general framing of the panel is policy on the one side, adoption on the other. And I wonder if that's actually the case. Yesterday in the inaugural, the prime minister made it very clear that adoption is a huge opportunity for India and other parts of the global south, but we have to do it responsibly. President Macron made a very similar pitch in his inaugural speech. And I want to start with that framing, and I want to come to you, Minister. Rwanda is a fascinating country in general.
But you're particularly fascinating on the African continent because you were way ahead of the AI curve in a sense. You invested in a startup ecosystem. You were looking at scale before many of us thought of use case scales. Give us a sense of how Rwanda manages these minefields between governance and policy on the one side and adoption at population scale on the other.
Thank you very much, Rudra, and great to see you all. I think for us, the decision has always been clear around how we leverage technology as a country to drive socioeconomic development. And so with AI, like many other technologies that we've experimented with as a country, we took the same posture. The idea was figuring out how we leverage this particular technology to address societal challenges. And there were certain trade-offs that we had to make. When it comes to governance, the posture was that, rather than focus on regulating, we'd rather figure out where we see AI creating the biggest benefits and gains for society. And then we're able to build regulations according to the use cases that we're implementing.
And so the regulatory posture that we take then is more adaptive. And it's one that's evidence-based, because we're already building and using those use cases today. And so we're able to determine what kind of regulations are needed, and they're very specific to the problems that we are solving, as opposed to trying to create a very abstract regulatory framework, which may not necessarily address whatever risks and concerns we foresee. The second one has always been partnerships, because that's been key. The level of digital development that we've achieved as a country is thanks to the various partners that we've been able to attract into Rwanda. But we also look at partnerships very closely to determine how we make sure they are helping us to build capacity.
So, for example, we're not going to acquire a foreign solution, invite them to train on our data and just leave us with an application. We want them to train our people, co-develop with our people, so that at least we have the skill set and the mastery of what we're trying to deploy, which will then create that closed loop around the regulatory environment that we put in place. And last, I think it's a conversation that we've had throughout this week around sovereignty, thinking about data sovereignty. By design, we're building our national data hub, and we're really making sure we understand what guardrails we put in place.
We don't want to wait for a crisis to start worrying about who is using our data and what they are accessing it for. And so we started by putting in place the data protection and privacy law that governs how you collect, use, and process data. And that has been the foundation through which we can then ensure that everything we do from a data sovereignty perspective, we're doing by design.
So I'm going to come back to the question on the benefits of AI for all of you, and for you, Minister, in a minute. You know, this entire summit process started with Bletchley, where I think the general philosophy was: can we come to some kind of a global compact when it comes to risk and risk aversion, when it comes to early warning systems? The institutional outcome was the AI safety institutes that were built out. Can I ask a challenging question? From your perspective, is a global compact on something like AI actually possible? Or are there norms that we should generally be thinking about and fitting into our national jurisdictions?
So I believe a global compact is possible. However, it has to reflect the different contexts, cultural, linguistic, everything. And so to a certain extent, what you're looking at is what are some of those shared standards that we all subscribe to as countries, which are non-negotiables for everyone that is building and deploying AI products and solutions. And then, obviously, you get to contextualize it to whatever problems you're solving for. And so, again, it's going to come back to what nations are deploying AI to solve for, and how we make sure that these standards are reflective of what we're looking to adopt through the global compact.
Terah, if I could come to you. You're leading AI at J.P. Morgan, and you were in the Obama administration, in the Office of Science and Technology Policy, well before the AI wave hit us, although people have been working on AI for three decades now. Before I come to the immediate, take us back to the second term of the Obama administration. Give us a sense of how you were thinking about AI then.
Well, I would say that era was the first in which global governments started considering AI policy questions at all. And honestly, a lot of the same questions were being asked then as are being asked now. The question of global governance that the minister just spoke to was, I think, as top of mind then as it is today. Questions of standards generation and interoperability were certainly part of the conversation. Issues of fairness, transparency, bias mitigation, localization, and other questions were all very much germane. So in many respects, the field has completely transformed, especially from a commercial perspective, given the level of investment that we're seeing globally in the last five years especially. But in many other respects, the foundational questions remain the same ones that policymakers were considering over 10 years ago.
And those questions, I think, are applicable in a lot of different directions. One of the big differences in the current moment is that I really feel like we've moved from an era where these conversations were more theoretical to an era in which they are much more applied, made much more real by the questions being asked by organizations like ours, for example, as AI deploying entities. The issues of applied AI organizations are really where the rubber meets the road when it comes to these governance issues that we're talking about from the stage and that policymakers have been considering for the last decade.
So if I talk to most people who've been to the first three summits, and I talk to them about this summit, there's a lot of energy, a lot of discussion on use cases and diffusion, getting this out to humanity, getting it out to people. And now we have to work downstream and upstream and figure out how best to do the diffusion piece. Let me ask you a question: you've been here for three or four days for the summit. What's really struck you in terms of the diffusion argument, the adoption argument? And then, if you put your policy and regulatory lens to it, what are you thinking right now?
Well, I actually don't think the hardest questions in this field are technical right now. Maybe this is a controversial answer, but I'll try it on for size here. I think they are questions of human issues and institutional issues. And I hear that no matter where I am, talking to clients and other large enterprises, speaking to governments globally, whether in New York, California, Brussels, or Delhi here this week. The hard problem really isn't frontier advancement right now. It's actually making this technology useful to real organizations and making it helpful to real people in their everyday lives. And core to that set of issues are the governance questions that have been so top of mind here at the summit, I think.
And questions of how we scale responsibly, how we engender trust in the technology, because in order for AI to be useful, it has to be applied. And in order for it to be applied and widely adopted, it needs to be trusted. And these, I think, are cornerstones of what we need to be thinking about when we're actually thinking about the frontier of AI in many ways.
John, you run one of the most important organizations in the world, one of the largest philanthropic organizations in the world. If there are students or professors in the audience, you should corner John afterwards for all sorts of things. But you've also got a very strong legal background. So the same question I put to Terah: when you think of diffusion, when you think of impact use cases, and you think of what Paula said, which is that we have to be adaptive about the regulatory architecture, where are you at?
Rudra, thank you. And first, let me, on behalf of the MacArthur Foundation, congratulate our hosts in India. What a wonderful global stage to be on for this important conversation. The point of view that I come from, as a law professor and as leader of a philanthropy, the MacArthur Foundation, is of course that we need to make the technology, the AI, work for humans and to put humans at the center. And I've been delighted on this main stage and throughout the summit to hear that as the focus here in India and, of course, around the world. And I think the way to do that is not to treat the AI as something magical and separate, but rather connected to all of the things that we're trying to do.
So whether it's lifting people out of poverty, improving health care, or a bank providing capital as needed, we need a stable regulatory regime that makes that possible and puts humans at the center, rather than just seeking to advance the technology at all costs and treating it as something magical, rather than as forms of mathematics and science that we have been able, through human history, to regulate so that they serve humans, not their own sake.
From your perspective in philanthropy, but also from the perspective of peers that you talk to, is the current moment, with the verve for adoption, for getting this out to people, changing the way you're thinking about grantees, partners, and the philosophical way in which you're thinking about releasing money?
Yes and no. I think there are some constants in philanthropy that are very important, and maybe more important than ever in this moment. You think about the amount of capital that is flowing towards AI and its development, mostly of course by the private sector, sometimes by sovereign wealth funds and so forth. What we need to ensure is that civil society has a voice. And of course, again, I credit our hosts for including civil society in this conversation and continuing to do that from Bletchley to today and onward. And the civil society world doesn't come for free. Somebody has to pay for it, right?
And philanthropy has historically been the source of funding for that. And I'm very impressed by the Indian philanthropic environment that is developing. We're excited to partner with the Center for Exponential Change and others who are developing homegrown philanthropy as well as ideas that are coming from India to the rest of the world. But if we don't invest in civil society, there will be many, many fewer voices able to bring the kind of sensibility that we're talking about to the world. It doesn't come without actually thinking about it carefully. So we are thinking about long-term capital for academia and for organizations. And I think, of course, of the Observer Research Foundation, which you're involved in, and the Partnership on AI, for which Terah was the founding ED.
These organizations, along with academia, are going to be able to bring that kind of sensibility, supported in a stable, long-term way by philanthropy. We've been able with colleagues to raise half a billion dollars for Humanity AI, an effort in the US, and close to that amount for Current AI, led by Martin Tisné, and the AI Collaborative for global efforts. So we're over a billion dollars in commitments between these two efforts, but we have to keep going.
Minister, let me ask you a question on the benefits of AI in Rwanda, which you talked about. Can you open that box up for us a little bit? One of the arguments, and there are a lot of arguments, is about how this stuff is going to pay for itself. Use cases and diffusion are all great, but is there an opex model or a revenue model for beneficial deployment? It needs to be sustainable over a period of time. And there's another argument which says that when people actually start using things that are useful, and they see value in it, the rest will follow. What are your citizens in Rwanda feeling in terms of value?
So I'll differ a little bit, because I think value cannot just be seen in monetary terms and how we are going to have the return on investment, how we sustain this financially. It's a good metric to use, for sure. But when I look at the use cases that we've already identified, one, they speak to our government's decision to make sure that we are delivering better services to our citizens. So whether it's healthcare, whether it's making sure that we're giving quality education to our students in Rwanda, whether it's making sure that a majority of our population, which is made up of farmers, have access to the right data and extension services that then ensure they have growth and productivity, which will essentially also translate into them being able to have more income, getting out of poverty and building wealth for their families. But a starting point for us has always been: what problem are we trying to solve? And is AI the best way to solve for this? Or is it a combination of AI and many other technologies that can solve for that? We're a country that has been on a journey of digital transformation for more than 20 years, and so we've already started to see the benefit of that. So when I look at the education use cases, we are ranging from being able to facilitate teachers with assessment tools that can help with faster and better assessment.
We're looking at AI solutions that support better lesson planning. And if you're able to have better lesson planning, you're able to deliver quality education and make sure that it's similar across the country; those are benefits that one can easily quantify. For the health sector, we're looking at our frontline health workers, the community health workers delivering primary health care, giving them decision support tools that enable them to have better diagnosis and, at the same time, to reduce the burden on the health care system. So we're looking at AI solutions that help reduce the backlog of referrals in the health care system. Essentially, that's also going to translate into less wastage, into better care, and even into bringing down the cost of care per person, if you look at it that way.
So our people are very optimistic. Obviously, like in any other country, everyone has to wonder: okay, there's lots of data that you're going to be using, and a lot of it is going to be personal data. What guardrails are we putting in place? We have the data protection and privacy law that I talked about earlier. But the most important thing people need to know is how we are building capacity in-country, so that a lot of these things are not solutions we are acquiring from elsewhere. More than 70% of our population are in the youth bracket. These are already people who are very excited about technology, and if you train them the right way, they'll also be part of building these solutions.
And so I think there's a lot of optimism about what it can do. That doesn't mean we're shying away from the risks. That's why we're doing everything by design, use case by use case, trying to understand, for each use case that we deploy, what risks could be unique to that particular application and how we're addressing them.
I think that's fantastic. The way you're thinking about disaggregated risk, rather than just one big banner sticker on top, is perhaps the way we all need to go: how is this use case risky, but also how is it actually useful and adding value in different ways. Keeping an eye on the clock, Terah, I just want to talk a little bit about deployment and scale. We all love diffusion; we want this stuff out to everybody. How do we get it right when it comes to deployment and scale? None of this is going to be easy. It's going to require some kind of sustainable financial model, a lot of time, and a lot of work across the board and across borders. As someone who works on scale and deployment, give us a viewpoint.
Sure. And maybe just a few words on the sense of scale in our context at JPMorgan Chase. We operate in over 100 countries globally. We spend close to $20 billion a year on technology. And we are investing really, really deeply in AI. So, to answer your question, one of the paradigms from which we come to this issue is certainly the unique risk management capabilities of finance, and of regulated banks specifically. We've been using AI technologies at the use case level for over 10 years, starting first with more traditional analytic techniques, moving into the era of machine learning models, now introducing large language models, and looking in the direction of agentic capabilities and beyond.
And I think this underscores one of the points that John raised earlier, which is important here. The risk management posture, and considering what effective governance and controls look like in order to scale in the way that you're describing, is something we have built muscles for before. We know how to do this pretty well. And one of the superpowers, I think, that we have is a sector-specific lens on regulation and oversight. I think that also speaks to some of the great points the minister just made with respect to evaluating risk at the use case level: making this conversation about risk management grounded and practical in ways that address the real ways in which AI is getting deployed, at the level of individual use cases.
And then making rules of the road that are applicable to that specific context. I think that's really crucial. The other piece of the equation, and this speaks to the point I made at the top about our global operations, is that we really need regulatory harmonization to the extent possible, in order to allow for consistency of rules across borders. And there's been a lot of really rich conversation this week at the summit about sovereign AI as part of the global governance conversation. That has its own unique and important goals, and I think it needs to be held in the same space as a realization that we also need to be considering what a global baseline looks like, and what clarity enables for global operators so that they can really get responsibility at scale right.
I’m going to ask you one question before I come back. What would you like to see going ahead? From this summit, the baton has been handed to Switzerland, and from Switzerland, there’s possibly another likely candidate. But what would you like this summit process to do in an institutional setting, perhaps, to keep these conversations going?
Well, I think that John's earlier point about the need for multi-stakeholder diversity is really key. I think that looking across sectors, government, civil society, and industry is deeply important, and making sure all those voices are at the table is critical. A sub-point there, from my perspective, is that I would like to see more deployers sitting in seats like this one. We are one of the largest financial institutions in the world, and we use AI in really, really deep ways, as I mentioned before. But I want to see folks from retail, energy, I want to see people from manufacturing, I want to see folks who really represent the real economy sitting on stages like this one next year in Switzerland and speaking to how we deliver real value in the hands of customers and citizens every day using these technologies.
And John, very quickly to you, I'm going to ask you a cheeky question. The kind of philanthropy I think we require now in AI is for MacArthur to be working with a frontier lab that's working with a local lab that's deploying. Is that in your imagination?
Sure, Rudra, thank you. And I think it’s an exciting idea of going from here to Switzerland and imagining what could come next. And I think what could come next for philanthropy is absolutely an important piece of the story. And I think if you think about the way in which technology works, it often begets innovation in other sectors. So I think what’s exciting is that the technology itself can inform the way we practice philanthropy in ways you suggest, but it also can figure out how to regulate better. And it turns out, of course, regulation is not just against innovation. In fact, regulation sometimes prompts further innovation, and then this wonderful cycle can continue. So my sort of key point on this would be to say, let’s not have a false binary.
Either you regulate or you innovate. Let's figure out the way that regulation and governance drive innovation. And I think that's an exciting idea, not just for governments, as the minister said, or for banks. It's true for philanthropy, too, which can improve its work a little bit along the way.
No, bang on. And Minister, last word to you. We would love to see the summit hosted in Kigali. From your vantage point, and a lot of this is about South-South cooperation, a lot of it has been about global cooperation, what would you like to see between now and Switzerland? What can we all actively do to make this more palpable by the time we get to Zurich or Geneva or Davos or wherever it is?
I think it's great that, since we started with the Bletchley Park convenings, we've gone from looking at safety and governance to now talking about impact, execution, implementation. It would be great if we start to quantify what that impact has looked like, and also create a way for these exchanges to truly happen. And I couldn't agree more: if we had more of the people that are building and deploying some of these solutions here, we could also have some of the communities that have been affected, positively or negatively, so we can hear their voices as large-scale adoption of this technology happens across the world, taking this conversation into consideration. And I think the last one for me is to make sure we have more voices coming from the African continent and elsewhere, so that we can sort of balance between where we are seeing the biggest impact. Is it in emerging economies? Is it in the middle economies or the big ones? And what could be the nuances as we continue to deploy massively? And I think to do that, we need to take this to the African continent sooner rather than later. And we're happy to host you.
There you are. Good offer there. Minister, John, Terah, thank you so much. Thank you for being with us at the Impact Summit. And back to the organizers. Thank you.
Paula Ingabire
Adaptive, use‑case‑specific regulation
Explanation
Paula argues that AI regulation should be flexible and tailored to each specific use case, allowing rules to address the unique risks and needs of individual applications rather than imposing a one‑size‑fits‑all framework.
Evidence
“And so the regulatory posture that we take then is more adaptive” [1]. “And then we’re able to build regulations according to the use cases that we’re implementing” [2]. “And so we’re able to determine what kind of regulations are needed, and they’re very specific to the problems that we are solving” [3].
Major discussion point
Governance, Regulation, and Policy Frameworks for AI
Topics
Artificial intelligence | The enabling environment for digital development
Sustainable financial models and citizen‑perceived value
Explanation
She emphasizes that AI’s value cannot be judged solely by monetary returns; sustainable financing must reflect the tangible benefits perceived by citizens and build local capacity.
Evidence
“value cannot just be seen in monetary terms and how are we going to have the return on investment?” [67]. “how are we building capacity in-country?” [60].
Major discussion point
Adoption, Diffusion, and Sustainable Deployment of AI
Topics
Financial mechanisms | Capacity development | Social and economic development
Partnerships must build local capacity and co‑development
Explanation
Paula stresses that partnerships should focus on training and co‑developing AI solutions with local talent to ensure ownership and a closed loop between deployment and regulation.
Evidence
“We want them to be able to train our people, co-develop this with our people so that at least we have the skill set and the mastery of what we’re trying to deploy, which will then create that closed loop around the regulatory environment that we put in place” [18].
Major discussion point
Adoption, Diffusion, and Sustainable Deployment of AI
Topics
Capacity development | Artificial intelligence
Emphasize South‑South cooperation and increase African representation
Explanation
She calls for greater African participation and South‑South collaboration to ensure AI standards and benefits reflect diverse cultural and contextual realities.
Evidence
“more voices coming from the African continent and elsewhere, so that we can sort of balance between where are we seeing the biggest impact?” [91]. “And I think to do that, we need to take this to the African continent sooner rather than later” [100].
Major discussion point
Global Cooperation and Multi‑Stakeholder Engagement
Topics
Artificial intelligence | Capacity development
Concrete AI use cases in health, education, and agriculture improve services
Explanation
Paula highlights specific AI applications that are already enhancing frontline health workers, teachers, and agricultural services, demonstrating tangible benefits of AI deployment.
Evidence
“For the health sector, we’re looking at our frontline health workers… giving them decision support tools that enable them to have better diagnosis” [84]. “And so when I look at the education use cases, we are ranging from being able to facilitate teachers with assessment tools that can help with faster and better assessment” [108].
Major discussion point
Impact, Use Cases, and Measuring Benefits
Topics
Social and economic development | Artificial intelligence
Focus on solving specific problems first, not merely ROI
Explanation
She argues that AI initiatives should start by identifying the concrete problem to solve, rather than being driven primarily by return‑on‑investment calculations.
Evidence
“what problem are we trying to solve?” [66]. “value cannot just be seen in monetary terms and how are we going to have the return on investment?” [67].
Major discussion point
Impact, Use Cases, and Measuring Benefits
Topics
Artificial intelligence | Social and economic development
John Palfrey
Stable, human‑centered regulatory regime
Explanation
John contends that a stable regulatory framework is needed that places humans at the centre, ensuring AI serves societal needs rather than being treated as a magical, self‑driven force.
Evidence
“we need a stable regulatory regime that makes that possible and puts humans at the center rather than just seeking to advance the technology at all costs and then treating it as something magical” [16].
Major discussion point
Governance, Regulation, and Policy Frameworks for AI
Topics
Artificial intelligence | The enabling environment for digital development
Regulation can drive innovation
Explanation
He points out that thoughtful regulation can actually stimulate innovation, creating a virtuous cycle between governance and technological advancement.
Evidence
“Let’s figure out the way that the regulation and the governance drives innovation” [6]. “In fact, regulation sometimes prompts further innovation, and then this wonderful cycle can continue” [12].
Major discussion point
Governance, Regulation, and Policy Frameworks for AI
Topics
Artificial intelligence | The enabling environment for digital development
Philanthropy should fund civil‑society voices and ensure long‑term support
Explanation
John stresses that philanthropy must back civil‑society participation and provide stable, long‑term capital to keep non‑governmental perspectives in AI governance alive.
Evidence
“What we need to ensure is that civil society has a voice” [57]. “These organizations, along with academia, are going to be able to bring the kind of in a stable long‑term way by philanthropy” [19]. “I think there are some constants in philanthropy that are very important” [68].
Major discussion point
Global Cooperation and Multi‑Stakeholder Engagement
Topics
Financial mechanisms | Artificial intelligence | Capacity development
Philanthropic investment to drive impact‑oriented AI initiatives
Explanation
He notes that large philanthropic capital has already been mobilized to support AI projects aimed at broad human benefit, illustrating the role of philanthropy in scaling impact‑focused AI.
Evidence
“We’ve been able with colleagues to raise half a billion dollars for humanity AI and effort in the US, close to that amount for current AI led by Martin Tisnay and AI Collaborative for global efforts” [35].
Major discussion point
Impact, Use Cases, and Measuring Benefits
Topics
Financial mechanisms | Artificial intelligence
Necessity to quantify impact and track outcomes for accountability
Explanation
John argues that AI initiatives must be measured and monitored, using data to ensure accountability and to inform future philanthropic and policy decisions.
Evidence
“the technology itself can inform the way we practice philanthropy in ways you suggest, but it also can figure out how to regulate better” [99].
Major discussion point
Impact, Use Cases, and Measuring Benefits
Topics
Monitoring and measurement | Artificial intelligence
Rudra Chaudhuri
Feasibility of a global AI compact and need for shared standards
Explanation
Rudra questions whether a global AI compact is possible and stresses the importance of establishing shared, non‑negotiable standards that all countries can adopt.
Evidence
“Is, from your perspective, is a global compact on something like AI actually possible?” [23]. “can we come to some kind of a global compact when it comes to risk and risk aversion” [30].
Major discussion point
Governance, Regulation, and Policy Frameworks for AI
Topics
Artificial intelligence | The enabling environment for digital development
Responsible adoption requires balancing policy and diffusion
Explanation
He highlights that effective AI deployment needs a balance between regulatory policy and large‑scale diffusion, including sustainable financial models to support rollout.
Evidence
“The general panel is about policy on the one side, adoption on the other” [52]. “…we have to work downstream and upstream and figure out how best to do the diffusion piece… it will require a sustainable financial model” [53].
Major discussion point
Adoption, Diffusion, and Sustainable Deployment of AI
Topics
Artificial intelligence | Financial mechanisms
Institutional continuity of the summit process to keep dialogue alive
Explanation
Rudra calls for the summit to maintain an institutional presence so that the conversation on AI governance and deployment continues beyond individual events.
Evidence
“what would you like this summit process to do in an institutional setting, perhaps, to keep these conversations going?” [105].
Major discussion point
Global Cooperation and Multi‑Stakeholder Engagement
Topics
Artificial intelligence | Follow‑up and review
Sustainability over time for AI initiatives
Explanation
He notes that AI policies and deployment models must be designed to be sustainable over the long term, not just short‑term pilots.
Evidence
“It needs to be sustainable over a period of time” [17].
Major discussion point
Adoption, Diffusion, and Sustainable Deployment of AI
Topics
Financial mechanisms | Artificial intelligence
Terah Lyons
Call for regulatory harmonization to enable cross‑border scaling
Explanation
Terah argues that harmonized AI regulations across jurisdictions are essential to allow consistent, scalable deployment of AI solutions internationally.
Evidence
“We really need regulatory harmonization to the extent possible in order to allow for consistency of rules across borders” [10].
Major discussion point
Governance, Regulation, and Policy Frameworks for AI
Topics
Artificial intelligence | The enabling environment for digital development
Early AI policy foundations show enduring questions
Explanation
She points out that many of the policy questions raised a decade ago remain relevant today, indicating that foundational AI governance issues persist over time.
Evidence
“foundational questions remain the same that policymakers were considering over 10 years ago” [37]. “a lot of the same questions were being asked then as are being asked now” [41].
Major discussion point
Governance, Regulation, and Policy Frameworks for AI
Topics
Artificial intelligence | The enabling environment for digital development
Hardest challenges are human/institutional, not technical; trust is essential
Explanation
Terah emphasizes that the biggest barriers to AI adoption are human and institutional factors, and that building trust is critical for widespread use.
Evidence
“how we scale responsibly, how we engender trust in the technology” [32].
Major discussion point
Adoption, Diffusion, and Sustainable Deployment of AI
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Need broader sector‑specific deployer representation
Explanation
She calls for more representation from diverse industry sectors on AI governance panels to ensure that deployment decisions reflect real‑world economic actors.
Evidence
“I would like to see more deployers sitting in seats like this one” [82].
Major discussion point
Adoption, Diffusion, and Sustainable Deployment of AI
Topics
Artificial intelligence | The digital economy
Sector‑specific lens on regulation and oversight
Explanation
Terah notes that a sector‑specific approach to AI regulation is a strategic advantage, allowing tailored oversight that matches the nuances of each industry.
Evidence
“one of the superpowers, I think, that we have is sector‑specific lens on regulation and oversight” [8].
Major discussion point
Governance, Regulation, and Policy Frameworks for AI
Topics
Artificial intelligence | The enabling environment for digital development
Announcer
“AI for all” demands each country’s perspective
Explanation
The Announcer stresses that global AI discussions must incorporate the viewpoints of every nation, ensuring that AI benefits are equitably shared worldwide.
Evidence
“And it’s really interesting to listen to the perspectives of countries like Sweden, because when we talk of AI for all and global cooperation, the role of each and every country becomes very, very important” [34].
Major discussion point
Global Cooperation and Multi‑Stakeholder Engagement
Topics
Artificial intelligence
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.