Policymaker’s Guide to International AI Safety Coordination

Session transcript

Nicolas Miailhe

…that the race towards artificial intelligence is no longer a theoretical pursuit. As billions, and now maybe trillions, of dollars are deployed to push the frontier of artificial intelligence, the technology is advancing rapidly. And safety is not keeping pace with it. There are wonderful opportunities on the other side of this quest. There are also big risks. And so that’s the purpose, that’s the reason AI Safety Connect was founded. AI Safety Connect is there to help shape the frontier AI safety and security agenda towards what I would frame as commonsensical AI risk management. AI Safety Connect has been founded to encourage global majority engagement in frontier AI safety. And AI Safety Connect has been founded to showcase concrete governance coordination mechanisms, tools, and solutions.

So how do we do this? We convene at each AI summit. Last year we started in Paris, this year we are in India, and next year we’re going to be in Switzerland. But we also convene at the UN General Assembly. We need a faster tempo for these safety discussions, so every six months we have this global convening. We also do capacity building, and we also do trust-building exercises, at times behind closed doors. Well, this week in New Delhi has been an intense one, an impactful one. On Tuesday we had a full day of panels, conferences, solution demonstrations, and closed-door workshop discussions on some specific nuts to crack to advance AI safety. We, for example, had the privilege of hosting Prime Minister Dick Schoof from the Netherlands on stage to deliver a special address on the role of top leadership in advancing AI safety.

We also engage with industry and with academia, in India and abroad. So it’s been an extremely busy week. Beside our main event, we had the closed-door discussions that I was mentioning, and yesterday and today these closed-door scientific dialogues, whose results we’re going to publish soon, that brought together senior industry leaders to discuss shared responsibility for AI safety. Well, obviously, none of this would happen without partnership. And we want to thank our co-hosts, the International Association for Safe and Ethical AI and its director, Professor Stuart Russell, to whom I will hand over the floor in a few minutes, and the Digital Empowerment Foundation, which is anchoring us at the grassroots here with Osama Manzar, who will close the session later on.

And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures; Eileen Donahoe will moderate this panel, and we’re thankful for that. The Future of Life Institute, whose team has been supporting this effort, and the Minderoo Foundation, whose team is here as well. It’s great to have your support and we are thankful for that. So today we’re about to hear from His Excellency Mathias Cormann, Secretary-General of the OECD. We’re going to hear from Her Excellency Minister Josephine Teo, Minister for Digital Development and Information of the Government of Singapore. Thank you for your continuous support, really appreciate that. The same for Jaan Tallinn, who is an AI investor but also a founding engineer at Skype and a co-founder of the Future of Life Institute. And last but not least, we also have Minister Gobind Singh Deo, who is going to be with us from Malaysia, the Minister of Digital. Thank you, Minister. As well as Vice President Sangbu Kim for Digital and AI at the World Bank. So an extremely important conversation to have. And before we welcome you to the stage, I would like to hand over the floor to Professor Stuart Russell to say a few words and to speak about what’s happening next week in Paris. Thank you so much.

Stuart Russell

Thank you very much, Cyrus and Nico. So as Nico mentioned, the International Association for Safe and Ethical AI, or IASEAI, the world’s worst acronym, is a global, democratic, scientific and professional society. We have several thousand members and approaching 200 affiliate organizations. Our mission is to ensure that AI systems operate safely and ethically for the benefit of humanity. And as Nico mentioned, our second annual conference will take place in Paris starting on Tuesday. It’s still, I think, possible to register, but we’re already up over 1,300 people coming. It’s at UNESCO headquarters in Paris. Thank you. So achieving this mission of ensuring that AI systems operate safely and ethically is partly a technical challenge: how do we even build systems that have that property?

But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this panel is mainly about this second challenge. And I think it’s one on which global coordination is essential because the harms, whether it’s psychological damage to the next generation or loss of human control altogether, those harms cross borders. And we must coordinate to make sure that they don’t happen or they don’t originate anywhere. And it’s, I think, fitting that we are having this summit here in India, which has really, among other things, championed the idea that everyone on Earth should have a say. And so with that, I will hand over to Eileen. Thank you very much.

Nicolas Miailhe

Thank you, Stuart. So Dr. Eileen Donahoe is the founder and managing partner of Sympathico Ventures. She’s also the former U.S. Special Envoy and Coordinator for Digital Freedom and Ambassador to the UN Human Rights Council. Eileen? Let me welcome the speakers to the floor. Please, Your Excellency Mr. Mathias Cormann, Mr. Gobind Singh Deo, Ms. Josephine Teo, and Mr. Jaan Tallinn, as well as Mr. Sangbu Kim, join us on stage.

Eileen Donahoe

Okay. Given this remarkable panel and the very short time we have, let me very briefly frame our discussion and get right to our speakers. So we’re here to share views on the opportunity for policymakers to impact international AI governance. As the race towards AGI and superintelligence intensifies, AI safety advocates face a compounding challenge. The technology is advancing rapidly and being deployed with minimal guardrails, while the risk management processes that do exist are either ill-adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators. The result is an unharmonized governance landscape that fails to shape the behavioral incentives of those building and funding frontier AI. Economies, governments, and societies do not respond well to such mixed signals.

While much of the discourse on frontier AI safety has focused on the AI superpowers, there’s an urgent need for deeper international diplomacy on the most extreme risks. At this juncture, middle powers and global majority states can’t be seen as peripheral actors in this landscape. Through pooled resources, market leverage, normative influence, and regulatory innovation, they can shape the direction of global AI practices and safety. Leading from the middle may turn out to be a more powerful approach than previously anticipated. Whether or not that collective power is exercised now will determine whether international AI governance moves from the rhetorical level to real-world impact on safety. This panel will aim to identify present-day coordination gaps in global AI practice and the global market.

We will also look at the role of middle powers in international AI safety and highlight practical steps policymakers can take in the coming months to close those gaps. So to our panel. I’ll start with Secretary-General Cormann. The OECD has done remarkable work over the past decade, developing consensus on the OECD principles, providing a definition of AI systems that has resonated internationally, and playing an international role in operationalizing the Hiroshima International Code of Conduct. Along with those foundations, we now have the International AI Safety Report and the Singapore Consensus on Global AI Safety Research Priorities. With these principles, definitions, and frameworks in mind, a two-part question for you. First, what are the key lessons learned from the process of building consensus and then implementing these frameworks?

And then second, looking ahead, what’s the most critical piece of coordinated frontier AI safety infrastructure we should be building now? Some have called for an international incident response center, and we’re all curious whether you think that should be a priority and is achievable. Just some small, easy questions.

Mathias Cormann

In terms of what is the key to success, what is the most important lesson looking back: trust is built through inclusion and on the basis of objective evidence. And, you know, I think what we’ve learned over the last few years is that bringing together all the relevant actors, governments, companies, civil society, technical experts, is what we need to do. Each has a different perspective and different imperatives. Markets reward the private sector for speed, scale, and innovation, while governments must manage risk and protect the public interest without stifling progress. But a challenge, and it’s been mentioned in some of the opening remarks, a challenge for policymakers in this context is that AI is moving much faster than policy cycles have traditionally moved, which easily creates gaps between innovation, progress, and opportunity on the one hand, and the necessary oversight, mitigation, and management of risk on the other.

But all sides in this conversation do share an essential common interest, and that is to ensure that the systems being developed are trustworthy, because without public trust, in the end, even the most powerful AI tools will struggle to gain broad adoption. So that means that occasionally, and it’s not always popular with everyone, we should slow down. Occasionally we should actually pause: pause, test, monitor, audit, share information, and take the time to invest in building confidence that these systems can work as intended and respect fundamental rights. So that’s, I guess, the first point. Another critical lesson involves international consistency, and this is part of the reason why these sorts of summits are so important: to facilitate these conversations among countries and among different jurisdictions, because national priorities can vary quite widely, and there are of course fragmentation and compliance-cost related risks. At the OECD, what we’ve been doing for six decades now, across different policy areas, is to try to reduce fragmentation by achieving alignment around key principles, building shared evidence, and facilitating the necessary conversations to develop a more coherent, better coordinated approach moving forward. On AI, we’ve developed the OECD principles, first adopted in 2019 and now adhered to by 50 countries around the world; that was really the first globally recognized baseline for trustworthy AI. The OECD’s lifecycle definition of an AI system has since shaped policy frameworks from the EU AI Act to U.S. executive orders.

And we’ve had, just earlier, the meeting of the Global Partnership on AI, co-chaired by Korea and Singapore. We’ve got the OECD AI Policy Observatory, which essentially covers the broad gamut of policy approaches around the world to provide countries and industries with data and evidence on what’s being done, facilitating peer learning, and trying to take some of the politics and the rhetoric out of it and really look at the facts. Now, looking ahead, you asked a question here about what to do about the risk. The most critical piece of frontier AI safety infrastructure is coordinated transparency and incident reporting. The Hiroshima AI Process Code of Conduct and its reporting framework launched at the AI Action Summit in Paris last year.

You know, that’s a promising step, and we’ve got to continue to develop it. Since their publication, 25 organizations across nine countries have already submitted detailed reports on how they manage AI risks, offering for the first time a comparable view of developer practices across jurisdictions. The next stage is to strengthen information sharing on AI failures and near misses. The GPAI common framework for incident reporting aims to help us collectively learn from mistakes before they scale globally, and over time this could evolve into an international AI incident response center, coordinating alerts between governments and labs without exposing companies to commercial or legal penalties for reporting in good faith. Finally, we do need to scale access to practical safety tools.

With global partners, the OECD recently launched an open call for open-source safety and evaluation tools, hosted in the OECD.AI catalogue of tools and metrics, to make trustworthy AI easier to implement in practice. These are some of the initiatives that form the foundation of a more transparent, data-driven, and interoperable AI governance ecosystem.

Eileen Donahoe

Excellent. Minister Teo, a number of questions for you, but let me start with the fact that Singapore occupies a very distinctive position in the global geostrategic landscape as a pro-innovation, advanced knowledge economy with deep commercial and diplomatic ties to both the U.S. and China. As the race to AGI intensifies and bilateral tensions mount, is there a role for Singapore and other middle powers to play in bridging the coordination gap to keep scientific and safety channels open? And also, what’s the most important step middle powers can take in the next 12 months to help establish a shared minimum understanding of frontier safety?

Josephine Teo

Well, thank you very much for that question. I think there is no running away from the fact that for smaller states, and that includes Singapore, the technology that our companies and our citizens are going to rely on does not originate from our shores. So it doesn’t necessarily come within our jurisdiction, and we don’t always get to set the rules. Having said that, I do believe that we’re not without agency. It doesn’t mean that we take a step back and just let things happen to us. There are still things that we can do. One of the most important things, I think, as policymakers is for us to think about what it takes to translate what we know from science into policy.

And I wanted to just say why this is so important. In our case, as policymakers, the key questions will always be: are the policies that we make effective? And also, policies always come with trade-offs. With the question of effectiveness, there is always a need to understand what actually works, as opposed to what looks good on paper. With the question of trade-offs, it’s about understanding what we lose as a result of whatever safety measures we choose to put in place, and whether we can minimize or mitigate those losses. Now, in areas where safety is the objective, we can’t just go with gut. We can’t just go with speculation. Take, for example, my previous life, when I was working on promoting Singapore’s air hub.

And we had to deal with a question of aviation safety. We were expanding our airport. It was going to carry many more passengers in and out of the country. But we are limited by the number of runways, and in land-scarce Singapore, you can’t just snap your fingers and say, let’s build a new one. It’s a long runway, and it’s very expensive anyway. Then there is the question of what you do when you have these jumbo jets like the A380. Because each time an A380 hits the runway, it creates so much of a blast that you really need to create more distance between the A380 taking off and the next aircraft that is scheduled to take off. Now, this is not a question that the transport minister can just decide on a whim.

The air traffic management authority has to decide on its policy of how much distance is considered safe between landings, or rather between takeoffs. And to answer this question, you really need to invest in the research. You need to invest in understanding the tests. So the science is one part of it. But between science and policy, you are actually going to need a lot of time. You are going to need a lot of tests. You are going to need a lot of simulations. You need to understand whether the distances that you decide are safe work well in a thunderstorm, a tropical thunderstorm. Does it work just as well in a snowstorm? Well, we don’t have snow in Singapore.

But think about the airline that operates this. If each country that they fly into has a different safety distance, that creates some difficulty. So we therefore think that not only is there a need to invest in understanding the science, and in understanding what good testing looks like, there is also a need for us to think about what standards that will eventually be interoperable look like. Which is why we think that international efforts, the collaboration being carried forward by the OECD through the Global Partnership on AI, the AI Safety Connect effort, and also IASEAI (where is Stuart now?), those kinds of efforts, you can’t do without.

At the outset, there is likely to be a bit of fragmentation. And the trade-off of not having these conversations is that we are not even going to make advances in AI safety. I don’t think that’s a very good place for us to be in. It doesn’t give us the assurance that we can deliver to our citizens, and it does not create the foundation of trust that will eventually help us push ahead with the use of this technology on a wider scale. So that’s how we are thinking about it, Eileen. Thank you.

Eileen Donahoe

So let me turn to Minister Gobind from Malaysia, and note that under your leadership and Malaysia’s 2025 ASEAN chairmanship, Malaysia succeeded in placing AI at the center of ASEAN’s agenda by establishing the ASEAN AI Safety Network. Malaysia is now finalizing its own AI National Action Plan, and Malaysia’s AI Governance Bill is expected in Parliament in 2026. This dual-track approach of building national capacity while leading regional coordination represents a model of middle power agency that other countries are watching closely. So what lessons do you think other middle powers can draw from Malaysia’s experience? And on the ASEAN AI Safety Network, we have to note that it still has to be operationalized, and that will require sustained political will, technical capacity, and resources.

So what concrete steps must ASEAN take in the next 12 to 18 months to ensure that this isn’t just aspirational?

Gobind Singh Deo

Online fraud, for example, scams; you have deepfakes today; you have huge concerns about certain vulnerable groups that are going to be impacted, children, older folk, and so on and so forth. So this is something that stretches across the region. How do we deal with it in a coordinated way and ensure that the conversation doesn’t just stop with the government of the day, but is a conversation that extends over a period of time, with clear policies that we can actually execute? The second layer that I think we need to think about is what happens when there’s a need for execution. When we speak about risks in AI and how we’re going to govern those risks, we often talk about standards.

We often talk about regulation. We even speak about legislation at times, for areas that pose higher risks. But ultimately, it really comes back down to making sure you have an agency that can enforce it. You can have the best standards, regulations, and legislation, but if there is no institution that’s really able to implement those standards, to ensure that they are properly implemented, and also to ensure that the rules for failure to implement are enforced, then those standards, regulations, and policies are really going to be just strong on paper; they’re not going to have the impact that you need. So again, how do you build this mechanism across ASEAN, where every country strengthens itself domestically first and then moves across to the other ASEAN member states and hopes to learn from their experiences, so that we can together move ahead in this new world of AI and the threats that we anticipate in future?

Now, the third part, which is really important, is ensuring that while all this goes on (you create those policies, you have institutions that enforce them, and the discussions persist at an ASEAN level), it is also important to have the expertise looking at what comes next. We must make sure that our countries are prepared for the risks that are to come with the next generation of technology. This is important because you don’t want a situation where new technology is adopted, there are risks that come with it, and you’re not prepared. I think that’s something we want to avoid, and that’s the reason why I come back to where I started: we really need to look at building institutions that have the expertise and, of course, are able to sustain themselves as we go along, to build and deliver something that’s impactful.

Sorry, but that’s in short what we’re doing in Malaysia today.

Eileen Donahoe

Excellent. Thank you so much. Okay, let me turn to Vice President Kim and talk about the World Bank, which has been at the forefront of digital public infrastructure, helping countries leapfrog legacy systems. We note that frontier AI systems, though, are arriving in the Global South under very different conditions from previous waves of technology, and governments are under pressure to deploy AI systems quickly, often using models that haven’t been adequately tested, let alone certified for their context, languages, or risk tolerances. So how can the World Bank help Global South countries move from being passive recipients of frontier AI to active shapers of safety and reliability requirements before the systems are deployed at scale?

Sangbu Kim

Thank you. In one word: we definitely need to make our clients well prepared from scratch. When they design their AI systems, they need to design the safety architecture within the system. In general, that’s correct. But the real challenge is that nobody can really anticipate a new type of threat, and especially for some countries with low capacity, it is really hard to figure out what that will be. So, in order to tackle that kind of irony and dilemma, we need to work very closely with highly developed economies, companies, and governments, and with high-end examples, so that we can connect those good examples to the developing world. Partnership is one of the good examples of how we are helping our countries: for instance, some big tech companies run red teams that try very hard to attack their own systems in advance, fully utilizing AI.

So through that type of practice and experiment, they can learn how to prevent AI attacks in the future, which is very much possible. In this way, it is inevitable for our developing countries to keep track of new trends and new innovations, even in this safety and protection area. It is the only way, so I have to admit this constraint. But think about this. There is an anecdotal story in East Asia, in China and in Korea, about merchants selling two products. One vendor is selling a spear, and he keeps saying that this spear is so strong that it can get through any kind of shield. The other vendor is selling a shield, and he says that this shield is one of the safest and strongest shields; no spear can get through it. This is exactly an ironical situation. If you think about AI, the AI attack is the spear: AI is so strong, smart, and capable that it can get through and hack any system with high-end intelligence and knowledge. But the good news is that, on the other hand, we can also build strong protective systems by fully utilizing AI. So this is the good news, but the constraint is that we do not clearly know how AI will really evolve, or how to fully protect against those big attacks in the future. So in order to solve this type of ironical situation, from the developing world’s point of view and from the World Bank’s point of view, the only way is to work very closely, collaborate, and learn from the advanced technology, advanced companies, and advanced countries.

Eileen Donahoe

Thank you so much. Last but not least, Mr. Jaan Tallinn. You occupy a very rare position in this landscape as a founding engineer of Skype, an early investor in DeepMind and Anthropic, and the co-founder of the Future of Life Institute, which last October released a statement on superintelligence calling for a prohibition on superintelligence development until two conditions are met: number one, broad scientific consensus that it can be done safely and controllably, and second, strong public buy-in. Let’s just ask the hard question. What would an effective prohibition look like in practice? How could that work?

Jaan Tallinn

Thank you very much. So I think I’m a little bit different from the other people on this panel, in that my main worries about the future are less about how AI is being deployed, diffused, and taken into practice. I’m way more worried about what is happening in the labs, in the top AI companies. I’m not sure what the future is going to look like, because they are now in a cutthroat race to build something that is smarter than they are. They are in a cutthroat race to build superintelligence. And, I mean, we just saw yesterday the photo where Narendra Modi, Dario Amodei, and Sam Altman refused to link hands.

I mean, this is indicative. We also saw both Dario and Demis Hassabis call for a slowdown in Davos last month. They just can’t do it alone. And I think there are two reasons why it’s an unfortunate situation. One is that the U.S. as a country is conflicted: it basically relies on AI for its economic and competitive power, so it is very hesitant to meddle with the now-cutthroat situation among AI companies. And the rest of the world really doesn’t understand how big a danger we are in now. That’s part of the reason why we did the superintelligence statement: to create awareness that there is increasing political demand to do something about this situation.

We now have more than 130,000 signatures, which is many times more than our original six-month pause letter had in 2023. So yeah, if there were enough pressure, I think clearly the rest of the world is still more powerful than the leading AI countries: there are more people, there’s more economic power, et cetera. So if there were enough pressure, this could be solved. The way I put it is that it’s super hard to do a $10 billion project, and it’s impossible to do it if it’s illegal. So having these trillions flow into AI actually makes it easier to govern, not harder.

Eileen Donahoe

So I’m tempted to follow up with a question about investors and their potential role in this. They are obviously playing a decisive role in shaping the incentives, but they’re largely absent from the governance conversation. So what would it take to bring investors meaningfully into the safety conversation?

Jaan Tallinn

So, yeah, I think the answer is kind of simple. I don’t think investors play much of a role anymore, because the leading AI companies are now above the level where private investors can influence them. They will IPO soon, and in an IPO market there is a level playing field, which means that if somebody doesn’t fund them, somebody else will. So investors could have affected things, but that was five or ten years ago.

Eileen Donahoe

Great. Okay, so since we’re running short on time, I’m going to ask one question, about the 12-month window, and ask you all to answer it very briefly. Many in the AI safety community believe we have a narrow window, perhaps 12 to 24 months, before frontier AI capabilities advance beyond our ability to evaluate and govern them. So what would each of you recommend be prioritized in the next year or two to enhance safety and security?

Josephine Teo

I think there are two, really. I think the AI safety research priorities need to be refreshed, because the field has moved so quickly. The Singapore Consensus identified a set, but as soon as they were published, we recognized that they would be out of date. So we need to refresh it; that’s why we’re going to have the second edition worked on, hopefully in a few months. The second thing, I think, is that we can’t just keep thinking about frameworks and guidelines. At some point, we need to be able to introduce better testing tools. And until we are able to do so, the companies that are developing and deploying AI models don’t have a very practical way of giving assurance.

So I’d like to see some further advancements in those two areas in the next 12 months.

Mathias Cormann

I’ll be really quick. I know there’s always a temptation in these sorts of conversations to ask what the one thing is that can fix it all, and the truth is there’s not one thing. We’ve got to go as fast as we can, to play catch-up to a degree, but we’ve also got to go as comprehensive and as deep as we can. There’s just no alternative. There’s catch-up to be played, we’ve got to put in a real effort, and it’s got to be right across the board. I don’t think you can just say there’s one thing that will make us all safe and it’s going to be okay.

Eileen Donahoe

Minister Gobind?

Gobind Singh Deo

I think, as I said earlier, we need to start thinking about how we can build structures and perhaps institutionalize this entire conversation about building security around AI and its governance. In this regard, we have to understand that things are going to move very quickly, and you’re going to see new technology develop very fast, which brings new risks as well. So you’ve got to build something that’s sustainable, and I think in order to do that, institutionalizing it should be a priority.

Sangbu Kim

Everyone is really rushing for AI system development, AI solution development. That means AI safety measures are currently under-invested. So I really would like to urge all of us to think about this: it is not free. We need to spend some money to protect the system in advance, from scratch, when we design the system. That means we should allocate some money to fully invest in safety.

Eileen Donahoe

Jaan Tallinn?

Jaan Tallinn

So, slow down. We really need to slow down; the companies are asking for it. And instrumental to that would basically be transparency: more people should know what the leaders of AI companies know, in order to understand how crucial the slowdown is now.

Eileen Donahoe

Okay, great. Well, I believe we have a bit of a close coming, and thank you all so much. I wish we had had a day to talk about all of these issues. But thank you so much. Thank you very much.

Nicolas Miailhe

Thank you very much, Eileen, and this fantastic panel: excellencies, colleagues, friends. What we’ve heard today confirms something important. The coordination gap in frontier AI safety is real, and it is urgent. And, as we’ve discussed today, it is closable. Before I hand over the floor to Osama Manzar for a few minutes of closing remarks and reflection, I’d like to invite you all to the next edition at the United Nations General Assembly in New York, where we hope to organize the fourth edition of AI Safety Connect, hopefully with many of the great policymakers and leaders we have heard from today, to carry forward this collective effort. Osama, the floor is yours.

Osama Manzar

Well, thank you very much. We are one of the absentee co-organizers of this one, you know, being a local. Apart from thanking each one of you who didn't get up and go out of the room, and everyone who gave all the safety remarks before the usage of AI, on behalf of the 40 million people we have reached in the last 23 years, and the billions of other people we are going to work for, I want to suggest that the entire safety aspect of AI should be framed more as: please save people from AI. Right? Because that's what safety is, like a car on the road.

You know, we have to save people before we teach people how to think. So we also have to keep one very, very strong question in view: how do we save human intelligence from artificial intelligence? And how do we build in the safety guards, the ethics, and all the, you know, policy playbooks? Thank you very much. Thank you. Bye.

Nicolas Miailhe

AI safety lagging behind rapid development; need AI Safety Connect

Explanation

Nicolas warns that AI safety is not keeping pace with the speed of AI advances, creating an urgent coordination gap. He proposes AI Safety Connect as a dedicated global convening to shape frontier AI safety and risk management.


Evidence

“And safety is not keeping pace with it” [13]. “The coordination gap frontier in AI safety is real, and it is urgent” [3]. “AI Safety Connect is there to help shape the frontier AI safety and secure agenda towards what I would frame as commonsensical AI risk management” [1]. “And so that’s the purpose, that’s the reason AI Safety Connect was founded” [14].


Major discussion point

Global Coordination and Governance of AI Safety


Topics

Artificial intelligence | The enabling environment for digital development


Accelerate safety discussions with semi‑annual global convenings

Explanation

He stresses the need for a faster tempo of safety dialogues, suggesting a global convening every six months to keep pace with AI progress.


Evidence

“We need a faster tempo for these safety discussions, so every six months we have this global convening” [97].


Major discussion point

Global Coordination and Governance of AI Safety


Topics

Artificial intelligence | The enabling environment for digital development


Stuart Russell

Global coordination essential to address cross‑border AI harms

Explanation

Stuart argues that AI‑related harms cross national borders, making global coordination a prerequisite for both technical solutions and governance mechanisms.


Evidence

“And I think it’s one on which global coordination is essential because the harms, whether it’s psychological damage to the next generation or loss of human control altogether, those harms cross borders” [16]. “But also a governance challenge” [90].


Major discussion point

Global Coordination and Governance of AI Safety


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Eileen Donahoe

Fragmented AI governance hampers safety incentives

Explanation

Eileen highlights that current risk‑management processes are fragmented, ill‑adapted, and insufficiently binding, resulting in an unharmonized governance landscape that fails to shape incentives.


Evidence

“The technology is advancing rapidly and being deployed with minimal guardrails, while the risk management processes that do exist are either ill-adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators” [30]. “The result is an unharmonized governance landscape that fails to shape the behavioral incentives” [23].


Major discussion point

Global Coordination and Governance of AI Safety


Topics

Artificial intelligence | The enabling environment for digital development


Middle powers can leverage pooled resources to shape AI safety

Explanation

She argues that middle‑power and global‑majority states can use pooled resources, market leverage, normative influence, and regulatory innovation to steer global AI practices toward safety.


Evidence

“Through pooled resources, market leverage, normative influence, and regulatory innovation, they can shape the direction of global AI practices and safeties” [27].


Major discussion point

Role of Middle Powers and Regional Initiatives


Topics

Artificial intelligence | Capacity development


Urgent need for deeper international diplomacy on extreme AI risks

Explanation

Eileen calls for intensified diplomatic engagement to address the most extreme AI risks, noting that current discourse focuses too much on superpowers.


Evidence

“While much of the discourse on frontier AI safety has focused on AI superpowers, there’s an urgent need for deeper international diplomacy on the most… extreme risks” [6].


Major discussion point

Global Coordination and Governance of AI Safety


Topics

Artificial intelligence | The enabling environment for digital development


Mathias Cormann

Trust built through inclusive, evidence‑based processes

Explanation

Mathias stresses that public trust in AI systems emerges from inclusive dialogue and objective evidence, which are essential for trustworthy AI adoption.


Evidence

“In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is built through inclusion and on the basis of objective evidence” [34].


Major discussion point

Global Coordination and Governance of AI Safety


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Coordinated transparency and incident reporting as foundation for global response

Explanation

He outlines that coordinated transparency and incident reporting, exemplified by the Hiroshima AI Process Code of Conduct and the GPAI Common Framework, can evolve into an international AI Incident Response Center.


Evidence

“transparency and incident reporting” [32]. “The GPAI Common Framework for Incident Reporting aims to help us collectively learn from mistakes before they scale globally, and over time, this could evolve into an international AI Incident Response Center, coordinating alerts between governments and labs without exposing companies to commercial or legal penalties for reporting in good faith” [33]. “I mean, the Hiroshima AI Process Code of Conduct and its reporting framework launched at the AI Action Summit in Paris last year” [70].


Major discussion point

Building Practical Safety Infrastructure


Topics

Artificial intelligence | Data governance


OECD model shows middle‑power coordination yields globally recognised AI principles

Explanation

Mathias points to the OECD’s long‑standing multi‑stakeholder work that produced principles adopted by 50 countries, demonstrating how middle‑power collaboration can create a trusted AI baseline.


Evidence

“the OECD principles which were first adopted in 2019 and which are now adhered to by 50 countries around the world and that was really the first globally recognized baseline for trustworthy AI” [39]. “We’ve got the OECD AI Policy Observatory, which is sort of essentially the broad gamut of all of the different policy approaches around the world to provide countries and industries with data and evidence on what’s being done, facilitating peer learning, and trying to take some of the politics and the rhetoric out of it, but really looking at the facts” [66].


Major discussion point

Role of Middle Powers and Regional Initiatives


Topics

Artificial intelligence | The enabling environment for digital development


Comprehensive, cross‑sector catch‑up needed; no single silver bullet

Explanation

He warns against looking for a single fix, urging fast, comprehensive action across technical, regulatory, and institutional dimensions.


Evidence

“there’s not one thing we’ve got to go as fast … we have to go as comprehensive and as deep as we can … there’s just no alternative, there’s catch up to be played, we’ve got to put a real effort and it’s got to be right across the board” [87].


Major discussion point

Immediate Priorities and 12‑24‑Month Action Window


Topics

Artificial intelligence | The enabling environment for digital development


Josephine Teo

Refresh AI safety research priorities and develop testing tools

Explanation

Josephine calls for updating AI safety research agendas and accelerating the creation of practical testing tools to keep up with rapid AI advances.


Evidence

“I think the AI safety research priorities need to be refreshed because the field has moved so quickly” [5]. “So we need to refresh it” [86].


Major discussion point

Immediate Priorities and 12‑24‑Month Action Window


Topics

Artificial intelligence | Capacity development


Translate scientific knowledge into policy via standards and interoperable testing

Explanation

She emphasizes the need to invest in understanding testing, develop interoperable standards, and bridge science‑policy gaps through international collaboration.


Evidence

“So we therefore think that not only is there a need to invest in understanding the science, not only is there a need in understanding what testing looks like, what good testing looks like, there is also a need for us to think about what standards that will eventually be interoperable, what do they look like” [36]. “One of the most important things I think as policymakers is for us to think about what it takes to translate what we know from science into policy” [53]. “You need to invest in understanding the tests” [54]. “At some point, we need to be able to introduce better testing tools” [55].


Major discussion point

Building Practical Safety Infrastructure


Topics

Artificial intelligence | The enabling environment for digital development


Gobind Singh Deo

Institutionalize AI safety governance for sustainable oversight

Explanation

Gobind argues that building and institutionalising structures for AI security and governance is essential to sustain oversight as technology evolves rapidly.


Evidence

“I think as I said earlier, we need to start thinking how we can build structures and perhaps institutionalize this entire conversation about building security around AI and its governance” [37].


Major discussion point

Immediate Priorities and 12‑24‑Month Action Window


Topics

Artificial intelligence | The enabling environment for digital development


ASEAN dual‑track: national capacity first, then regional coordination

Explanation

He describes a model where each ASEAN member strengthens domestic capacity before moving to a coordinated regional mechanism, supported by enforcement agencies and sustained political will.


Evidence

“So again, how do you build this mechanism across ASEAN where every country strengthens themselves domestically first and then moves across to the ASEAN member states…” [42]. “Now the third part which is really important is also ensuring that whilst this goes on, you create those policies, you have institutions that enforce and the discussions persist at an ASEAN level” [64].


Major discussion point

Role of Middle Powers and Regional Initiatives


Topics

Artificial intelligence | Capacity development


Sangbu Kim

Allocate funding early to embed safety architecture in AI system design

Explanation

Sangbu stresses that AI safety measures are under‑invested and that money must be allocated at the design stage to embed safety architecture from the outset.


Evidence

“Everyone is really rushing for AI system development, AI solution development; that means AI safety measures are currently under-invested, so I really would like to urge all of us to think about this: it is not free, you know; we need to spend some money to protect the system in advance, from scratch, when you design the system; so that means we should allocate some money to fully invest in…” [7]. “When they design the AI systems, definitely they need to design the safety architecture within the system” [11].


Major discussion point

Immediate Priorities and 12‑24‑Month Action Window


Topics

Artificial intelligence | Financial mechanisms


World Bank partnership can help low‑capacity countries embed safety from design onward

Explanation

He proposes close collaboration with the World Bank, advanced economies, and tech firms to enable developing nations to embed safety measures early and keep pace with new AI threats.


Evidence

“So in order to solve this type of ironical situation from the developing world point of view and from the World Bank point of view, this is the only way to very closely work and collaborate and learn from the advanced technology and advanced company and advanced country” [78]. “So in this way, it is inevitable for our developing countries to keep track on the new trend and new innovation, even in this safety protection area” [81].


Major discussion point

Building Practical Safety Infrastructure


Topics

Artificial intelligence | Financial mechanisms | Capacity development


Jaan Tallinn

Implement slowdown and greater transparency to buy time for safety

Explanation

Jaan calls for a coordinated slowdown of AI development, coupled with increased transparency about what AI leaders know, to create the necessary window for safety work.


Evidence

“So slow down, we really need to slow down; the companies are asking for it. Instrumental to that would be, basically, transparency: more people should know what the leaders of AI companies know, in order to basically understand how crucial the slowdown now is” [29].


Major discussion point

Immediate Priorities and 12‑24‑Month Action Window


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Effective prohibition requires broad scientific consensus and strong public buy‑in

Explanation

He argues that a prohibition on developing superintelligent AI should be lifted only once two conditions are met: broad scientific consensus that it can be done safely and controllably, and strong public buy-in.


Evidence

“Number one, broad scientific consensus that it can be done safely and controllably, and second, strong public buy-in” [44].


Major discussion point

Global Coordination and Governance of AI Safety


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Osama Manzar

AI safety must protect human intelligence with strong ethical safeguards

Explanation

Osama emphasizes that AI safety should be oriented toward preserving human intelligence, embedding ethical guards and policy playbooks from the beginning.


Evidence

“I want to suggest that the entire safety aspect of AI should be more from please save people from AI” [15]. “How do we save human intelligence from artificial intelligence?” [45]. “And how do we build in the safety guards and all the ethics and all the, you know, policy playbooks?” [72].


Major discussion point

Building Practical Safety Infrastructure


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Fireside Chat. Moderator: Mariano-Florentino Cuéllar

Session transcript

Announcer

Now we move to a conversation about how artificial intelligence needs to be positioned in the global context. And we have very elite panelists for this session. Ms. Kristalina Georgieva, the Managing Director of the International Monetary Fund. From macroeconomic stability to digital transformation, she’s been a leading voice on how AI will reshape the global economic order and what policymakers must do to ensure that its benefits are widely shared. Ms. Johanna Hill, the Deputy Director General of the World Trade Organization, bringing the trade perspective to a technology that is redrawing the boundaries of comparative advantage. Ms. Josephine Teo, the Minister of Digital Development and Information for Singapore, a nation that has become a global benchmark for how governments can integrate AI into public services.

And this conversation will be held in a few minutes. It will be moderated by Mr. Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace. So we have a very elite set of panelists who are going to join us on this panel discussion, which is titled AI Needs to Be Positioned in the Global Context. May I please invite our panelists to join us on stage? So over to you, Mr. Cuéllar.

Mariano-Florentino Cuéllar:

Thank you very much and good afternoon, everybody. How are we doing AI summits? Let me try that again. Hello, Delhi. Thank you. Much better. It is not every day that we have the pleasure of having such a distinguished panel of international leaders. And I want to start by making three observations only as special observations for those of you who have chosen to be with us this afternoon. You could be anywhere in this complex, anywhere in the city, and you’re right here with us. The first is about the role of technology and science and global ties in making the world better. For those of you who are younger than me, which is most of you in the audience, you will live longer than my generation because of global ties, commerce, science and technology.

In 1950, when India was a young nation, global life expectancy was 47 years. Now it’s closer to 73 years. But at the same time, the second point is that the world that we are navigating today is fragmented. That set of global ties, diffusing science and technology, advancing global understanding and cooperation is a lot harder now than it was even five or 10 years ago. And everybody who’s been on this stage has been alluding to that in some way, that reality. The third point is that the use and development of AI will have an effect on those ties and on that prosperity in all likelihood. But there are divergences, different paths around AI. Some countries are using it more, some less.

Some countries play a certain role, some a very developed role in the tech stack, and others less. To talk about these issues, I cannot imagine a better panel. It's not every day, as I said, that we have the Managing Director of the IMF, the Deputy Director General of the World Trade Organization, and the Minister for Digital Development and Information from Singapore. So I'm going to start with a question for Managing Director Georgieva. And the question is: with all this discussion about artificial intelligence at the frontier, what do you see as the greatest possibilities and the greatest risks?

Kristalina Georgieva

Thank you very much. Namaste. Namaste. AI is an incredibly transformative technology, as we know. And the question is: what does it do for the world economy? We did some research, and here is the answer. Based on what we know, AI can lift global growth by almost a percentage point; we say 0.8%. What does that mean? It would mean that the world would grow faster than it did before the COVID pandemic. And that is fantastic for creating more opportunities, more jobs. This is the magnitude that we see for India, and it would mean that India's Viksit Bharat is achievable. It also means that the world risks becoming even further divided. The accordion of opportunities may open even wider, from countries that do well to those that fall behind.

Actually, what we see is the potential for countries that go fast on digital infrastructure, on skills, on adoption of AI to do twice as well as those that don't. So what is our main reason to be here at the AI Summit in Delhi? To embrace India's proposition of democratizing AI, making sure that experience in India can then be passed to other countries, especially countries in the developing world; to make diffusion and adoption of AI the main priority; and to do it with a focus on people, on improving the opportunities and the livelihoods of people. I am very optimistic about AI. I'm also not naive. It brings significant risks. First, it brings the risk of making countries and the world less fair.

Some have it and others don't. Second, it brings the risk of displacement of jobs with no good thinking about how to help people find their place in the new AI economy. We calculated this risk as very high. We actually see the impact of AI on the labor market like a tsunami hitting it globally: 40% of jobs will be affected by AI, some enhanced, others eliminated. In emerging markets, 40%, but in advanced economies, 60%. And that is happening over a relatively short period of time. And the third risk we at the IMF worry a lot about is financial stability risk. Could AI get loose and create havoc on financial markets? But on balance, my appeal to all of us is: embrace the opportunities, be mindful of the risks, and manage them well.

And above all, make sure that the spirit here is that AI is for the well -being of everybody, everywhere. Thank you.

Mariano-Florentino Cuéllar:

I'm going to come right back to these questions in a minute, but first I want to bring the Deputy Director General of the World Trade Organization into the conversation. I want to ask you, picking up exactly where Managing Director Georgieva was going: the interest in democratizing the technology, having more countries be closer to the frontier. For more than a generation, as you know, we have been having arguments about trade globally, and about whether trade helps reduce the gap in well-being between countries or actually pulls them apart even more. Given all that experience, I wonder what role you think the international trading system has in dealing with potential inequities in access to AI and the development of AI.

Johanna Hill

Thank you so much for the invitation to be here. We definitely see that trade can help the diffusion of AI to those that most need it. And we also think that AI can help trade, and can help lower-income and middle-income economies really progress through trade. Now, we do see that AI is really shifting what we think of as comparative advantage toward those economies that are stronger in capital, data, and computing power, and therefore the countries that are more labor-intensive feel more at risk. At the same time, we also see important opportunities for these same countries. Of course, with all the caveats that we've been speaking about: the importance of investing in skills, in regulations, and in digital infrastructure is incredibly important.

Our research suggests that by the year 2040, trade could be growing by almost 40%. So we see really important opportunities for the middle- and lower-income economies. And trade is already working well in that way: our trade agreements, the world trading system, are set up so that goods trade and services trade can develop with AI. But there are some areas that are still too new and too nuanced, and we still have to wait and see how they will develop and how the system has to accommodate them.

Mariano-Florentino Cuéllar:

Minister Teo, as that system evolves, and as we deal with this emerging, not even emerging anymore, emerged technology, we talk about how much it's going to affect countries large and small. You are playing a critical role, and I know you're playing a critical role because I see you at every single AI summit in the world. It's amazing. But how are countries like Singapore in a position to navigate this tsunami, these changes? And in particular, what do you think we could learn from Singapore's strategy, as I see it, of being at the forefront on AI governance, the Model AI Governance Framework, for example, but also navigating a world that some people see as balkanized between China and the United States around the technology stack?

Josephine Teo

Thank you very much, Tino. That's a lot of questions packed into one; I'll do my best to address them. I think embedded in what you're saying is that there is the risk of technology decoupling. And what does a small state do in this kind of context? How do we navigate the big-power contestation? The way we think about it is that for Singapore, it's very important for us to maintain this ability to operate as a trusted node. Trusted node means that, well, we can trust you with our technology; so your companies, your people can continue to access whatever is the most sophisticated, because it will not be abused, and the risk of it being misused is also minimized.

The question, however, is how do we remain trusted? And I think the only way to do so is if we act in a consistent and principled way. And being consistent and principled is not a matter of size; Singapore is not the only small state that has a good track record of holding this discipline. We are consistent in being pro-Singapore. And sometimes our choices may align with this country or that country. Sometimes they will align with many countries; sometimes they align with only a few. But they always align with our own interests. In technology choice, for example, 5G, we are always operating on the basis of principles. Number one, that these are commercial decisions that have to be undertaken by the operators of the mobile networks.

And they have to decide on the basis of what works for them in terms of performance, in terms of security, in terms of resilience, keeping in mind all the rules that are in place in our context. So those are the broad directions in which we operate. And it's not easy, but it's a path that has served us well.

Mariano-Florentino Cuéllar:

And I note that among the many things that Singapore, I think, has contributed to the discussion of AI globally, in addition to being a trusted node and connecting different countries, there's also the role Singapore and the region of Southeast Asia play in all this, because Southeast Asia is a region of such diversity and importance globally. And I want to come back in a minute to the question of how we might imagine Southeast Asia evolving as almost a laboratory for some of the issues we're talking about. But first, I want to go back to Kristalina, if I may, and ask you about, well, it was clear in your earlier remarks that you see enormous possibilities for AI.

But you also acknowledge candidly something that maybe not every speaker has acknowledged, which is that along with that opportunity will probably come some disruption: some real policy difficulties in countries that are experiencing rapid change. The question then is how we might develop the right strategy so that the productivity gains the world can experience would actually translate into shared prosperity. What do you think we can do on that score?

Kristalina Georgieva

The first thing we ought to do is to carefully observe what is actually happening, and then project what the implications are for policymakers. At the Fund, we did a very interesting piece of research in the United States assessing how much AI is already affecting the labor market. And we found out that one in ten jobs already requires additional skills. And for those who have these skills, the job pays better. Now, with money in their pocket, people then go and buy more local services: they go to restaurants, to entertainment. That creates demand for low-skilled jobs. And to our surprise, the total impact on employment in the aggregate is positive: one job with AI becomes 1.3 jobs.

That is 1.3 jobs in total employment. But what does that mean? It means that a smaller segment of people get higher opportunities. A larger segment, yes, they can have jobs, but jobs that are on the lower end of the pay scale. And the most problematic is the fate of those squeezed in the middle: their jobs don't change, in relative terms they pay less, and some of these jobs disappear. What concerns us the most is that jobs that disappear tend to be entry-level jobs. They are routine, and they are easily automated. So if you are in the part of the labor market that is easily automated, of course that creates a risk.

Obviously, we will continue to work with countries to understand what is happening, and then project it into policies for the future. I would make three conclusions so far, and of course we have to be agile in how we look at AI. The first one: education has to be revamped for a new world. People have to learn how to learn, not so much to learn specific skills. Second, there has to be support for those affected; if they are a big chunk of a particular local economy and this labor market is changing dramatically, there has to be social protection, social support, so they don't feel like the industrial workers in the United States did when their jobs were exported overseas. And third, it is very important that we look at the overall enabling environment.

Why does AI move faster in some places and not in others? And what we find is not very surprising: some parts of the economy, some parts of society, are naturally better positioned because they have digital infrastructure in place. They are already in the digital world, because there is more demand for entrepreneurship (somebody spoke about it), and entrepreneurship is more dominant. And I think it is important for the world to be very attentive to what works and what doesn't, and not to sugarcoat the picture, because if we do, we will end up where we ended up with globalization: people revolting against it despite all the benefits it brings, because, yes, the world as a whole benefited, but some communities were devastated.

And the world did not pay attention to those communities in a timely manner. So that is my conclusion so far. And I am very mindful that we are going to learn much more. At the Fund, we are trying to see how each country is positioned. Some countries actually have more demand for AI skills than supply; some have more supply of AI skills than demand; and some have neither. So we have to work on multiple fronts, and we have to work based on concrete assessment of conditions in countries and in localities within countries. I want to finish with a message to the Indian friends here in the audience. You're very fortunate that your country invested in public digital infrastructure.

So this country… condition for AI? Check. You are very fortunate because your country is actively removing barriers to entrepreneurship. And on that count, we say check. And you are super fortunate to have a youthful, energetic, innovative population that is embracing AI. So what do we say? Check. So, all the very best.

Mariano-Florentino Cuéllar:

This is terrific. Perfect. Minister Teo.

Josephine Teo

I could not agree with the Managing Director more, if I may be allowed to chime in. I think sometimes there is a desire, a tendency, to want to think of ways of regulating AI in order to slow down its advance and perhaps to try and forestall the risks. I’m not underestimating that need.

But to over-expect AI regulations to deliver on the other important issues, such as the potential for greater social inequality, I think is unrealistic. The way to deal with it is to look at what other methods there must be to strengthen social solidarity.

For example, what provisions do we put in place to help people to move from one job to the next? What provisions do we put in place to ensure that even people who don’t earn a lot have the prospect of owning their own homes, access to good health care, educating their children to a very high level? I think these are the other things, and you cannot run away from those conversations just by expecting regulations to solve the problem.

Mariano-Florentino Cuéllar:

So what I’m hearing you both say, in a way, is that it would be a very silly thing if we tried to solve health care problems just by regulating pharmaceuticals. That would be a very poor fit, right? At the same time, you recognize that, you know, for certain products that are sold, it’s good for them to be safe. And in fact, safety, trust, and security can make them even easier to diffuse. But I think a very important takeaway from both of you is that the entire spectrum of tools a society has to build social cohesion is going to be important in the transition to a more AI-driven economy. We shouldn’t ignore those tools, but we also shouldn’t put the focus solely on what we can do by making models built in a certain way.

And I’d love for you to chime in, because trade has already come up a bunch of times, even just in the last 47 seconds. Actually, yes.

Johanna Hill

We put out a report last year that looks at this issue in exactly that way. We look at the opportunities I talked about for AI in the future, not only for the advanced countries, but for developing and lower-income ones. But we also look at the national policies needed for that to actually happen and to help the transition. And so we look at issues around competition policy, around the labor force, around skills development, around education. And to do that, the world trading system cannot do it alone. We need to partner at our level with international organizations, and at the national level with the appropriate authorities and the private sector, in order to have that holistic approach.

I would say lesson learned from past experiences, and we definitely want to apply those lessons to this new one.

Mariano-Florentino Cuéllar:

So we have about four minutes left, and I have a last question for you all. Imagine yourselves 15 years in the future, looking back at today. At that point, you’re being interviewed on this same stage here in India, and you’re saying it has been a very good thing to see how well the world has handled its relationship with this emerging technology of AI, and it turned out very well because blank. I want you to mention the one thing that you think would have been so critical to making that transition well. You’ve all mentioned a bunch of things, but I’m interested in the main, most important takeaway that you’d like to leave the audience with.

For me, that one word is trust.

Josephine Teo

In 15 years, if we went and asked citizens in all the countries where AI is being deployed widely, do you trust this technology? If their answer is no, then I believe we must have failed in some way. If they believe that this technology has been implemented in a way that didn’t rob them of a livelihood, that didn’t leave them, you know, totally misinformed about the world, that didn’t rob them of the ability to carry out their lives in a safe and secure manner, that didn’t destroy families, and if they can still say that this is a technology that can work reasonably well when you put the safeguards in place, then I think we would have come a long way.

Mariano-Florentino Cuéllar:

Deputy Director?

Johanna Hill

An appreciation for what the world trading system can deliver and is delivering. You know, when I think about it, last year the WTO turned 30. And down the road at CERN, the World Wide Web was created by scientists who wanted to collaborate. And that architecture, which is technology neutral, allowed those developments of the digital economy to come through. How much of that architecture can serve us for this new wave? And then concentrate on those areas that still need to be worked on through collaboration and cooperation, and focus on those. You know, trading with trust, trading with safety, and then appreciating and using what we already have to deliver.

Mariano-Florentino Cuéllar:

Managing Director?

Kristalina Georgieva

Well, in 15 years, if my life expectancy has grown by another 50 years, I would say: great, we are successful. But on a serious note, I think, to me, the most important factor, and it goes a bit into the trust area, is the ethical foundation of AI: whether we manage to put AI on a foundation as a force for good, or leave space for AI to be a force for evil. And that balance is not an easy one. When I look at progress so far, we have done much more on the technical side of AI, and much less on building that strong ethical foundation, and putting in guardrails that are not restricting innovation, but are protecting us from AI for bad.

I still want my 50 years extra life.

Mariano-Florentino Cuéllar:

One closing observation, just to reinforce my appreciation for the three of you and the work you do. In the weeks immediately after the release of ChatGPT, which seems like 20 years ago but was not that long ago, there was talk about the need for an international atomic energy agency for AI, or a new international agency or treaty. We don’t talk about that anymore. And I think in some ways it’s an appropriate and mature recognition that we already have a set of institutions and mechanisms in place to deal with a set of emerging challenges. I think it’s also a recognition that many individual countries have to do their part to create social cohesion and manage this change and this transformation effectively.

But I would ask that this audience recognize that all three of our remarkable leaders here on the stage also reflect another reality: even if sovereignty is important, and even if individual countries have to have their own priorities, the challenge of how we best live with the technology we have created is truly a global one, not an individual-country one. And the conversation we’re having today is an example of how we can learn from each other and find the right solutions. Thank you and namaste.

Announcer

Global Contextualization of AI

Explanation

The discussion opens by emphasizing that artificial intelligence must be framed within a global perspective so that its benefits are shared worldwide. Positioning AI globally is presented as a prerequisite for inclusive outcomes.


Evidence

“Now we move to a conversation about how artificial intelligence needs to be positioned in the global context.” [1]. “So we have a very elite… set of panelists… titled AI Needs to be Positioned in the Global Context.” [5].


Major discussion point

Global Contextualization of AI


Topics

Artificial intelligence


Kristalina Georgieva

AI adds ~0.8% to global GDP, fast adopters can double gains

Explanation

Georgieva quantifies AI’s contribution to economic growth, noting a 0.8 % lift to global GDP and highlighting that countries that adopt AI quickly could see productivity gains twice as large as slower adopters.


Evidence

“a percentage point, we say 0.8%” [17]. “Potential for countries that go fast on digital infrastructure, on skills, on adoption of AI, that they can do twice as well as those that don’t.” [16].


Major discussion point

Economic Growth Potential and Risks


Topics

Artificial intelligence | The digital economy


AI threatens fairness, displaces jobs, and may destabilize financial markets

Explanation

Georgieva warns that AI can undermine global fairness, affect a large share of employment, and pose systemic risks to financial stability if left unchecked.


Evidence

“First, it brings the risk of making countries and the world less fair.” [28]. “40% of jobs will be affected by AI, some enhanced, others eliminated.” [18]. “Could AI get loose and create havoc on financial markets?” [21].


Major discussion point

Economic Growth Potential and Risks


Topics

Artificial intelligence | Social and economic development


Education must be revamped to teach learning‑how‑to‑learn

Explanation

Georgieva argues that education systems need to shift from teaching fixed skills to fostering the ability to continuously learn, preparing people for the rapidly changing AI‑driven labour market.


Evidence

“education has to be revamped for a new world; people have to learn to learn, not to learn specific skills” [68].


Major discussion point

Policy Toolkit for AI‑Driven Disruption


Topics

Capacity development | Artificial intelligence


Social protection needed for workers displaced by AI

Explanation

She stresses that alongside skill development, robust social safety nets are essential to support workers whose jobs are transformed or eliminated by AI technologies.


Evidence

“there has to be support for those … there has to be social protection social support” [68].


Major discussion point

Policy Toolkit for AI‑Driven Disruption


Topics

Social and economic development | Capacity development


Building an ethical foundation and guardrails for AI

Explanation

Georgieva highlights the need for a strong ethical base and safeguards that protect society without stifling innovation, ensuring AI remains a force for good.


Evidence

“the most important… factor, it goes a bit in the trust area, is the ethical foundation of AI.” [91]. “we have done much more on the technical side of AI, and much less on building that strong ethical foundation, and putting guardrails that are not restricting innovation, but are protecting us from AI for bad.” [94].


Major discussion point

Trust and Ethical Foundations as Core to AI Success


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Johanna Hill

Trade can diffuse AI to low‑ and middle‑income economies

Explanation

Hill points out that the existing trade system can be leveraged to spread AI technologies to countries that need them most, thereby supporting growth in lower‑income regions.


Evidence

“To be here, definitely we see that trade can help the diffusion of AI to those that most need it.” [13]. “And we also think that AI can help trade and can help lower income and middle income economies really progress through trade.” [20].


Major discussion point

Trade as Vehicle for AI Diffusion and Equity


Topics

The digital economy


AI reshapes comparative advantage toward data, capital and computing power

Explanation

She observes that AI changes the basis of comparative advantage, favouring economies with strong data assets, capital, and computing infrastructure, which calls for targeted skill and infrastructure investment.


Evidence

“Now, we do see that AI is really shifting what we think of as comparative advantage to those economies that are more strong in capital, data, and in computing power.” [25].


Major discussion point

Trade as Vehicle for AI Diffusion and Equity


Topics

The digital economy


Coordination of competition, labour and skills policies across WTO, governments and private sector

Explanation

Hill stresses the need for a holistic policy approach that brings together competition, labour and skills policies at the WTO level, national governments and the private sector to manage AI‑driven disruption.


Evidence

“And so we look at issues around competition policy, around labor force.” [77]. “We need to partner at our level with international organizations and at the national level with the appropriate authorities and the private sector in order to have that holistic approach.” [78].


Major discussion point

Policy Toolkit for AI‑Driven Disruption


Topics

The digital economy | The enabling environment for digital development | Artificial intelligence


WTO’s trusted architecture can support AI’s digital economy

Explanation

Hill highlights that the WTO’s technology‑neutral, trusted framework can facilitate the development of a digital economy powered by AI.


Evidence

“And that architecture, which is technology neutral, allowed for those developments of the digital economy to come through.” [57]. “You know, trading with trust, trading with safety, and then appreciating and using what we already have to deliver.” [52].


Major discussion point

Trust and Ethical Foundations as Core to AI Success


Topics

The digital economy | Artificial intelligence


Josephine Teo

Singapore acts as a trusted node with consistent, principled policies

Explanation

Teo explains that Singapore’s role as a trusted node rests on its commitment to consistent, principle‑based decision‑making, which builds confidence among partners.


Evidence

“The way we think about it is that for Singapore, it’s very important for us to maintain this ability to operate as a trusted node.” [43]. “Trusted node means that, well, we can trust you with our technology.” [44]. “And being consistent and principled is not a matter of size.” [45].


Major discussion point

Singapore’s Trusted‑Node Governance Model


Topics

Artificial intelligence | The enabling environment for digital development


Small states can navigate technology decoupling while staying neutral

Explanation

She notes that small states like Singapore can manage the risk of technology decoupling by adhering to disciplined, principle‑driven choices that align with national interests without taking sides.


Evidence

“And Singapore is not the only small state that has a good track record of holding this discipline.” [54]. “I think embedded in what you’re saying is that there is the risk of technology decoupling.” [58]. “And sometimes our choices may align with this country or that country.” [61].


Major discussion point

Singapore’s Trusted‑Node Governance Model


Topics

The enabling environment for digital development | Artificial intelligence


Regulation alone cannot solve inequality; broader social‑solidarity measures needed

Explanation

Teo argues that relying solely on regulation is insufficient to address social inequality; comprehensive policies covering housing, health, and education are required.


Evidence

“I think these are the other things, and you cannot run away from those conversations just by expecting regulations to solve the problem.” [84]. “What provisions do we put in place to ensure that even people who don’t earn a lot have the prospect of owning their own homes, access to good health care, educating their children to a very high level?” [85].


Major discussion point

Policy Toolkit for AI‑Driven Disruption


Topics

Social and economic development | Human rights and the ethical dimensions of the information society


Citizen trust in AI technology is essential for transition

Explanation

She emphasizes that public confidence in AI is a prerequisite for a successful societal transition to AI‑driven systems.


Evidence

“In 15 years, if we went and asked citizens in all the countries where AI is being deployed widely, do you trust this technology?” [90].


Major discussion point

Trust and Ethical Foundations as Core to AI Success


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Mariano-Florentino Cuéllar

Trust is the single most important factor for a successful AI future

Explanation

Cuéllar stresses that trust is the cornerstone of any AI deployment; without it, the technology cannot achieve its full potential.


Evidence

“For me, that one word is trust.” [93].


Major discussion point

Trust and Ethical Foundations as Core to AI Success


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider

Session transcript

Thomas Schneider

So, dear friends and colleagues from India and from all around the world, it is an honor and pleasure to be here with you in Delhi at this pivotal moment for global AI governance. And first, of course, I want to express my gratitude to the government of India for bringing together a diverse and distinguished group of leaders, innovators, researchers, civil society representatives from all around the world. Switzerland very much welcomes and supports the focus of the AI Impact Summit, which is well presented in the three sutras, people, progress, planet, as we all have learned in the past weeks and months. And we fully agree that we need to develop and use AI in a way that everyone in the world can benefit from the potential that AI offers.

This includes economic and social progress for everyone. At the same time, of course, we need to make sure that we develop and use AI in a way that respects human dignity and autonomy, as well as our planet, which is the basis for all life that we know, at least so far. We haven’t found other life elsewhere. So we are honored and very proud to be hosting the next AI Summit in Geneva in 2027. It is overwhelming to see and feel, already now, the momentum and the enthusiasm at the national level among all Swiss stakeholders, as well as the very positive reactions from our partners from all around the world, who are all eager and willing to cooperate with us and contribute to the summit in Geneva.

Already now, we are approached by many governmental and other stakeholders who share their ideas with us about what the Geneva Summit, and the road leading up to it, should focus on and what it should achieve. And let me assure you that this is very welcome and helpful to us. The Swiss motivation for organizing the next summit is not to make a show; it is to substantially and meaningfully contribute to ensuring that mankind uses the unprecedented potential of AI for good and not for bad. This potential of AI, which may be at least as transformative as the invention of the printing press, radio, television and the internet, as well as the invention of the combustion and other engines together, must be used to raise and not lower the quality of life of all people in the world, not just a few.

AI must strengthen and not weaken the dignity and autonomy of all people in the global north, south, east and west or whatever we call the region where we live and help us all to live together in peace and prosperity. So we are very keen to hear your ideas about what we could and should do together to achieve this goal. Of course, we do have some ideas on our own, but we have not decided yet about the focus of the Geneva Summit. We will discuss it with you together, shape it together. Of course, there will be a Swiss flavor to the Geneva Summit, which is based on the way we work and what we understand, our role in the international community.

We will try to be constructive, creative and innovative, and try to find pragmatic and fair solutions by bringing together all stakeholders in their respective roles and with their respective experience. At the same time, we will try not to reinvent the wheel or duplicate processes and instruments that already exist and that work, but rather to build on them, because we already have a number of dialogue platforms for AI governance and for sharing good practices, such as the UN Internet Governance Forum and its national and regional initiatives, the AI for Good Summit, and the Global Forum on the Ethics of AI organized by ITU, UNESCO and many other UN-related processes and forums.

We have other forums like the OECD, GPAI and other international and regional organizations, and of course we will build on the outcomes of the previous summits in the UK, Korea and Paris, and of course here in Delhi; Japan will follow at some point in time. And we should not forget that there are many academic and other networks that provide expertise and solutions. So we will do our best to bring them all together. And with the help of our longstanding partners from the Diplo Foundation and the Geneva Internet Platform, we will also try to facilitate orientation in this complex governance ecosystem, in particular for less-resourced communities, so that they too know better what is going on where, and where we need to raise our voices so that they are actually heard.

At the same time, we consider the transformative power of AI to be too big, too broad and too context-specific for any single institution or single instrument to allow us to seize all opportunities or solve all problems. So we will have to learn to live with a certain complexity in the governance of this transformation. But this is not a completely new situation. If we look at how we have governed the transformative power of combustion and other engines over the past 200 years, there are some lessons that we can also apply to AI. While today we are developing AI to automate cognitive labor, we developed engines to automate physical labor. We have put engines in vehicles or machines to move goods or people from one place to another.

And we have put engines in machines to produce food or other goods automatically. And we do not expect one single institution or instrument to govern all of this. But we have developed a set of thousands of technical, legal, and also non-written societal norms that guide us in the use of these machines. We have regulated also the infrastructure that these machines use. We are setting requirements and liabilities for the people that develop, handle, and steer these machines. And we have developed instruments to protect people that are affected by the impact of these machines. And we are seeing different levels of harmonizations when it comes to regulating machines and engines. As an example, of course, we know that the airline industry is much more harmonized because it’s global than the way we regulate cars.

Cars drive in our streets on one side or the other, where more diversity is possible. So after 200 years, we are still continuing to adapt the governance framework for engine-driven machines, depending on the context of use. And we need to do exactly the same with AI. We need to develop appropriate technical, legal and societal frameworks and norms that allow us to develop and use AI for good in many different ways. And this work has already begun. We have analyzed our existing governance frameworks and have started to identify and fill the gaps. We have started to work on technical norms for AI systems. We have started to work on binding and non-binding legal instruments.

And of course, in this regard, I’d like to particularly highlight the Vilnius Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, for which I had the honor to lead the negotiations among 55 countries from all over the world at the Council of Europe in Strasbourg. It provides a principle-based framework, not just for Europe, but for all countries on our planet that value human rights, democracy and the rule of law, so that our societies and economies can use AI to innovate, while at the same time we uphold our respect for human dignity and autonomy, also in the context of AI.

The principles set out by the Vilnius Convention are simple and clear, but the Convention leaves enough leeway to participating states to allow them to embed these principles in their existing legal and regulatory institutions and traditions. This will allow many countries to become parties to this global convention and to make sure that their governance frameworks, though perhaps not identical, are at least interoperable. This Convention, which we hope will be ratified and enter into force very soon, will become one important instrument to make sure that AI is used for the good and not the bad. But of course, there will have to be many more binding and non-binding norms, and more sector-specific norms and instruments to complement it, which hopefully will be… at least coherent in their logic and spirit.

So we will use the time until the Geneva Summit next year to continue to identify gaps in global and regional governance of AI and achieve our shared objectives so that AI is used for innovation, while at the same time legitimate concerns and risks are appropriately addressed. Switzerland will be the host of the next summit, but we know that we will not be able to achieve anything on our own. So we look forward to collaborating with all of you, with all countries and all stakeholders from the global north, south, east and west, and we will first try to identify areas where there’s a willingness and a shared vision to make progress together and then work with all of you on pragmatic and workable steps towards this vision.

We will only be the facilitators, trying to build bridges and a climate of open, respectful and constructive dialogue, and trying to offer pragmatic structures for trustworthy cooperation, so that we can all use the potential of AI, to say it again, to live together in peace, prosperity, security and dignity. The Swiss Summit team and I personally look forward to collaborating with all of you in the coming months, and we look forward to seeing you all in Geneva in 2027. Thank you for your support and attention.

Thomas Schneider

Vision of Inclusive AI for Humanity

Explanation

Schneider argues that AI must be developed and used in a way that benefits every person worldwide, upholding human dignity, autonomy and protecting the planet. He also stresses that AI’s transformative power is comparable to historic inventions such as the printing press and the engine.


Evidence

“AI must strengthen and not weaken the dignity and autonomy of all people in the global north, south, east and west or whatever we call the region where we live and help us all to live together in peace and prosperity.” [1]. “we need to make sure that we develop and use AI in a way that respects human dignity and autonomy, as well as our planet, which is the basis for all life that we know, at least so far.” [2]. “This potential of AI, which may be at least as transformative as the invention of the printing press, radio, television and the internet, as well as the invention of the combustion and other engines together, this potential must be used to raise and not lower the quality of life of all people in the world and not just a few.” [5].


Major discussion point

Vision of Inclusive AI for Humanity


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | Environmental impacts


Switzerland’s Role and the 2027 Geneva AI Summit

Explanation

Schneider states that Switzerland will host the next AI summit in Geneva in 2027, aiming to make a substantive contribution rather than a mere showcase. The summit will carry a distinct “Swiss flavor” while being co‑created with global stakeholders.


Evidence

“Switzerland will be the host of the next summit, but we know that we will not be able to achieve anything on our own.” [22]. “So we are honored and very proud to be hosting the next AI Summit in Geneva in 2027.” [23]. “The Swiss motivation for organizing the next summit is to, not to make a show, it is to substantially and meaningfully contribute to achieving the goal that mankind and the world want to achieve.” [24]. “Of course, there will be a Swiss flavor to the Geneva Summit, which is based on the way we work and what we understand, our role in the international community.” [26].


Major discussion point

Switzerland’s Role and the 2027 Geneva AI Summit


Topics

Artificial intelligence | The enabling environment for digital development


Multi‑Stakeholder, Multi‑Instrument Governance Approach

Explanation

Schneider emphasizes building on existing multistakeholder platforms rather than creating new ones, and highlights the need to help less‑resourced communities navigate the complex AI governance ecosystem.


Evidence

“creative and innovative and try to find pragmatic and fair solutions through bringing together all stakeholders in their respective roles and with their respective experience and at the same time we will try not to reinvent the wheel and duplicate processes and instruments that already exist and that work but rather we will try to build on them because we do already have a number of dialogue platforms for AI governance and for sharing good practices such as the UN Internet Governance Forum and its national and regional initiatives, the AI for Good Summit and the Global Forum on Ethics of AI organized by ITU, UNESCO and many other UN related processes and forum.” [29]. “And with the help of our longstanding partners from the Diplo Foundation and the Geneva Internet Platform, we will also try to facilitate the orientation in this complex governance ecosystem, in particular for less resourced communities, so that also they know better about what is going on where and where we need to raise our voice so that they are actually heard.” [40].


Major discussion point

Multi‑Stakeholder, Multi‑Instrument Governance Approach


Topics

Artificial intelligence | Internet governance | Capacity development


Learning from Past Technological Governance (Engines Analogy)

Explanation

Schneider draws lessons from the governance of combustion engines, noting that AI’s breadth requires many institutions and sector‑specific rules with varying levels of harmonisation.


Evidence

“If we look at how we have governed the transformative power of combustion and other engines in the past 200 years, there are some lessons that we can also apply to AI.” [15]. “At the same time, we consider the transformative power of AI to be too big, broad and context-specific so that no one single institution and no single instrument will allow us to seize all opportunities and will solve all problems.” [17]. “And we are seeing different levels of harmonizations when it comes to regulating machines and engines.” [20]. “And we do not expect one single institution or instrument to govern all of this.” [48]. “But of course, there will have to be many more binding and non-binding norms and more sector-specific norms and instruments to complement it, which hopefully will be… at least coherent in their logic and spirit.” [49].


Major discussion point

Learning from Past Technological Governance (Engines Analogy)


Topics

Artificial intelligence | The enabling environment for digital development | Internet governance


Vilnius Convention on AI, Human Rights, Democracy and the Rule of Law

Explanation

Schneider presents the Vilnius Convention as a principle‑based, flexible framework applicable globally, serving as a key binding instrument that will be complemented by additional sector‑specific norms.


Evidence

“It provides for a principle-based framework, not just for Europe, but for all countries on our planet that value human rights, democracy and the rule of law.” [11]. “This Convention, which we hope will be ratified and enter into force very soon, will become one important instrument to make sure that AI is used for the good and not the bad.” [14]. “And of course, in this regard, I’d like to particularly highlight the Vilnius Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, for which I had the honor to lead the negotiations among 55 countries from all over the world at the Council of Europe in Strasbourg.” [30]. “The principles set out by the Vilnius Convention are simple and clear, but the Convention leaves enough leeway to participating states in order to allow to embed these principles in their existing legal and regulatory institutions and traditions.” [51]. “But of course, there will have to be many more binding and non-binding norms and more sector-specific norms and instruments to complement it, which hopefully will be… at least coherent in their logic and spirit.” [49].


Major discussion point

The Vilnius Convention on AI, Human Rights, Democracy and the Rule of Law


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | The enabling environment for digital development


Call for Collaborative Gap‑Identification and Pragmatic Action

Explanation

Schneider calls for identifying shared‑vision areas, developing workable steps, and acting as facilitators to bridge governance gaps before the 2027 Geneva summit, emphasizing pragmatic cooperation.


Evidence

“We will only be the facilitators trying to build bridges and build a climate of open and respectful and constructive dialogue, trying to offer pragmatic structures for trustworthy cooperation so that we can all use the potential of AI, to say it again, to live together in peace, prosperity and security.” [13]. “So we will use the time until the Geneva Summit next year to continue to identify gaps in global and regional governance of AI and achieve our shared objectives so that AI is used for innovation, while at the same time legitimate concerns and risks are appropriately addressed.” [25]. “We will first try to identify areas where there’s a willingness and a shared vision to make progress together and then work with all of you on pragmatic and workable steps towards this vision.” [36].


Major discussion point

Call for Collaborative Gap‑Identification and Pragmatic Action


Topics

Artificial intelligence | Capacity development | Internet governance


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs


Session transcript

Speaker 1: Namaste. Thank you so much for that introduction. Good evening everyone. It is truly an honor to be here today. In his May 2011 address to the British Parliament, President Obama said, and I quote, I am told that the last three speakers here have been the Pope, Her Majesty the Queen, and Nelson Mandela, which is either a very high bar or the beginning of a very funny joke. Unquote. Even though I am no President Obama, I feel something similar standing before you this evening. Being in the company of leaders like our Honorable Prime Minister Modiji, Sundar Pichai, Rishi Sunak, Sam Altman, Mukesh Ambani, Narayana Murthy, and my father, to name a few, this for sure is a very high bar that I will surely not reach. I am very nervous, and this is clearly not a joke. Under the leadership of our Honorable Prime Minister, it has been extraordinary to watch India step into a driving role in the global AI discourse. As a young leader, I feel extremely grateful to have this platform, and I feel that it is our responsibility to make sure that we deliver. In India’s journey from a $4 trillion economy to a $40 trillion economy, in the arc that stretches from where we are today to the Viksit Bharat we aspire to be by 2047, technology will play a decisive role. This moment carries a similar taste to that of the Industrial Revolution, a period where the relationship between human labor and economic output was fundamentally rewritten. And yet, I would argue, what we are witnessing today is even more profound. The Industrial Revolution amplified our physical capabilities. AI is amplifying our cognitive ones. What we are living through is nothing less than a Cambrian explosion of possibilities, a phase where entirely new forms of value and new modes of human potential are emerging at a pace that defies our linear thinking. We are standing at a seminal moment in the history of human progress. A moment of extraordinary possibilities.
In our humble attempt to translate these possibilities into reality, I am here to introduce Birla AI Labs. When it comes to the Aditya Birla Group and AI, I want to be clear. My father has been at this for a while. Deliberately, quietly, and steadily. Not for the spectacle, but to deliver tangible value to our stakeholders. Birla AI Labs has a dual mandate. The first is to service my father’s direction and act as an apex AI body for the Aditya Birla Group, building solutions alongside our business tech teams to unlock new value across our businesses. The second is to operate as a frontier research lab, doing ongoing original research at the cutting edge and translating that science into proprietary AI products for the open market. This dual positioning gives Birla AI Labs a rare advantage: real-world data, domain know-how and enterprise-scale deployment through the group, paired with the freedom to build category-defining products for global markets. Let me share what this looks like in practice. As part of our first mandate, driven by my father, we are executing AI deployment across the Aditya Birla Group. The group has been operational for the last 160-plus years, and we have decades of operational data across manufacturing, financial services, commodities, consumer businesses, and a growing bench of talent that understands both the science and the business, giving us an undeniable moat. With a $120 billion market cap, operating across 42 countries with over 250,000 employees, we are witnessing tangible early gains across our diverse portfolio as advanced analytics and AI reshape everything from supply chains to workforce management. Let me start with Birla Estates. We will be using AI to compress project concept timelines by 90%, freeing over 2,000 man-days a year. The immediate impact is efficiency.
Architects and developers are no longer constrained by the time it takes to test an idea. Contract intelligence tools will now give our teams a unified, accurate view of every agreement, flagging potential claims before they escalate. Moving into financial services, the transformation takes a completely different shape. Aditya Birla Capital has built one of the most ambitious Gen AI programs in India’s financial sector, not by picking a single use case, but by going after the entire value chain at once. Underwriting turnaround time is down 50%. Credit assessment preparation has been cut by 90%. A fully AI-enabled sales program is already targeting more than $100 million in gross sales. And it’s not just about the sales, it’s also about the value of the product, while the customer service platform is pushing first-call resolution beyond 90%. What makes this remarkable is not any individual number. It is the concurrence. Then there is Hindalco. And here the story shifts register entirely. This is about applying intelligence in one of the most physically demanding, energy-intensive industries in the world. On the shop floor, a proprietary factory intelligence integrates 24 operational KPIs in real time, turning what were once static spreadsheets into a living intelligence layer that surfaces anomalies before they escalate. And what we are building is more ambitious still: a digital twin for our smelters and furnaces, and an AI layer on top that will orchestrate the entire coal-sourced power ecosystem. But if there is one place in our portfolio where AI feels most consequential, most human, it is Tantra, our microfinance business. Here we are embedding AI across sales, audit and quality control, and we expect it to unlock at least 30% in productivity gains. It means a loan officer can reach more people. It means a woman in a village gets access to capital that she would not have had access to otherwise.
That kind of efficiency does not just improve margins, it has the potential to improve lives. The consumer businesses bring this completely full circle. I have come to believe through my experience that to build great brands today, product must be backed by content-driven distribution. Our fashion, retail and jewelry businesses are deploying AI for hyper-personalized marketing and real-time inventory intelligence. Within Birla Cosmetics, our brands Love Etc and Contraband are using AI creativity tools to move from campaign ideation to final asset delivery at a fraction of the traditional cost. What this tells me is that the question is no longer whether AI can work in complex, capital-intensive, real-world industries. We have seen it and we know it can. The real question is how fast and how deep should we go, given the ambiguity that surrounds artificial intelligence. Now, on to our second mandate. And this is where Birla AI Labs operates as a frontier research lab. The conviction here is simple. India’s next great institution will emerge at the intersection of deep research, applied engineering and market creation. That is our North Star. To do this, we have assembled a global team of researchers and engineers from Oxford and IIT Madras to BITS Pilani, ISRO, Google and Goldman Sachs. Our first major research vertical is in structured foundation models, a field that a recent Forbes article estimates at a $600 billion market opportunity. Often overlooked, a vast majority of the world’s data sits in time series and tabular formats: stock prices, sensor readings, supply chain signals, weather patterns, energy consumption, patient vitals. This data could actually power predictive intelligence in industry, in finance, in infrastructure, in healthcare. In December, our team was in Europe and in San Diego, where our paper, Time to Time, was accepted. It asks a very provocative question: do these time series foundation models actually understand what a market crash is?
Or are they just fitting curves? A researcher showed that you can reach inside a model’s hidden states, inject the signature of a historical crash, and watch the forecast shift accordingly. This is not curve fitting. This is a model that has learned something about the structure of the world. Our researchers are working at that frontier. This thesis for Birla AI Labs has been presented at the Oxford AI Summit and the World Summit AI in the Netherlands in 2025. This lab has built, is building I would say, a credible global research presence: presenting at top venues, partnering with leading institutions, and attracting talent that could work anywhere in the world but has chosen to build for India. A second research vertical is one that I believe the industry has a moral obligation to pursue. AI now mediates the everyday decisions, relationships and information environments of over 1.7 billion people worldwide. Yet the study of what this does to human cognition, agency and daily life remains nascent. We at Birla AI Labs want to do something about this. We are here to help. We conducted a study with Delhi University students to measure how language model usage affects curiosity and cognitive agency among students. The results of this study will be presented at King’s College this June. This is the kind of research that industry too often leaves to others. But I believe that those of us who are building AI have a responsibility to understand its human consequences, not as an afterthought, but as a core part of the enterprise. Alongside the research, we are also building tools. In December 2024, we launched a beta version of an AI-native research and productivity platform at IIT Bombay, combining agentic search, real-time data processing and multimodal intelligence to deliver contextual insights across the Internet, documents and financial data. That platform is now being used across my own office to drive day-to-day efficiency.
It is a tangible example of what happens when frontier research meets applied deployment. And that is exactly the loop that we at Birla AI Labs are designed to close. This approach, building at the frontier while staying rooted in real-world application, is not new to us. The Aditya Birla Group’s history has been very intertwined with the story of our nation. My forefathers have built through every chapter of India’s journey, through independence, through liberalization, through globalization. Ours is a history of reading the moment, adapting with conviction, and building institutions that outlast the disruptions that gave birth to them in the first place. This century of building has given us something invaluable: a muscle memory for navigating tectonic shifts. We are here to build a new world. The generations before me, my brother and my sister, have learned to read the early tremors, to invest before consensus forms, and to build for decades rather than just quarters. Every generation of our group has faced a moment where the old playbook had to be rewritten. What is different today is the element of high ambiguity and uncertainty, which, if we look at it closely, can give rise to immense opportunity. And that is precisely what makes this moment so thrilling and so consequential. But here is what I have come to believe very, very strongly. No single institution, no matter how large or how well resourced, can navigate this epoch alone. The journey from $4 trillion to $40 trillion will not be powered by industry acting in isolation. It will require something far more fundamental. It will require us to build an ecosystem that brings academia, industry and policy into genuine, sustained collaboration.
As India writes its AI chapter, we intend to be on the front lines, not as observers, not as fast followers, but as honest and truly responsible builders of technology, of institutions and of the ecosystem this country needs to lead. And we will do so with utmost responsibility. The playbook for what comes next has not yet been written. And at the Aditya Birla Group and at Birla AI Labs, we look forward to writing it together. Thank you all so very much. It’s been an honour.


AI as cognitive amplifier of the new industrial era

Explanation

The speaker frames AI as the modern equivalent of the Industrial Revolution, amplifying human cognition rather than physical labor. This positions AI as a transformative catalyst for creating new forms of value and human potential.


Evidence

“AI is amplifying our cognitive ones.” [1] “The Industrial Revolution amplified our physical capabilities.” [2]


Major discussion point

India’s AI‑driven economic transformation


Topics

Artificial intelligence | The digital economy | Social and economic development


AI essential to lift India to a $40 trillion economy by 2047

Explanation

India’s ambition to grow from a $4 trillion to a $40 trillion economy hinges on technology, with AI identified as a decisive driver for this leap by 2047.


Evidence

“India’s journey from a $4 trillion economy to a $40 trillion economy in the arc that stretches from where we are today to the Vixit Bharat we aspire to be by 2047, technology will play a decisive role.” [13]


Major discussion point

India’s AI‑driven economic transformation


Topics

Artificial intelligence | The digital economy | Social and economic development


Leadership responsibility to steer AI development responsibly

Explanation

The speaker emphasizes that builders of AI must proactively consider the human impact of their technologies, making responsibility a core enterprise priority rather than an afterthought.


Evidence

“But I believe that those of us who are building AI have a responsibility to understand its human consequences, not as an afterthought, but as a core part of the enterprise.” [10]


Major discussion point

India’s AI‑driven economic transformation


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | Building confidence and security in the use of ICTs


Birla AI Labs’ dual mandate

Explanation

Birla AI Labs is tasked with two complementary missions: serving as the apex AI body for the Aditya Birla Group to deliver business value, and operating as a frontier research lab that creates original science and market‑ready AI products.


Evidence

“Birla AI Labs has a dual mandate.” [24] “The first is to service my father’s direction and act as an apex AI body for the Aditya Birla Group, building solutions alongside our business tech teams to unlock new value across our businesses.” [29] “The second is to operate as a frontier research lab doing ongoing original research at cutting edge and translating that science into proprietary AI products for the open market.” [30]


Major discussion point

Birla AI Labs’ dual mandate (enterprise delivery + frontier research)


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Rare advantage from dual positioning

Explanation

Combining real‑world data, domain expertise, and enterprise scale with the freedom to innovate gives Birla AI Labs a unique competitive edge to create category‑defining AI products.


Evidence

“This dual positioning gives Birla AI Labs a rare advantage.” [32] “Real world data, domain know-how and enterprise scale deployment through the group paired with the freedom to build category defining products for global markets.” [39]


Major discussion point

Birla AI Labs’ dual mandate (enterprise delivery + frontier research)


Topics

Artificial intelligence | The enabling environment for digital development


Birla Estates AI compresses project timelines and saves man‑days

Explanation

AI tools are used to accelerate project concept development by 90 % and free more than 2,000 man‑days each year, dramatically improving efficiency in real‑estate operations.


Evidence

“We will be using AI to compress project concept timelines by 90%.” [19] “Freeing over 2,000 man days a year.” [52]


Major discussion point

Enterprise AI deployments across Birla Group businesses


Topics

Artificial intelligence | The digital economy | Capacity development


Aditya Birla Capital AI cuts underwriting time, credit‑assessment effort and drives $100 M+ sales

Explanation

In the financial services arm, AI reduces underwriting turnaround by half, slashes credit‑assessment preparation by 90 %, and powers an AI‑enabled sales program targeting more than $100 million in gross sales.


Evidence

“Underwriting turnaround time is down 50%” [18] “Credit assessment preparation has been cut by 90%” [18] “A fully AI-enabled sales program is already targeting more than $100 million in gross sales” [18]


Major discussion point

Enterprise AI deployments across Birla Group businesses


Topics

Artificial intelligence | The digital economy | Financial mechanisms


Hindalco factory‑floor intelligence and AI‑orchestrated power ecosystem

Explanation

A digital twin of smelters and furnaces, coupled with an AI layer, orchestrates the coal‑sourced power ecosystem, while real‑time factory intelligence integrates 24 KPIs to create a living operational dashboard.


Evidence

“A digital twin for our smelters and furnaces, and an AI layer on top that will orchestrate the entire coal-sourced power ecosystem.” [5] “On the shop floor, a proprietary factory intelligence integrates 24 operational KPIs in real-time, turning what were once static spreadsheets into a living intelligence layer that surfaces anomalies before they escalate.” [53]


Major discussion point

Enterprise AI deployments across Birla Group businesses


Topics

Artificial intelligence | Environmental impacts | The digital economy


Tantra micro‑finance AI drives ≥30 % productivity gains

Explanation

AI is embedded across sales, audit and quality functions in the micro‑finance business, unlocking at least a 30 % uplift in productivity and expanding financial inclusion.


Evidence

“Here we are embedding AI across sales, audit and quality control and we expect it to unlock at least 30% in productivity gains.” [7] “But if there is one place in our portfolio where AI feels most consequential, most human, it is for Tantra.” [57]


Major discussion point

Enterprise AI deployments across Birla Group businesses


Topics

Artificial intelligence | Social and economic development | Financial mechanisms


Consumer brands AI for hyper‑personalized marketing and cost‑effective creativity

Explanation

Fashion, retail and jewelry businesses use AI for hyper‑personalized marketing and real‑time inventory intelligence, while AI creativity tools cut campaign production costs dramatically.


Evidence

“Our fashion, retail and jewelry businesses are deploying AI for hyper-personalized marketing and real-time inventory intelligence.” [9] “Within Birla Cosmetics, our brand Love Etc and Contraband are using AI creativity tools to move from campaign ideation to final asset delivery at a fraction of the traditional cost.” [36]


Major discussion point

Enterprise AI deployments across Birla Group businesses


Topics

Artificial intelligence | The digital economy | Social and economic development


Structured foundation models for time‑series and tabular data – $600 B market

Explanation

The lab’s first research vertical focuses on structured foundation models, a field estimated by Forbes to represent a $600 billion market opportunity.


Evidence

“A field that a recent Forbes article estimates at a $600 billion market opportunity.” [61] “Our first major research vertical is in structured foundation.” [62]


Major discussion point

Frontier research focus areas


Topics

Artificial intelligence | The digital economy | Capacity development


Time to Time paper shows models internalize market‑crash signatures

Explanation

The research paper “Time to Time” demonstrates that time‑series foundation models can learn structural market‑crash signatures rather than merely fitting curves, indicating deeper world‑modeling capability.


Evidence

“Our paper, Time to Time, was accepted … It asks a very provocative question: Do these time series foundation models actually understand what a market crash is?” [48] “A researcher showed that you can reach inside a model’s hidden states, inject the signature of a historical crash, and watch the forecast shift accordingly … This is not curve fitting. This is a model that has learned something about the structure of the world.” [20]


Major discussion point

Frontier research focus areas


Topics

Artificial intelligence | Capacity development


Study on language‑model usage affecting Delhi University students

Explanation

Birla AI Labs conducted a study measuring how interaction with large language models influences curiosity and cognitive agency among Delhi University students.


Evidence

“We conducted a study with Delhi University students to measure how language model usage affects curiosity and cognitive agency among students.” [20]


Major discussion point

Frontier research focus areas


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development


AI‑native research & productivity platform launched at IIT Bombay

Explanation

In December 2024, the lab released a beta version of an AI‑native research and productivity platform at IIT Bombay, now being used internally to boost efficiency.


Evidence

“In December 2024, we launched a beta version of an AI native research and productivity platform at IIT Bombay.” [21]


Major discussion point

Frontier research focus areas


Topics

Artificial intelligence | Capacity development | The enabling environment for digital development


Collaboration across academia, industry and policy is essential

Explanation

The speaker stresses that no single institution, regardless of size or resources, can navigate the AI epoch alone, calling for sustained multi‑stakeholder collaboration.


Evidence

“No single institution, no matter how large or how well resourced, can navigate this epoch alone.” [71]


Major discussion point

Building an ecosystem for responsible AI


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Commitment to honest, responsible building of AI technology and institutions

Explanation

Birla AI Labs positions itself as a front‑line, honest and responsible builder of AI technologies, institutions, and the broader ecosystem needed for India to lead in AI.


Evidence

“As India writes its AI chapter, we intend to be on the front lines, not as observers, not as fast followers, but as honest and true responsible builders of technology, of institutions and of the ecosystem this country needs to lead.” [17]


Major discussion point

Building an ecosystem for responsible AI


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs


Understanding human consequences of AI as a core enterprise priority

Explanation

The speaker reiterates that grasping AI’s impact on humans must be embedded at the heart of business strategy, not treated as a peripheral concern.


Evidence

“But I believe that those of us who are building AI have a responsibility to understand its human consequences, not as an afterthought, but as a core part of the enterprise.” [10]


Major discussion point

Building an ecosystem for responsible AI


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | Building confidence and security in the use of ICTs



Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden


Session transcript

Moderator

Thank you so much, Mr. Cristiano Amon, for that very, very interesting session. And I’m sure each one of us must have gained some new insights out of it. Are you all excited about such sessions, such keynote speakers? A louder yes would do better. Thank you. I think we all keep reading about AI. We all are aware of the challenges in front of the world when it comes to AI. But, capital B-U-T, such sessions are actually adding new perspectives to our understanding of AI, the challenges, and also of what to expect in the future. So I think it’s really time to thank our keynote speakers, who are adding such great value to our understanding of artificial intelligence, as well as to this AI Impact Summit.

And ladies and gentlemen, now it’s my honor to invite Her Excellency Ebba Busch, Deputy Prime Minister and Minister for Energy and Business of Sweden. Sweden has long been a quiet powerhouse of innovation, from Ericsson to Spotify to some of Europe’s most promising AI startups. As Deputy Prime Minister, Ms. Busch is navigating the critical nexus between energy policy and AI infrastructure. Now, that’s a challenge I think every nation will face as data centers demand an ever-growing share of national power grids. Ladies and gentlemen, please join me in welcoming the Deputy Prime Minister of Sweden, Her Excellency Ebba Busch.

Ebba Busch

Thank you so much. Excellencies, distinguished guests, dear friends. Namaste, aap kaise hain. Let me begin by expressing my sincere gratitude towards the government of India and to the organizers of this important summit. It is truly an honor to be here in beautiful, beautiful India. Given this unique chance to address you all today, I would like to talk about three points: first, why it is important to be here; second, some reflections on public legitimacy; and finally, cooperation and AI sovereignty. India today is not only the world’s largest democracy, it is a leading voice in shaping the future global order. Your leadership matters, your perspective matters, and the Global South must be fully included when we shape the rules of innovation, technology governance, and global standards.

I am here today as a European, as a proud European, and as a Swede representing the second largest international delegation here at the AI Summit after France. That’s worth an applause in itself. Thank you so much. The Nordics are deeply engaged here in India, and we are here because we believe this partnership is strategic.

It is long-term and built on trust. India is not only the world’s largest democracy, it is also the world’s youngest democracy. And I am impressed with India’s long-term vision of a better life for young people, with a commitment that stretches across generations. And Sweden shares this long-term commitment. Since India first gained independence, Swedish companies have worked alongside Indian partners, and we have grown together. And as India makes strategic investments in sovereign and democratic development, we have developed different AI models and advanced research ensuring that 1.4 billion people can benefit from AI. This is not only industrial policy. It is in many ways poverty reduction. It is empowerment. It is a development leap of historic proportions.

Sweden intends to be a reliable and innovative partner as India continues its economic rise. Prime Minister Modi often speaks of India as Vishwamitra, a friend to the world. Today, we stand at a new frontier where that friendship is more vital than ever, the frontier of artificial intelligence. Sweden is a proud friend of India. In the ancient scriptures, we read of the Samudra Manthan, the churning of the cosmic ocean. It teaches us that collaboration is the only way to truly unlock the deepest treasures. Today, the vast ocean of data is our samudra and AI is our churning rod. Thank you. So as you understand, clearly, there are very, very good reasons why we are here and why this summit is taking place in India, in New Delhi.

And that brings me to the point of legitimacy. Around 1450, by modern timekeeping, when the printing press was introduced, the reaction from the status quo was not excitement. It was fear. Power had long depended on being able to control information, and suddenly knowledge could scale. And if you look back at the arguments heard then, they feel a bit familiar, actually: this will spread the wrong ideas; people won’t know what to trust; society will lose control; and people, especially writers at the time, will lose their jobs. But the printing press wasn’t dangerous. What was dangerous was not understanding it. Those who understood it could soon reach a nation in only two weeks and a whole continent possibly in two months.

Every major technological shift follows the same sort of emotional curve: from fear to trust, then legitimacy, and finally a worldwide transformation. We are now living through another such moment. And artificial intelligence isn’t just another digital upgrade. It is a fundamental shift. AI is no longer about algorithms alone. It is about control of energy, compute capacity, data, and trust. Nations that master AI infrastructure will shape economic growth, industrial competitiveness, and democratic resilience for decades. It’s going to be a massive shift. Make no mistake, we are not digitalizing the old economy. We are building an entirely new global AI industry, one that will redefine the foundations of productivity, of healthcare, of defense, of energy systems, and of course also of public administration.

The nations that lead this transformation will prosper. Those that merely consume AI built elsewhere will fall behind. The future will not necessarily be decided by the ones that build the biggest models, but rather by the ones that build the most trusted systems. So for me, the question is not whether or not this transformation will happen. The question is who shapes it and on what values. And that is why I am here. So let’s talk a little bit about something else that is often misunderstood: data centers. Because AI, much like fire, is powerful. And in this sense, it is invisible. And it is very energy intensive.

It demands energy-intense data centers, often in the countryside, rupturing forests and fields. To many citizens, data centers look like someone else’s internet using our electricity. At least that’s the debate in Sweden and, I know, in many other countries. But I believe that perception is incomplete. In reality, data centers can be long-term local job anchors if implemented and used correctly. They can enable renewable energy investments. They can be infrastructure for hospitals, for research, defense and industry. And they are the factories of the new economy. And this brings us to the core political challenge. People do not vote for technology. People vote for outcomes: a job, a hospital that works, energy they can afford.

If AI is to become electable in our democracies, policymakers must find a way to translate complexity into tangible benefit. Fear turns into trust when we understand, and when understanding grows. So how do we get there? No nation can build resilient AI infrastructure alone. Democracies have to cooperate. AI sovereignty does not mean isolation. It means choosing your dependencies. To be able to choose our dependencies and the values that shape global AI, we also need a measure of sovereignty over AI. True sovereignty, the way I see it, rests on three pillars. First, jurisdictional control: knowing where your data is stored and processed. Second, infrastructure capacity: having sovereign compute for advanced models. And third, strategic choice: selecting partners from a place of strength, not dependency.

And in a turbulent world, you need to choose your friends carefully. Sweden is choosing India. India provides the incredible scale and speed, the very engine of this movement. Europe and Sweden can provide precision and trust, the filter that ensures that what we extract is the amrit, the nectar of progress for all. Just as Lord Vishwakarma unified divine vision with practical tools, we must unify the human heart with machine power. We must not see AI as a replacement for the human spirit, but as a power multiplier for human dignity. And when we combine India’s digital scale with Sweden’s systematic trust, we do more than build code. We build a future where technology never outweighs humanity. Sweden offers Europe and all of our global partners what the AI transition actually needs.

So now you’ll get a little bit of Swedish bragging, which is not very common. First of all, we have an abundance of clean and reliable energy. We export more electricity per capita than any other European country. AI is becoming the most efficient way to export energy without exporting electrons. In Sweden, AI training can run at roughly one third of the carbon footprint of typical US hyperscaler operations. This transforms us from energy exporter to intelligence exporter, a fundamentally more valuable position. But energy alone is not enough. And that brings me to the second Swedish strength, industrial depth. Sweden has deep expertise in scaling complex industrial systems. We are modernizing traditional industry while building new AI infrastructure.

And Europe cannot be underestimated. You cannot bypass the European Union in the AI stack. Consider just ASML in the Netherlands, the only company in the world producing the extreme ultraviolet lithography machines essential for advanced chips. Or ARM in the United Kingdom, whose architectures power most of the world’s smartphones and an increasing share of data center processors. Or SAP in Germany, embedded in the mission-critical enterprise systems of the global economy. And of course, Ericsson from Sweden, a global leader in 5G and a frontrunner in 6G, the backbone of edge computing and AI-enabled networks. You cannot build the AI ecosystems without Europe. And you shouldn’t, because we’ll be a reliable partner. Third, but not least, trusted institutions.

When you make a deal with a Swede, that is a handshake that you can trust. And Sweden offers the ability to move from strategy to execution. In the Nordics — Sweden, Norway, Finland and Denmark — we are now building AI gigafactories, manufacturing intelligence at industrial scale with near-zero-carbon energy. We combine clean power, political stability, rule of law, technological sophistication and a culture of trust. We see ourselves as a sort of pathfinder, helping define the routes that will shape global AI infrastructure for decades. At the same time, we are making strategic commitments. During this parliamentary term, we have committed a substantial amount of funds to AI research, AI development and implementation, thereby ensuring that Sweden seizes the economic and societal benefits of this transformative technology.

Building on that foundation, we are today presenting in Sweden an AI strategy with high ambitions. The strategy will outline concrete steps that will steer Sweden towards sustained AI leadership. Our strategy not only demonstrates the scale of current commitment, but also maps a path forward for Sweden’s future. And we have launched an AI workshop to help the public sector adopt AI safely and efficiently, because trust is built not by slogans, but by implementation. And this implementation brings me back to India. India understands scale. India understands development. Your investments in sovereign AI models ensure that AI speaks all of your languages, reflects your society and serves your people. This is what real inclusion truly looks like. When 1.4 billion people gain access to AI tools that empower farmers, small businesses, teachers and doctors, that is not just innovation, that is transformation.

Partnerships between India and Sweden combine scale with engineering excellence, market dynamism with institutional trust. Together, we can ensure AI strengthens democracy, drives sustainable growth and expands opportunity. I’d like to sum up by saying that people fear what they do not understand. But what people understand and see value in, they will defend. Our task as leaders is not merely to regulate AI; it is to make it legitimate, to make it understandable, and most importantly, to make it beneficial. If we succeed, AI will not be feared like the printing press. It will be embraced like electricity: invisible, indispensable, and empowering. Let us shape this new industry together, open, competitive, democratic, and inclusive. The future of AI must empower our people and

Moderator

AI Summit Significance

Explanation

The moderator highlights that sessions at the AI summit are providing fresh viewpoints that deepen understanding of AI challenges and future expectations. These new perspectives are seen as essential for shaping how stakeholders anticipate AI developments.


Evidence

“But, capital but, B -U -T, such sessions are actually adding such new perspectives to our understanding of AI, the challenge, and also the future, what to expect in future.” [2].


Major discussion point

AI Summit Significance


Topics

Artificial intelligence


Ebba Bush

Strategic Long‑Term Partnership

Explanation

Ebba Bush describes the Sweden‑India AI partnership as built on long‑term trust and strategic alignment, emphasizing that both nations view the collaboration as a durable and mutually beneficial relationship.


Evidence

“It is long-term and built on trust.” [16]. “the Nordics are deeply engaged here in India, and we are here because we believe this partnership is strategic.” [19].


Major discussion point

Sweden‑India Strategic AI Cooperation


Topics

Artificial intelligence | The enabling environment for digital development


Scale Meets Precision for Inclusive AI

Explanation

The speaker explains that combining India’s massive digital scale with Sweden’s systematic trust and engineering excellence enables AI solutions that can serve 1.4 billion people, delivering inclusive benefits across sectors.


Evidence

“And when we combine India’s digital scale with Sweden’s systematic trust, we do more than build code.” [30]. “And as India makes strategic investments in sovereign and democratic development, we have developed different AI models and advanced research ensuring that 1.4 billion people can benefit from AI.” [31].


Major discussion point

Sweden‑India Strategic AI Cooperation


Topics

Artificial intelligence | Social and economic development | Closing all digital divides


Fear‑Trust‑Legitimacy‑Transformation Curve

Explanation

Bush outlines a historical emotional trajectory for transformative technologies, stating that AI moves from fear to trust, then legitimacy, and finally to worldwide transformation.


Evidence

“It goes from fear, then trust, then legitimacy, and finally, a worldwide transformation.” [41]. “Fear turns into trust when we understand.” [25].


Major discussion point

Legitimacy, Public Trust and Perception of AI


Topics

Artificial intelligence


Democratic Translation of AI Complexity

Explanation

She argues that for AI to be accepted in democracies, policymakers must convert its technical complexity into clear, tangible benefits that voters can see and evaluate.


Evidence

“If AI is to become electable in our democracies, policymakers must find a way to translate complexity into tangible benefit.” [9]. “People vote for outcomes.” [48].


Major discussion point

Legitimacy, Public Trust and Perception of AI


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


AI Sovereignty Framework

Explanation

Bush defines true AI sovereignty as resting on three pillars: jurisdictional control of data, sovereign compute capacity for advanced models, and the strategic choice of partners based on strength rather than dependency.


Evidence

“First, jurisdictional control, knowing where your data is stored and processed.” [56]. “Second, infrastructure capacity, having sovereign compute for advanced models.” [53]. “And third, strategic choice, selecting partners from a place of strength, not dependency.” [24]. “True sovereignty, the way I see it, rests on three pillars.” [57].


Major discussion point

AI Sovereignty Framework


Topics

Artificial intelligence | Data governance | The enabling environment for digital development


Data Centers as Job Anchors & Renewable Enablers

Explanation

While acknowledging that data centers are energy‑intensive, Bush points out they can become local employment hubs and support renewable energy investments when properly implemented.


Evidence

“local job anchors if implemented and used correctly.” [60]. “They can enable renewable energy investments.” [59].


Major discussion point

Data Centers, Energy, and Economic Impact


Topics

Environmental impacts | Social and economic development


Swedish Clean Energy Lowers AI Footprint

Explanation

She highlights that Sweden’s abundant clean, low‑carbon energy allows AI training to operate with roughly one‑third the carbon footprint of typical U.S. hyperscaler operations, turning the nation from an energy exporter to an intelligence exporter.


Evidence

“In Sweden, AI training can run a roughly one third of the carbon footprint of a typical US hyperscaler operations.” [70]. “This transforms us from energy exporter to intelligence exporter, a fundamentally more valuable position.” [68].


Major discussion point

Data Centers, Energy, and Economic Impact


Topics

Environmental impacts | Artificial intelligence


Sweden’s Competitive AI Advantages

Explanation

Bush enumerates Sweden’s strengths: abundant clean energy, deep industrial expertise in scaling complex systems, and trusted institutions that provide precision and trust, positioning Sweden as a reliable AI partner.


Evidence

“But first of all, we have an abundant of clean and reliable energy.” [76]. “Sweden has deep expertise in scaling complex industrial systems.” [38]. “Europe and Sweden can provide precision and trust, the filter that ensures that what we extract is the amrit, the nectar of progress for all.” [40].


Major discussion point

Sweden’s Competitive Advantages in AI


Topics

Artificial intelligence | The enabling environment for digital development | Environmental impacts


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon

Session transcript

Announcer

That was really an interesting session by the CEO of Cisco, highlighting the role of agentic AI, as well as physical AI, in the current scenario. And the last line was really an assuring one, saying that the future will not be built by AI, but by humans who can confidently put AI to use. Well, ladies and gentlemen, moving on. Now it’s my honor to introduce a leader who’s been at the forefront of shaping the future of wireless technology and intelligent computing. Mr. Cristiano Amon is the president and chief executive officer of Qualcomm, a company that has defined and continues to redefine the global compute, connectivity, and AI landscape. And well, AI doesn’t just live in the cloud; it runs in your pocket, in your car, on the factory floor.

And Mr. Amon is leading Qualcomm’s push to bring powerful AI processing to the edge, enabling billions of devices to think locally and act intelligently. Ladies and gentlemen, it’s my pleasure to invite Mr. Amon, President and CEO of Qualcomm, to the stage. Please give a round of applause.

Cristiano Amon

Good afternoon, everyone. Very, very happy and privileged to be here. I’m incredibly excited and energized about what’s happening here in India with AI, and I think about what’s happening with AI in general. What I’d like to talk to you about today is the next chapter of AI. And this is something that’s very near and dear to Qualcomm. We’ve been talking about this because I think we’re really entering the next phase of AI. As AI gets developed, it’s going to be part of everything that we do, especially the interaction that we have with computers and with the digital world. So intelligence is now shifting from something that we all started with and experienced, going to a chatbot and asking questions, to something that is going to be all around us, everywhere, all the time, especially with the devices.

I actually loved the presentation right before from my friend Jitu from Cisco, when he talked about the traffic change from chatbots to agents. And this is important. You know, I’ve often been talking about how we should be thinking about AI in a much broader sense. And it’s easier for a company like Qualcomm to talk about this because we build a lot of the chips that go into devices where the humans are. So as you create AI in the data center and you train and create those models on all this data, and you deploy them, you’re starting to see that they get utilized in different ways. There is one fundamental thing that AI is doing for us:

It is changing the human-computer interface, because we no longer have to learn how to use a computer. I’ve often talked about this in different presentations: we learned how to use a keyboard, and we still use that on a laptop; then we learned to touch a screen; but now the AI understands what we see, what we hear, what we say, what we write. So in itself it’s changing computers, it’s changing the devices we interact with, and it’s becoming a pervasive technology that is going to be everywhere. And I think that’s the mission of Qualcomm: the same way that we did with mobile communications and the creation of the computer that fits in the palm of your hand, it is the ability to take that intelligence everywhere. So we’re going to be creating a number of important shifts in the industry, and I want to start by talking about the mobile industry. We have had the privilege as a company to be part of every single transition of wireless technologies, and today I’m going to talk about the next one that is coming as well. What we saw with the transitions of wireless technology is that at every generation of wireless you saw big shifts, not only in devices but in companies. For example, when you went from the ability to have a phone that you carry with you to connecting the phone to the internet, all of a sudden that phone became a computer, and it started to drive the future of the internet — a country like India leapfrogged the internet and went straight to the mobile internet. And that’s going to be true again when you think about AI. In the mobile ecosystem, AI is going to fundamentally change how we think about the mobile device. All of us today, and me included, look at our smartphone as our inseparable device, where most of our digital life is.

And the smartphone today is at the center of everything that we do. But now that’s going to get replaced by an agent. When you think about the entire value chain that got created for the mobile industry, there’s an enormous amount of value in things like OSs and application stores. That becomes the platform: when you develop an application, you do different things on the platform. Now think of an agent that understands human intentions, because you just need to tell it what you want. Or it’s going to see what you see and make a decision for you, assuming you authorize it. When that happens, that’s where the value is, because then the agent is free.

It can go to the internet and do things. It can go to your phone and do things. And you’re no longer bound by the constructs of your hardware or the apps in the application store. So as a result, we expect that AI is going to drive a fundamental shift in the mobile industry where the agent is going to be at the very center. And as the agent is at the very center, everything surrounds the agent. You can access the agent from your mobile phone, but you can also access the agent from your glasses, or from a pendant, or from anything that you wear. And I think we’re going to look at the mobile ecosystem not only as a single-device experience; you’re going to connect to agents across multiple types of devices.

And I think that’s incredibly exciting. And that’s not only unique to what you’re going to see in consumer devices. That’s going to happen also with things, because you can also create AI that’s going to get trained on different things — on physical signals, like physical AI, on sensor data — and you’re going to deploy that in every computer. So what’s exciting about AI is that it’s going to very quickly evolve from something where you go to a browser and ask a question. And I think, as my colleague from Cisco said, it got trained on all the publicly available data on the Internet. You’re now going to go to a different type of AI experience that’s going to be the fundamental software running in all the devices around us, shaping how you interact with those devices.

So as we think about this future, I just want to give you an example. What we saw across the industry is that workloads, or use cases, have shifted. Devices didn’t go anywhere, but their workloads shifted. We used to do a lot of things in the early days of the Internet on the laptop. Take, for example, e-commerce: you would do it on your laptop. Now, most of the e-commerce in the world is done on a phone. Tomorrow — or it could be as early as the end of this year — you start to see the proliferation of glasses. If you have glasses that have agents, are connected to the Internet, and have a camera, those smart glasses see what you see.

You can just look at something and say, I’d like to buy this. Can you check this? For example, check this on Flipkart. Just buy it for me. With integration of the payment system, you get a bill and say, pay this, notify me when it’s done, and so forth. So I think we’re going to see this fundamental change of devices. But that’s also going to be true about the revolution that’s happening in robotics and the revolution happening in industrials. So that’s an incredible opportunity. And we have been incredibly focused as a company on driving that future of computing. There’s also a big debate, which I believe is the wrong way to look at it, about cloud and edge.

There’s a lot of debate about, oh, this is going to be running on the cloud, this is going to be running on the edge. And actually, it does not matter. Think about your device today. Your smartphone today has an incredible amount of processing power, and there are a number of different things that run on your smartphone. If you put it on airplane mode, you probably don’t use it. You just put it away and wait until you get connectivity again. It’s the most cloud-connected device, because those things work as one. And you’re now going to have intelligence that’s going to be incredibly distributed: across the cloud, across the near edge, the network itself, and on device.

And it’s all going to work seamlessly. There are going to be things that you’re going to do on the device, because they require an instant response or require unique context, unique information that is relevant to you. Some things are going to be done on the cloud, and they’re both going to keep growing, and it’s going to transform how we think about computers. So I’d like to provide a simple description. Let’s say we are all using agents, and you’re going to pick the agents that you like. For the agents to be useful, they need to be fast. They need to be relevant for you. Let’s go back to the example I provided on the glasses.

And you have those smart glasses and you’re walking around and you have a camera. Then all of a sudden you see somebody and you ask the glasses, like it’s your friend next to you: who is this person? And you want to get a response: this is so and so. Or you’re going to say, can you translate this for me? What is this? Can you pay this for me? This thing has to be seamless. Seamless means no friction. So certain things are going to be done on your device, and other things are going to be done on the cloud. It’s going to be completely transparent to you. But the interesting thing is, for those agents to be very useful, they need to be contextually aware of what is relevant to you.

So over time, the agent I’m going to be using and the agent you’re going to be using need to be relevant to each of us. You’re going to have a lot of things being processed and understood about you. So much so that I believe that, in the end game — and I think it was said in the prior presentation from Cisco — all the data that is publicly available on the Internet, that you train models on, is a fraction of the data that is going to be generated. Think of, for example, glasses with a camera that sees everything that you see, that annotate the image, get information about the image and the context, and read what you read.

And so forth — that is an incredible amount of data, and it’s going to provide a lot of important context for those models that are going to be relevant to you. That is the future, and it’s an incredible transformation. It’s going to transform every industry. No industry is immune to this. And I think what we’re doing at Qualcomm is really creating the future hardware and software that will help enable this future across all the devices. We’re a very unique semiconductor company. I think we’re probably one of the few companies that can be working on chips from sub-2 milliwatts for a smart earbud that you’re going to wear, all the way to now 2,000 watts per chip in the data center.

But I think that’s the incredible future: AI is going to transform every single computer. And the agents are going to be at the center of the experience. It’s going to replace a lot of the OSs and applications. And that is the new future of technology, including the future of mobility. And that’s why we’re incredibly excited about this. And with that, I want to talk about something that is happening, which is the next generation of wireless technologies. I would like to provide an example from the past. When you think about telecom networks, I think we’re probably one of the American telecom companies that really focused on the evolution of cellular technology.

When you think about the evolution of this sector, when this all started, it was about providing a telephone, which I think for all of us was an incredible thing. You have a twisted copper pair to get to your home. You pick up, you get a dial tone, you dial, and eventually you could dial anybody in the world with a telephone. Even how cellular started was about making sure all of us had the ability to carry a telephone. That was 2G: you can call everyone. That’s different today. Now you have a very high-performance broadband network for data. Voice is just one application among the many applications that you do with the network. It fundamentally changed the nature of the infrastructure.

The equipment was different. The use case is different. We’re heading to the next big transformation of the telecom sector. So 6G is going to provide an evolution of connectivity: faster speed, lower latency, higher coverage. But that’s not the whole story; that’s just a piece of the story, continuing to improve the connectivity. The biggest part of 6G is that AI, like I said before, is now going to come to the telecom network. And that becomes a large-scale AI network that processes and gets trained on all of the signals that happen on the network, providing new capabilities. One of the biggest features of 6G is the network as a sensing network at scale.

I’m going to give an example. The network not only will provide connectivity between your device and the Internet, but will sense everything that’s around you. It will use techniques that you see today in autonomous driving cars, like radar, as an example, to detect your environment. It’s going to provide a map of everything that is happening at scale. And you’re going to have completely different types of services for different industries. It will provide context for your agents — very important — and the network will have that role. It will provide traffic management systems and some of the use cases that are going to be part of full self-driving cars. It will do drone detection and manage the traffic control.

Of the drone economy, there’s going to be an aerial wide-area network, and much more, because AI is also going to the network. It’s going to be one of the biggest transitions I think we have, as big as going from voice to data, and it’s all going to be part of this future of AI. And I just want to make another parallel, I think, to the presentation from my colleague from Cisco. It puts a fine point on the network that needs to be built, the capability of the infrastructure, the security and trust. But that is an incredible future with technology. And as I get to the end of the presentation, I want to highlight that India has an incredible opportunity with this transformation.

We have seen that those big shifts in technology create opportunity and change players. They changed, I think, the role of different countries in what they provide globally; it’s a global scale for the technology, and that’s an incredible opportunity for India. I look at what happened in mobile in India: one of the largest data consumptions per user on mobile devices in the world is in India. The whole Internet is mobile. When you think about the potential and all of the things that I just discussed about how AI is going to change everything — create new devices, new experiences, new services — that becomes a massive opportunity. And when I look at the ambitions that were set by the AI Summit, I’m going to provide just some examples.

Those are just examples. It can be much broader, but I just want to connect with some of the ambitions of the Summit. There is a process of jumping into large-scale industrialization. India is becoming a global manufacturing hub as well. And with AI, you go from the very beginning, with smart manufacturing and automation, with incredible change happening in this sector enabled by those technologies. Same thing with smart cities, the ability to continue to evolve the infrastructure, the ability to use AI to increase the scale, the reach, the access for healthcare. How you change education. Those are incredibly powerful learning tools. The ability to actually use some of those technologies to empower people with information, and you’re going to have an ongoing learning experience.

Think about those agents with you all the time, answering questions, telling you how to do things, especially when you think of the context, for example, of those new devices such as smart glasses. And it can fundamentally change industries such as agriculture. Those are just a few examples of the potential of connecting this technology with everything, I think, that is going on in India. It’s an incredible and exciting future enabled by AI. And really, it’s about meeting the ambition of democratizing this technology for everyone and actually having an important role in increasing global welfare. And, you know, as a company that has always been focused on enabling our partners and other industries to innovate: I think in the history of Qualcomm, we never believed it is the job of one company to be responsible for all the innovation.

It’s really to enable many industries and partners. We’re incredibly excited to play a very small part in this mission. Thank you very much for the opportunity to talk with all of you.


Cristiano Amon


AI as a pervasive, edge‑driven technology

Explanation

Amon describes AI moving beyond simple chat‑bots to become an omnipresent agent that can understand vision, audio and text across all devices. He also says AI will fundamentally change the human‑computer interface, removing the need for keyboards and touch screens.


Evidence

“So intelligent is now shifting for something that we kind of started and we all experience going to, you know, a chat box and asking questions into something that is going to be all around us and everywhere all the time, especially with the devices.” [7]. “You’re now going to go to a different type of AI experience that’s going to be the fundamental software that is going to run in all the devices around us and how you’re going to have interaction with the devices.” [5]. “it is changing the human computer interface because we don’t have to now learn how to use a computer if you know i’ve been uh often talking about this in different presentations we learn how to use an s2 keyboard and we still use that on a laptop then we use like to touch a screen but now the ai understands what we see what we hear what we say what we write so in itself it’s changing computers…” [12]


Major discussion point

AI as a pervasive, edge‑driven technology


Topics

Artificial intelligence


Agent‑centric transformation of the mobile ecosystem

Explanation

Amon argues that agents will become the core platform of the mobile ecosystem, supplanting traditional operating systems and app stores. Value will shift from hardware and apps to agents that operate across phones, glasses, wearables and other form‑factors.


Evidence

“we expect the AI is going to have a fundamental shift in the mobile industry where the agent is going to be at the very center.” [1]. “It’s going to replace a lot of the OSs and applications.” [17]. “Now, when you think about the entire value chain that got created, for example, for the mobile industry, there’s an enormous amount of value on things like OSs and application stores.” [35]. “You can access the agent from your mobile phone, but you can also access the agent from your glasses or for a pendant or for anything that you wear.” [30]. “you’re going to connect to agents across multiple types of devices.” [23].


Major discussion point

Agent‑centric transformation of the mobile ecosystem


Topics

Artificial intelligence | The digital economy


Distributed intelligence: edge vs. cloud

Explanation

Amon says the debate over cloud versus edge is misplaced because AI workloads will be distributed seamlessly across cloud, near‑edge, network and on‑device, delivering a transparent user experience.


Evidence

“And you’re going to have now intelligence that’s going to be incredibly distributed across the cloud, across the near edge, the network in itself, in and on device.” [36]. “There’s also a big debate, which I believe is the wrong way to look into that, which is about cloud and edge.” [37]. “So certain things are going to be done on your device and the thing’s going to be on the cloud.” [38]. “Something is going to do on the cloud and they’re both going to be growing and it’s going to be transforming how we think about computers.” [39].


Major discussion point

Distributed intelligence: edge vs. cloud


Topics

Artificial intelligence | The enabling environment for digital development


6G and AI‑enabled networking

Explanation

Amon outlines that 6G will embed AI into the network, creating a large‑scale sensing layer that provides contextual data for agents. This AI‑powered network will enable services such as autonomous‑driving maps, drone detection and advanced traffic management.


Evidence

“One of the biggest features of 6G is the network, is the sensing network at scale.” [44]. “The biggest part of 6G is AI, like I said before, is now going to come to the telecom network.” [45]. “because AI is also going to the network.” [48]. “It will do drone detection and manage the traffic control.” [54]. “It will provide traffic management systems and some of the use cases that are going to be part of full self -driving cars.” [55]. “We’ll use techniques that you see today in autonomous driving cars, like radars, as an example, to detect your environment.” [58].


Major discussion point

6G and AI‑enabled networking


Topics

Artificial intelligence | The enabling environment for digital development


Opportunities for India and global AI democratization

Explanation

Amon highlights India’s massive mobile data consumption as a unique opportunity for AI‑driven transformation in manufacturing, smart cities, healthcare, education and agriculture. He also stresses Qualcomm’s role in enabling partners rather than monopolising innovation, thereby democratizing AI for global welfare.


Evidence

“I look of what happened in mobile in India, and one of the largest data consumption per user in mobile devices in the world is in India.” [59]. “with smart manufacturing and automation with incredible change that is happening in this sector enabled by those technologies.” [60]. “It’s a global scale for the technology, and that’s an incredible opportunity for India.” [61]. “And it can fundamentally change industries, for example, such as agriculture.” [63]. “Just a few examples of the potential of connecting this technology with everything, I think, that is going on in India.” [64]. “And as a company that has always been focused on enabling our partners and other industries to innovate, I think the history of Qualcomm, we never believe is the job of one company to be responsible for all the innovation.” [67]. “And really, it’s about meeting the ambition of democratizing this technology for everyone and actually have an important role in increasing the global welfare.” [68].


Major discussion point

Opportunities for India and global AI democratization


Topics

Artificial intelligence | Social and economic development | The enabling environment for digital development


Qualcomm’s unique hardware capabilities

Explanation

Amon points out that Qualcomm can design chips across a vast power spectrum, from sub‑2 mW wearables to 2 kW data‑center processors, enabling AI to run everywhere.


Evidence

“I think we’re probably one of the few companies that can be working on chips from sub-2 milliwatts to a smart earbud that you’re going to wear all the way to now 2,000 watts per chip on the data center.” [75].


Major discussion point

Qualcomm’s unique hardware capabilities


Topics

Artificial intelligence | The enabling environment for digital development



Announcer


Human‑centric vision of the AI future

Explanation

The announcer emphasizes that the future will be built by humans who confidently apply AI, not by AI itself, underscoring a human‑centered approach to technology.


Evidence

“And also the last line was really an assuring line saying that the future will not be built by AI, but by humans who can confidently put AI to use.” [78].


Major discussion point

Human‑centric vision of the AI future


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Climate-Resilient Systems with AI

Building Climate-Resilient Systems with AI

Session transcript

Uday Khemka

Very exciting sessions. I’ll just wait, guys. So we are meeting for a tremendously important subject. And this has been a great summit. I know you’re all energized, inspired, excited and exhausted at the same time. And we will get to a moment when this subject fills a room of 5,000. So that’s what we’re going to work towards. But you’re here today. We’re delighted to have an absolutely tremendous panel with us today. I’m deeply honored that people have flown across from the U.S., from Europe, from Singapore and so forth. And we have a lot of material to cover. I should say that the triple challenge that we are dealing with in this panel is perhaps the most important challenge any of us will face in our lives.

Which is to promote development on the one side, while dealing with the creation of a sustainable planet in terms of climate change. You’re a self-selecting group; you’re all here with us, and there’s a reason for that. You already know about climate change. You already know about AI. The fact is, as you know, we are not necessarily winning the battle on climate as yet, and so we need to deal with both mitigation and adaptation, and this panel will address both of those two things. We have very little time in the panel, so I am going to speed along, but that’s a good metaphor for the very little time we have to do something about climate change.

So we’re in action mode. It’s a call for collaboration. I apologize to our speakers and panelists for a number of things. One, this is not a real panel. We’re not going to be having discussions. This is just boom, boom, boom, talking to you about what everyone’s doing. Secondly, there’s going to be a kind of switcheroo moment when some other speakers come up and some of us are replaced up here. Apologies for that; it’s just the intensity of the panel. For all those things I apologize in advance, but I don’t apologize for the incredible quality of our panelists today. These are amazing people. And I would just end by saying that this is not a normal session.

This is an invitation for radical, action-oriented collaboration with all of you. On that basis, let me begin by talking a little bit about a summit we held last year in London, the background to it, and an organization that some of us are very deeply involved with and almost everyone here is a friend of, called the Green Artificial Intelligence Learning Network. You will immediately note it has the cutesy acronym of GRAIL, like the Holy Grail. And what we’re trying to do is really see what the synergy is between the development agenda and the climate agenda through the application of AI. I’m going to speed through this. We’re going to then move to Professor David Sandalow, who actually anchored our summit last year, which was the first major global summit on the application of AI to climate change.

and has very kindly flown in from Columbia. I’m going to ask our speakers one more favor: instead of my coming up and introducing all of you, if you don’t mind introducing yourselves, that will speed us along the way. So let’s go through this. As you all know, and perhaps Professor Sandalow will talk about it, the IPCC gave us a target of 43% decarbonization from 2019 levels by 2030. We were meant to reduce GHG emissions by that amount. In 2021, some of us at COP26 in Glasgow had a meeting to look at the likelihood of that happening. And we came to the conclusion that the likelihood was very low. And therefore, traditional approaches to climate mitigation and adaptation needed to be enhanced with new solutions.

And we thought: what was J-curving as fast as climate change was J-curving? And the only thing we could think of was the application of AI, this great new suite of technologies, including, of course, quantum and all the other things that go with it. And we started to talk to people, and we talked to a whole bunch of people in the AI community, a whole bunch of people in industrial and power, automotive, all the different sectors that produce emissions. And we said, are you talking to each other? And shockingly, people were not talking to each other.

There’s very little going on, with some honorable exceptions at Google. Very few people in the AI community were really focusing on downstream issues around climate change. And similarly, the big industrial domains were not really focused on the use of AI for decarbonization and economic value creation. So with that lens, think about this session as throwing one J curve against another J curve. Can we throw the crazy increase in AI technology represented by this great summit against the world’s greatest challenge? That’s the purpose of the GRAIL organization, which is a not-for-profit based in London. It’s a vast terrain. We don’t have time to cover it all. It’s obviously mitigation. It’s obviously adaptation. We have to hit both.

And within that, there are endless taxonomies of all the wonderful things that AI can do. And, of course, you’ll be worried about the increased GHGs from data centers, but that’s not the primary focus of our session today. That has been quantified. The Grantham Institute did a quantification last year of 0.5 to 1.4 gigatons of extra GHGs from data centers (and that’s from every kind of utilization) against a potential benefit of 3.5 to 5.4 gigatons of GHG emissions taken out. So there’s clearly a very strong balance towards what AI can do to help the planet in its shift towards a clean and green economy. So what’s GRAIL? GRAIL is an attempt to create a collaborative network

of great academic institutions, commercial institutions, AI companies, industrial companies, philanthropic institutions, and private-sector sustainability networks like WBCSD, bringing them all together with governments to try and create massive collaboration. In the next slide, at the bottom, you see that same group of institutions. Bottom right, the ideas and deal flow going back into GRAIL; bottom left, the fact that this becomes a collaborative community to get all these solutions scaling at speed; and at the top, getting that deal flow funded through grants, through government programs, through venture capital and corporate funds, to move the agenda to real solutions at massive scale as quickly as we possibly can. All of this led to the summit that I mentioned earlier, which occurred last year.

Sean, will you keep me real on the time? Thank you. Okay. And that led to 200 people, 115 organizations (including all the organizations represented here today) and 60 speakers, and we looked at AI for power, AI for building materials, AI for everything you could think of, vertically and horizontally, looking at the issues of materials innovation, value chains, carbon markets, and so forth. What has happened after the summit? Three things. One, we’ve created an online collaborative platform, and we invite all of you to join it, to co-create those solutions that can make a difference. Second, we’ve started to engage with governments around the world. Imagine a summit like this that was focused, yes, on development, but with a central climate focus as well.

How amazing would that be? And most importantly, we’re focusing on taxonomies that lead to massive calls for action from the innovation community. So we started to work on taxonomies for a variety of sectors: the energy sector, the built environment, materials innovation. We worked with groups of AI experts and power experts and figured out what the big wins were: what are the big opportunities for companies to create economic value while at the same time massively decarbonizing. And this was an intellectual process, including many experts, some of whom are here today, that led to this astonishing work and identified the big win-win opportunities for economic value creation and decarbonization. On the bottom right, the teams from academia, industry, industry associations and a variety of other groups; eight country teams were also involved in various ways.

So where’s this all going? Well, thank you to McKinsey for your very kind collaboration: after we had done all that work, you said, hey, we want to help, and kicked in and worked with us to further refine those offerings and look at cost benefits and cost curves and all sorts of things. Delighted about it. And then, apart from working on the power sector, the built environment, and materials innovation, and generally on what we can do to accelerate solutions for decarbonization through AI, we have two big partnerships that are emerging. One, and I want to slow down here.

Okay, 250 companies are part of the World Business Council’s network. In scope one, two, and three, that represents 26% of world GHGs. They’ve realized that that’s mainly in supply chains, and they are going into a partnership, as are McKinsey and other partners, to look at what the AI opportunities are to take startups and scale-ups into these decarbonization opportunities at massive scale, with the 250 largest companies in the world representing 24% of world revenues and 26% of world GHGs. Finally, we’re working with coalitions of energy companies. Nalin, whom we’ll hear from later on, is someone we’re deeply partnered with on this, along with others. How can we take this forward and accelerate it?

For example, UNESA has 71 energy companies and 750 gigawatts of clean power. They want to go to 1,500 gigawatts of clean power by the end of the decade. How can we help them with AI? It’s a very practical lens. We invite you to join us and be part of this. On that note, I would like briefly to introduce Professor David Sandalow. David, I’m not going to go through your very distinguished background (it would take the whole panel to do that), except to say that you have worked in every different field, most importantly in the past in very senior positions in government, and now, of course, you’ve been the luminary on AI solutions for climate.

That is the worst introduction you’ve ever gotten; I’m sorry about that, just in the interest of time. I’m really very honored that you’ve flown all the way to come here, and I’m handing over to you.

David Sandalow

Thank you so much, Uday. Your energy, your enthusiasm, your passion: they’re all infectious. And your intellect is remarkable. What you’re driving forward in this area is world-changing. You are not just an inspiration; you’re a gravitational force that is pulling people together to work on this, so thank you so much. What you did in London was remarkable. What you’re doing here is incredible, and I’m looking forward to being part of what you’re doing in the future. It’s been my privilege to lead some teams that have been working on these issues over the course of the past couple of years, and I’m going to talk about one of the projects that we’ve done.

It looks like the slides are not there; someone is turning on the screen. There it goes. While we wait, I’ll say that I really like the metaphor that you had, Uday, about two hockey sticks. This is just a remarkable convergence of two of the most important trends that are happening in human history right now. One of them is, alas, the increase in greenhouse gas emissions, which is happening at such an astonishing pace. The second is the exponential growth of the capacities of artificial intelligence. What’s driving me is that we need to find a way to make sure that that second trend, artificial intelligence, helps to solve the first problem.

And that’s the study that we did, in which we brought together a team of 25 experts. Just wonderful people. One of them was Hoesung Lee, the last head of the Intergovernmental Panel on Climate Change, along with some other top experts. And the question we asked was very simple: how do you use artificial intelligence to reduce greenhouse gas emissions? There it is; it came up on the screen. Thank you so much, I appreciate it. So it’s a very simple question: how do you use AI to reduce emissions of greenhouse gases? We came up with 17 chapters. We wanted to do more than just provide analysis; we wanted to provide actionable ideas for what to do.

So every chapter has recommendations. You can find a print version on Amazon, and free downloads of the entire volume, including chapter-by-chapter versions, are available at these websites. I want to thank the government of Japan, including NEDO and METI, for supporting this work. They’ve been very important supporters of work on AI and clean energy more broadly. I’m going to promote my podcast later, but I have a podcast that talks about this topic as well. So here’s the table of contents for our work. We have introductions to both AI and climate change in this volume. One of the things we’re trying to do is target this both to experts and to people who are beginners in this topic.

And, you know, Uday talked about bringing together different communities. One of our basic conclusions was that we need to bring together experts in climate change and experts in AI. There are a lot of people who know a lot about climate change but don’t know a lot about AI, and a lot of people who know a lot about AI but don’t know about climate change. So we decided to have primers on each of those topics. And then we talk about eight different sectors and a number of cross-cutting topics. So we have five key takeaways. This was an interesting exercise with all of our authors: taking 300 pages and trying to distill it into five key takeaways.

But here’s what we came up with. The first one is a kind of bottom line, but it’s important: AI does have significant potential to contribute to reductions in greenhouse gas emissions. We categorized that potential into two categories. One is incremental gains, such as just improving efficiency, for example at solar farms or in building energy efficiency. There are lots of incremental gains that can be made. The other category is transformational gains: in particular, new technologies, new materials and other things. We also looked at whether greenhouse gas emissions are increasing as a result of computing operations.

We decided, based on the available evidence, that the best estimate is that less than 1 percent, and maybe much less than 1 percent, of greenhouse gas emissions are currently coming from AI. That tracks with the Grantham study that Uday talked about, and with what the IEA has said as well. The main barriers to AI’s impact in reducing greenhouse gas emissions are a lack of data and a lack of trained personnel. There are other barriers as well, but obviously you need data (a lot of places don’t have the data for this purpose) and you need people. Trust is essential: people aren’t going to use AI unless they trust it. And then, every organization with a role in climate change mitigation should consider opportunities for AI.

And we need AI to contribute to this work. I think as AI grows in the public consciousness at summits like this, that’s becoming less and less of a radical recommendation. But it’s just so important: if you’re working in climate mitigation, you need to have a team dedicated to AI and how AI can help. So I’ll just run through some of our chapters quickly, because we only have a little bit of time. We have a chapter introducing AI, and if you’re a climate person who doesn’t know a lot about AI, it might be helpful to you. One of the things we did was break down AI capabilities into four basic categories at a very high level.

The first thing AI can do is detect patterns. And how can that be helpful in climate change mitigation? Well, one example is detecting methane emissions from satellite data. Some of you probably know this, but we know much, much more today than we did 10 years ago about methane emissions, and that has helped us dramatically to begin to reduce methane emissions. That’s entirely dependent upon the optical sensing process, which we’ve been able to do over the last 10 years, and it has had real impact so far. AI can also predict, such as weather patterns at solar and wind farms. It can optimize, such as power flows on transmission lines. And it can simulate, such as battery chemistry. In fact, I’m teaching a course at Columbia right now where we’re emphasizing this framework of detecting, predicting, optimizing, and simulating. Those are, broadly speaking, the capabilities that AI brings to the table.

There’s a lot to say about climate change, but just for those who aren’t paying attention: atmospheric concentrations of heat-trapping gases are now higher than at any time in human history, in fact higher than at any time in the past three million years. July 22nd, 2024, was the warmest day ever recorded. 2024 was the warmest year ever recorded, by far. And the warmest 11 years ever recorded were the last 11 years. So we are living in an era of climate change. We do deep dives into a number of different sectors; I’m just going to talk about a few of them. The power sector is maybe the most important, just because it’s already 28% of greenhouse gas emissions, and our strategy for decarbonization requires us to electrify lots of things.

So we need to grow the power sector and decarbonize it at the same time. I don’t think we’re going to be able to do that without AI tools. AI is already helping decarbonize the power sector, optimizing the location of generation and transmission and increasing output at solar farms, but it can do much more: dynamic line rating, optimal power flow analyses. To do this, we need standardized data. We need trained personnel. The utility business model is a challenge. So this is a really important area that requires a lot of attention and work. Oh, and a final point, the last bullet on this slide: using AI in real-time operations can cause real security and safety risks.

So we need to be very careful about generative AI in that context. Even as we look to deploy AI to help reduce greenhouse gas emissions, we need to be very attentive to these risks. I find it amazing how few people pay attention to food systems and climate change: 30 percent or more of greenhouse gases are in some way related to the food system, and the food system is itself threatened by climate change. AI can do a lot to improve both mitigation and resilience in the food system. There are a few examples here: integrating data from soil sensors to create fertilizer management plans, creating virtual farms. There are lots of things that can be done here.

But coming back to this issue of lack of data: it’s a huge problem, especially in the Global South. So the efforts to build up digital public infrastructure that are happening here in India are so important in this regard. I’m going to go quickly here. We look at buildings, where there’s tremendous potential. I think materials innovation is one of the most important areas. You know, 150 years ago, when Thomas Edison invented the modern light bulb, he literally spent a year running electricity through dozens, I think hundreds, of different filaments to figure out how much light and heat would be produced.

Today, we can simulate a million of those interactions in a second. And there are already tremendous advances in the pace of innovation in battery chemistry and some other areas using AI tools. For me, this is one of the most promising areas in terms of transformational gains in reducing greenhouse gas emissions. Extreme weather response is extremely important from a resilience standpoint, and we don’t have a lot of time to get into it, but I think that AI/ML-enabled forecasting is transformational because it’s so much cheaper. At 1,000x less cost, we can run AI/ML weather prediction tools and make a big difference in extreme weather response.

We have findings and recommendations throughout this report; you can see them here. We just did a new report in the same series on sustainable data centers. And our main message is that with this data center construction boom happening now, this is the time to be paying attention to data centers and sustainability. We are investing right now in multi-decade assets; we need to be paying attention to this. Smart siting is key. And finally, here’s a plug for my podcast. It started about a year ago. I’ve had some great guests: Jensen Huang; Damilola Ogunbiyi, the head of Sustainable Energy for All; and Jennifer Granholm, the U.S. Energy Secretary under Biden. Listen, as they say; it’s available on all major podcast platforms.

Uday, once again, thank

Uday Khemka

I feel horrified that we’ve got speakers of this caliber and so little time. So thank you so much for your leadership. May I invite both of you to speak? You can speak from here if you prefer. We’ve got two great leaders from Google. Obviously, you know, in the sphere of corporate AI leadership on climate, there is no one that parallels all of you, and we look forward to hearing your thoughts. Thank you.

Vrushali Gaud

Thank you very much for hosting this. I don’t know if it’s a privilege or pressure when you start with that sentence about the leadership position Google has. I just want to ask a quick question: raise of hands, how many of you have used Google today, for either maps or searching something? Thank you. So you know who we are. That’s my cue. I’m Vrushali Gaud. I’ll introduce myself, and then, Spencer, you can answer that. In a nutshell, I lead Google’s decarbonization, water and circularity strategy for the company. Essentially what that means is I’m responsible for quite a few things that you had in your slide that we should be doing. A good way to introduce myself is also that I like getting things done, and so I feel like my inner calling around this is: we’ve had a lot of conversations, we’ve had a lot of playbooks and research and things, but it’s almost like, how do you act on it?

How do you execute on it? How do you start delivering the outcomes that I think we all are looking for? So that's the kind of space that I come from. And it's a privilege to be at Google, which allows us to expand that space. So the reason I asked you all to show your hands: most of you know Google as a search engine, a maps provider, an information source. One of the other things Google now is, I think, as a company, is a full-stack company. And when I say full-stack, that means the search and the information is the top layer of it. But underneath that sits the entire physical infrastructure that drives it.

And so that’s data centers. That’s the way you operate that. That’s the networks that feed into all of the applications. And so when we look at climate, and my title actually is Global Director of Climate Operations. So I say that out of humility because when we look at climate, we’re trying to put it across our operations the best we can. Thank you. So we, and good examples of that, I’ll start with data centers. The big topic right now, how do we operationalize them? Where do we cite them? The location, what impact it has on the community, what impact it has on the infrastructure there. Citing is a big part of it. Access to clean energy is something we’re looking at, and pretty much we have a carbon -free energy goal.

So I think for us, if you look at climate, a big portion of climate is emissions. How do we impact emissions? It's from electricity. What do we do with electricity? Shift to clean energy, or renewables. And so that's the spectrum that we look at. A lot of our investments are in carbon-free energy and how we think about it. And it's also not just taking from the grid or expecting the government or the infrastructure to get you there, but how do we invest and bring more clean energy to the grid? I think that's a big piece of what companies can do at the speed at which we are all moving: how do we take these bigger-picture systems problems and embrace them and solve them?

So one is generation of clean electricity, and the other is the grid, and how do you solve the grid problems? So that's the infrastructure of AI. Then there's using AI. Going to some of the other things you were saying, Professor: we look at how we could use AI to drive our operations more efficiently. It's very boring stuff. It's not really shiny superstar things, but a lot of the impact comes from it. I look at water taps, and I remember the amount of leakage we have on water taps, the amount of electricity wires that are not connected. The inefficient use of resources is a big one, and how can we use AI to optimize, whether it's optimizing the use of our chips, optimizing the grid, optimizing which applications run from where?

That’s a big part of our strategy. And then the third piece is, what do you use AI, and how do you use it for climate? Now, clearly, our business is information and search, but which means we also have access to a lot of data. And so one of the ways we consider, as what you can do in AI is, how do you use these large data sets? A, find a way to open source them, encourage different use of them, but also incubate certain initiatives that can help to show the light to others. So Earth AI is a big one in which we you’ve got satellite images, you’ve got weather data, you’ve got all of these big chunks of information that we can put out there.

And then there’s an application layer, which I think is of interest to you in terms of resiliency or mitigation. So one of the things, you know, which you probably haven’t heard of as much is Flood Hub. So we have a lot of information put out there as to flood risks of different region, which then other companies can use for whatever products they’re launching, whether it’s insurance, whether it’s real estate, fire sat, wildfire risks. How do you do prediction around it? Something all utilities companies, especially from in California, where I’m based, is we’re very passionate about using that for prediction. I can go on about the list of sort of what data can be used and how it can be leveraged.

The thing is, I'm going to go back to the crux of what you had in this, which is: we're in the timeframe of two hockey sticks. One is the impact on emissions, and I completely appreciate that the tech companies, hyperscalers, data centers are at a scale contributing to it, which we obviously want to help mitigate or replace with clean electricity as much as we can. And then the other is, how do you use the innovation curve on this? And I think we've just scratched the surface. There will, of course, be trials and errors, but we've just scratched the surface on how we democratize data, how we encourage innovation, and how we scale it very quickly.

Because I think that's the trifecta of how you drive this change. And so I'll end by saying I'm super proud of what we've done this week, trying to bridge those two gaps: we're working with the Principal Scientific Adviser to the Government of India to launch a Google Center of Climate Tech. We call it Climate Tech because those are the two hockey sticks that you're trying to get in, right? The tech scale and the climate impact. Our goal is to encourage academic research, but research that is actionable: five pilots, first of a kind, that you can scale. There's a lot of focus already on electricity, so we are trying to do the non-electricity pieces in that, which is around low-carbon steel, low-carbon materials, built environments, and low-carbon sustainable aviation fuel. And then the biggest one, which I think is a big lever across everything, is green skills. You need to embed this sort of thinking, which is green, climate-first, across every domain, and encourage that in India, and especially the tier-two cities. So I'm super excited about those two hockey sticks and how we as a company can bridge those gaps.

Spencer Low

Intensity of what is produced in this part of the world actually is really important globally. But what's really distinctive about APAC is actually the third major topic, which is livelihoods. As I mentioned, this is the part of the world which has a lot of developmental ambitions, and livelihoods are key. So my colleague Vrushali touched on Chapter 3, power systems. I would like to touch on agriculture and food systems, which is your Chapter 4. Agriculture and other land use is actually the largest employment sector in the Asia-Pacific; I believe in India it's about 46% of jobs. And for the region, it's the largest sector, about the same as the next two sectors added together, which are manufacturing and wholesale and retail trade.

Add those together, and you get the same number of jobs as in agriculture. Now, over 80% of farms around the world, and especially in India and the rest of the global south, are smallholder farms. And that creates an issue, because a lot of the technology for agriculture is developed for large commercial farms: satellite imagery, et cetera. So this is one example I'd just like to delve into in terms of what Google is doing to contribute to the data, the digital public goods that Vrushali spoke of. If you want to use satellite imagery and actually understand agriculture so you can do things with it, you need to find the boundaries of your individual farms.

That’s your individual unit and often less than two hectares, if not smaller. And so you can do that with people poring over maps or satellite imagery, but that’s not scalable. But this is a really interesting problem for AI. And so for those of you who’d like to know more about this, there’s actually an exhibit at the Expo at the Google Pavilion. But this is what we call agricultural landscape understanding and agricultural monitoring and event detection. So we’ve trained AI to actually digitally enhance the environment. The field boundary. and you can say, well, that’s interesting because you can zoom into India and look at the Indo -Gangetic plain and see all the field boundaries. We’ve also trained the model to distinguish what crops are being grown through multispectral imagery.

And with that, we can detect events like tillage, sowing, harvest, et cetera. And all this data is now available. It is part of the Krishi DSS, so it's contributing to the digital public infrastructure of the Indian government, through the Ministry of Agriculture and through state governments, for example that of Telangana and the ADEX system. What this does is allow NGOs, government bodies, et cetera, to actually give advice to farmers, because you now understand what's going on on the ground. That is a critical driver for mitigation benefits, but also for adaptation, as best practices for planting, and what to plant, are actually changing over time with climate change.

So do find out more at the pavilion. But one thing that I'd like to double-click on, as we say, is actually the innovation part of it. This digital public infrastructure is only helpful if it can really be used. And it's not just governments and NGOs; it's also startups. They're innovating and finding new ways of using this information. Companies like Carbon Farm, in France, are using this data. Varaha, a social entrepreneurship startup, is doing the same. And Wadhwani AI is another startup that we are supporting in terms of driving innovation in the agricultural space. So this is really all going to be accelerated through the use of AI, and we're very excited to contribute to that.

Thanks.

Uday Khemka

Wonderful. You can see, I'm going to just grab one of these. Thank you. So, hello. Yeah, you can see that Google represents the convergence of the two themes we were talking about. And I think you have a wonderful website; at least, I've seen access to materials about your sustainability strategy online. So if people want to know more, I'm sure they can go there.

Vrushali Gaud

Yeah, I’ll make a plug. Website, sustainabilitygoogle .com. It has all of our information. And the expo booth has all of our information. So thank you very, very much.

Uday Khemka

Now, we are inevitably vastly behind schedule, as we are with climate change. However, we're going to keep going with great focus, and we're going to turn to the energy and power sector. Now, this is a bit embarrassing, because we have to do a little bit of a switchover of people, and we don't have time to put up the new names here. So we're just going to announce them; listen with great attention. We have two fantastic speakers from the energy sector, which, as you know, is one of, if not the most important sector. Do you want to come closer, Nalin? We can just be together here. Obviously, the decarbonization of the energy sector is absolutely critical; without that, nothing happens. So I'm going to hand over straight away, first to you, Nalin, to set the stage a little bit on what you're trying to do with Climate Collective and UNESA, and then Dan, more specifically on what you're up to. So over to both of you, and obviously introduce yourselves; sorry I haven't done it for you.

Nalin Agarwal

No, thank you. I understand we are short on time, so I'll keep it very brief. I'm Nalin Agarwal, one of the founding partners of the Climate Collective. I think we'll have the slides up soon. Great. So today I'm just going to talk, very quickly, about a program that we've been running for six years, where we partnered with GRAIL to really drive decarbonization and grid modernization, starting with India but across the global south. Who's operating the slides? I'll do it. Okay, just quick snapshots. We are an ESO, an enterprise support organization, the largest in the global south, with about 1,500 startups supported. Key partnerships: UNESA is a key one.

I don't want to spend too much time here, but that's what we're going to spend some time on as well. We do a lot of work in AI, in power but beyond. Next week we're doing the Delhi Climate Innovation Week; in fact, Google is a sponsor and partner there, and of course GRAIL is as well. But happy to chat about this later. Here's what we're trying to do. I think what's happening is that a lot of the challenges on renewables are being solved, and they will be solved. One of the increasing recognitions is that the grid is a key bottleneck now, and we need to really work on grid transformation. That includes both decarbonization and modernization.

So that's what we're working towards. We work with utilities; there are about 22 that we've worked with so far. We work on a problem-statement approach: get startups to apply, select startups, get them to create business cases and pilot plans, and eventually lead to pilots, right? So 22 utilities have participated, leading to about 20 pilots, a subset of which have become large deployments. It's a very unique program in the global south; actually, it's the only one. High conversion ratio: about 30% of the pilots that have been proposed have come in. Key partners: I mean, 22 utilities and all the people that are working in power sector reform are part of this program.

Again, I won't spend too much time, but there's a lot of this information available online. All the startups that are vetted and ready to deploy are available for utilities to engage with. We have a bunch of case studies also, but the key point is this: we are now developing this, along with GRAIL, into a global AI-for-power innovation platform, which has three components. There's the open innovation program, Electron Wipe, on the top; there is the knowledge hub, which is basically a peer-sharing platform where we do convenings co-located at COPs, at climate weeks, et cetera; and then there is an online solution database of pre-vetted solutions. I'll stop there and hand it over to Dan.

Dan Travers

Thanks, Nalin. I'm going to stand up too, because I like to stand up and talk. My name is Dan Travers; I'm from Open Climate Fix. We are a startup doing AI for the grid, and I'm going to dive a little bit into the grid area, which has been talked about a bit. In order to get to net zero, we need to green the grid and we need to electrify everything. The grid of the past usually had, in each country, tens of generators, and the grid operator would know those people on a first-name basis and would ring them up and tell them when to turn up and down.

We've now got millions of generators, with solar panels and wind turbines everywhere. The grid of the past had variability from just demand. We've now got variability from demand, and the wind speed, and the clouds: three sources of variability. The grid of the past had a normal demand that we understood well. We've now got data centres, we've got EVs, we've got batteries, we've got AC, so the demand is changing shape incredibly. How are you possibly going to address the balancing of this grid with a bunch of people in a room? You need AI solutions. You need a highly digital grid. You need something which can schedule and marshal all of these assets digitally, at sort of AI speed.

So that's really important. And why is it important? Because if we don't do it, we'll have blackouts, and if we don't do it, we'll have costs increasing, because the way that grid operators are currently dealing with this challenge is that they're actually scheduling a lot of backup generation. It's usually gas-fired generation. It's very expensive, so bills are going up. And if you look around at what's happening now, there's a pushback against the green revolution. If we don't address these problems, we're going to have a democratic pushback, and we will have a reversal. So AI solutions can really help us in fighting the battle for hearts and minds as well as the actual physical battle.

So myself, I came from the banking tech space. Jack, my co-founder, came from Google DeepMind, whose name keeps coming up. We both saw there was a big gap between the amazing tech that was available in some of these industries and grid operators and the electricity industry, which is by nature very risk-averse; it has to be worried about things failing all the time. So we saw the gap between those two, and we formed Open Climate Fix to really try and bridge that gap: to take moonshot ideas and actually build a rocket ship that is going to fly to the moon, actually implement something, and give data to researchers.

The company's non-profit, we're open source, and that's about the scaling, which I think is a key part of the title of this talk. So we've built the best solar forecast in the UK, we think, by about 20% or 30%, quite a long way. We now want to take that, and are starting to take that, to India. We're working with Adani, and we're working with the Rajasthan grid operator, and with a combination of open source plus commercial expansion, we see the AI tools as super transferable across grids. So I'm really excited that we can take tools from one grid, apply them to all the grids in the world, and use AI to solve climate change.

Thank you.

Uday Khemka

Thank you so much. You can imagine, if we had more time, we would have had a panel on the built environment, a panel on industrial decarbonization, a panel on transportation. We don't have the time, but thank you for that fantastic presentation. A couple of interventions now, as we turn to the last segment. We have three very distinguished institutions with us, all involved at the strategic level with GRAIL and with this process. And I'll start, Ankur, with you at McKinsey, who have been close partners.

Ankur Puri

Thanks a lot. Another race against time. I should say that Sean went out of the room and negotiated five minutes more for us. Okay. So, while the slides come up: firstly, it's a privilege to be here. Thank you for the opportunity for McKinsey to be part of the journey that you are leading, Uday, and thank you all for being here and shaping this in your own special way, at your own scale. I'm Ankur, a partner based out of our India office. I lead QuantumBlack in India, which is our AI team. And I work across sectors, because that's really a lot of fun.

Part of my work has been in energy; part has been in the built environment. But I'm representing a team, which is quite global, that has had the privilege to work with the GRAIL effort. So I'd like to just talk about how this little effort of ours is shaping the larger movement that GRAIL represents. Everybody's talking about the impact of AI, so I'm not going to talk more about that; but the promise of AI, let's just be clear. I think the way large global efforts have found shape is to focus around a few challenges. So one of the big pieces that the GRAIL work has been about is shaping these four challenges and articulating them.

They're about operational improvement in our current way of working. Strategic intelligence and foresight (big consulting words), basically better planning: build things better. Transformation and innovation: can we do new things that don't exist, that will help the future? And the last one is autonomous operations, which is essentially doing current operations in a very different way: use drones instead of people to go see how the wiring is in a large electric plant, and create more impact. Several of the examples you heard about will fit into this across energy, the built environment, and materials, and this can keep expanding, perhaps to food systems. Now, there's a huge amount of work going on in just collecting the knowledge on each of those challenges.

Then you think about those fields of play: energy, the built environment. Within those, there are stakeholders, and for each stakeholder, what's relevant? If you take system operators as an example, network planning is a domain to think about; asset management is a domain to think about; delivery; field force execution. Think of this as bringing the language of the industry into this knowledge base, so that if someone manages a power plant, they'll say, okay, what's my library of things I need to look at? Tomorrow, that can then connect them to the people who are innovating or providing these solutions.

One important gap in the middle is: how valuable is each of these ideas, when it comes to cost and when it comes to emissions? The work's not yet ready to be unveiled, but we are quite privileged to work with the GRAIL team and, of course, global experts to start to quantify, both in terms of economic impact and in terms of direct emissions impact, what each of these applications could be worth. Because then our scarce resources and limited time can be focused on the most important problems. I think that's what's coming up ahead, and I look forward to all of you pushing the boundary further. It's a privilege to be part of this.

Thank you.

Speaker 1:

Okay, I will kick off. As a metaphor for the climate, we're drastically running out of time, and I can see a clock ticking down in front of me. So I'm Rob, from University College London. 200 years ago, University College London was founded with a purpose: to drive change, to be impactful, and to create useful knowledge. That's really important for the climate, because we no longer have the ability to let knowledge sit on the shelf when it comes to climate. So in 2026 at UCL, the way that we bring our community together is through what we call the Grand Challenges. These are a self-funded, cross-university way of tackling problems that are too complex for any one discipline.

The climate crisis at UCL sits alongside challenges like mental health and well-being and data-empowered societies, and they're found in all 11 of UCL's faculties, from engineering to health and arts and humanities. So where does AI come into this? Well, AI at UCL is not seen as a single discipline but as an enabling layer embedded across the entire institution. It builds on our heritage in AI: we've got Nobel Prizes, we're the birthplace of Google DeepMind, and we've got spin-out companies at unicorn valuations. Four quick examples from UCL at the moment. Starting at home, we use our own campus as a living lab. We've got sensor data from across our estate that forecasts energy demand and detects unusual patterns across UCL's buildings, and we turn that into insights for practical intervention.

Second example: our spin-out Carbon Re uses deep reinforcement learning and digital twin optimization to cut fuel use and emissions in energy-intensive processes like cement production. Third example is a partnership: the UCL Centre for Sustainability and Real Tech Innovation, created in partnership with PGM Real Estate, links computer science to the built environment and accelerates AI-enabled sustainability in real estate. It drives impact on the environment, but also value for real estate investors.

And fourth, UCL Grand Challenges has supported an inclusive AI tool that transforms satellite and drone imagery into accessible, web-based sea-ice classification that's being used to support safer travel for Inuit communities. Aviation is another frontier for us; it's a grand challenge in its own right. There, we are looking at short-term and long-term interventions: AI is used to create short-term interventions that drive down aviation's impact on the climate, while engineering is undertaking long-term technology transformation in electrification and hydrogen propulsion. And finally, for UCL, convening really matters. In April 2025, as Uday mentioned, UCL, along with GRAIL, hosted our International Summit on AI Solutions for Climate Change, exploring sectors like energy and the built environment, and moving from discussions and pilots to deployment and impact.

I'll finish with a quick call to action.

Adam Sobey

Cheers, and we're properly into Alex Ferguson overtime now. Hopefully not with climate change, so I'll try and leave a little bit of time for Uday. I'm Adam Sobey, from the Alan Turing Institute, the UK's national AI institute. We focus on five missions: environment, which is focused on environmental forecasting and climate change; sustainability; defence and security; health; and foundational research. As the Director for Sustainability, obviously I think that's the most important mission, and that's why I'm here. But we believe that the time for action is now. The planet is literally on fire: we saw fires in the US which have been linked heavily to climate change.

We are seeing droughts in India which are affecting food and people's lives. We're seeing pollution in Southeast Asia which is affecting health. We cannot wait for new fuels or for the energy transition to occur; we need to do something immediately, starting today. And we believe that AI can play that role. We know this because, as part of our institute, we've applied AI and data science to shipping and reduced emissions by 18%. We have done this in buildings, where we've improved HVAC optimisation to reduce emissions by 42%. And we've created an underground urban farm in the UK that runs entirely on renewable energy, allowing us to grow crops without emitting any CO2. I think we've done some really impressive things for a relatively small institute.

But we can't do this alone. We realised that this is a global problem, and the Sustainability Mission's chief funder is Lloyd's Register Foundation, a global charity heavily focused on the global south. So we think it's really important that we work together, both within the UK and outside of the UK, to solve these problems. And that's why we're really pleased to be part of GRAIL, to look for global solutions to global problems. So thank you very much.

Uday Khemka

It's a tribute to all our speakers that they managed to put extraordinary quality into this ridiculously short time frame. I'll just end on three points. The first is that, through our work together, we have found hundreds of examples of opportunities where businesses, for example, can save money or increase revenues, improving their economic value while at the same time massively improving their emissions profiles on the mitigation side. On the adaptation side, to your points, there are many examples, from Google to all of your institutions, where these technologies are already being deployed to save lives at big scale. And the last point I'd make, apart from asking you to thank our speakers with a big round of applause, is that again and again you've heard one theme coming out of this group, which is radical collaboration.

Work with us to make the difference that we all believe and know can be made through the application of AI solutions to climate change. So maybe we could give our speakers a round of applause. Thank you very, very much.


David Sandalow


AI can deliver significant emission reductions across sectors

Explanation

David Sandalow states that artificial intelligence has a strong capacity to cut greenhouse‑gas emissions in many industries. He highlights its current role in decarbonising power generation and its broader potential.


Evidence

“AI does have significant potential to contribute to reductions in greenhouse gas emissions.” [1]. “AI is already helping decarbonize the power sector, optimizing location of generation transmission, increasing output at solar farms, but it can do much more.” [3].


Major discussion point

AI’s Climate Potential and Urgency


Topics

Environmental impacts | Artificial intelligence


AI's own emissions are minimal (under 1%), making the net benefit positive

Explanation

Sandalow points out that the carbon footprint of AI itself is tiny compared with the emissions it can help avoid, estimating it at well under one percent of total greenhouse‑gas output.


Evidence

“We decided, based on the available evidence, that the best estimate is less than 1 percent and maybe much less than 1 percent of greenhouse gas emissions are currently coming from AI.” [16].


Major discussion point

AI’s Climate Potential and Urgency


Topics

Environmental impacts | Artificial intelligence


AI can optimise generation location, transmission flows and dynamic line rating

Explanation

Sandalow explains that AI can improve power‑system operations by finding optimal generation sites, managing power flows on transmission lines and applying dynamic line‑rating techniques.


Evidence

“It can optimize, such as power flows on transmission lines.” [98]. “Dynamic line rating is optimal power flow analyses.” [99].


Major discussion point

AI Applications in the Power & Grid Sector


Topics

Environmental impacts | Artificial intelligence


AI can improve food‑system emissions through soil‑sensor data and virtual farms

Explanation

Sandalow notes that AI‑driven analytics of soil‑sensor information and virtual‑farm modelling can cut fertilizer use and overall emissions from agriculture.


Evidence

“integrating data from soil sensors to create fertilizer management plans, creating virtual farms.” [127].


Major discussion point

AI for Agriculture, Food Systems & Livelihoods


Topics

Social and economic development | Artificial intelligence


Report provides actionable AI‑climate chapters and primers

Explanation

He emphasizes that the new report deliberately bridges AI and climate expertise, offering practical chapters and primers for both communities.


Evidence

“One of our basic conclusions was we need to bring together experts in climate change and experts in AI.” [31]. “We talked – We have introductions to both AI and climate change in this volume.” [32].


Major discussion point

Knowledge Mobilization & Academic Initiatives


Topics

Capacity development | Artificial intelligence


Uday Khemka

Call for radical, action‑oriented collaboration to harness AI

Explanation

Khemka frames the summit as an invitation for bold, collaborative action, urging all participants to unite AI expertise with climate ambition.


Evidence

“This is an invitation for radical action -oriented collaboration with all of you.” [35]. “It’s a call for collaboration.” [36].


Major discussion point

AI’s Climate Potential and Urgency


Topics

Enabling environment for digital development | Artificial intelligence


GRAIL network creates a collaborative ecosystem for AI climate solutions

Explanation

He describes GRAIL as a platform that brings together academia, industry, governments and funders to scale AI‑driven decarbonisation projects.


Evidence

“Grail is an attempt to create… …to create a collaborative network.” [46]. “Going back into Grail, bottom left, the fact that this becomes a collaborative community to get all these solutions scaling at speed…” [49].


Major discussion point

Collaborative Networks & Institutional Frameworks


Topics

Enabling environment for digital development | Artificial intelligence


Economic value can be created while massively decarbonising

Explanation

Khemka stresses that AI projects can generate profit for companies while delivering large emissions reductions, highlighting win‑win opportunities.


Evidence

“what are the big opportunities for companies to create economic value while at the same time massively decarbonizing.” [14]. “Hundreds of examples of opportunities where businesses … can save money or increase revenues, improve their economic value while at the same time massively improving their emissions profiles on the mitigation side.” [138].


Major discussion point

Corporate AI‑Driven Climate Operations


Topics

Financial mechanisms | Environmental impacts


Adam Sobey

AI has already cut emissions in shipping and HVAC

Explanation

Sobey cites concrete reductions achieved by applying AI to maritime logistics and building climate control systems.


Evidence

“We know this because we’ve, as a part of our institute, have applied AI and data science to shipping and reduced emissions by 18%.” [17]. “We have done this in buildings where we’ve improved HVAC optimisation to reduce emissions by 42%.” [23].


Major discussion point

AI’s Climate Potential and Urgency


Topics

Environmental impacts | Artificial intelligence


Alan Turing Institute partnership expands AI climate work to the Global South

Explanation

He notes that the Institute’s collaboration with a Global South funder is scaling AI‑driven climate research beyond the UK.


Evidence

“This is the UK’s National AI Institute.” [56].


Major discussion point

Collaborative Networks & Institutional Frameworks


Topics

Enabling environment for digital development | Artificial intelligence


Vrushali Gaud

Google Center of Climate Tech in India drives actionable research and green skills

Explanation

Gaud explains that Google is launching a Climate Tech centre in India to fund pilots, promote actionable research and develop climate‑focused skills across the country.


Evidence

“we are working with the Principal Scientific Advisory of the Government of India to launch a Google Center of Climate Tech.” [70]. “Our goal is to encourage academic research but research that is actionable so five pilots first of all… and there’s a lot of focus already on electricity… the biggest one we don’t talk about is green skills you need to embed this sort of a thinking which is green climate first across every domain.” [71].


Major discussion point

Collaborative Networks & Institutional Frameworks


Topics

Enabling environment for digital development | Capacity development


Google’s carbon‑free energy goal and water‑leak detection reduce emissions

Explanation

She highlights Google’s internal sustainability programmes, including a carbon‑free energy target and AI‑driven detection of water‑tap leaks to cut waste.


Evidence

“Access to clean energy is something we’re looking at, and pretty much we have a carbon -free energy goal.” [130]. “I look at water taps, and I remember the amount of leakages we have on water taps… how can we use AI to sort of optimize…” [100].


Major discussion point

Corporate AI‑Driven Climate Operations


Topics

Environmental impacts | Artificial intelligence


Flood Hub provides AI‑derived flood‑risk data for insurance, real‑estate and resilience

Explanation

Gaud describes the Flood Hub platform that uses AI to generate flood‑risk maps which can be reused by insurers, property developers and other stakeholders.


Evidence

“So we have a lot of information put out there as to flood risks of different region, which then other companies can use for whatever products they’re launching, whether it’s insurance, whether it’s real estate, fire sat, wildfire risks.” [122].


Major discussion point

AI for Agriculture, Food Systems & Livelihoods


Topics

Social and economic development | Artificial intelligence


Spencer Low

Digital public infrastructure enables startups, NGOs and governments

Explanation

Low stresses that a robust digital public infrastructure is essential for AI tools to be adopted by a wide range of actors, including startups and NGOs.


Evidence

“This digital public infrastructure is only helpful if it can be really used.” [78]. “It’s also startups.” [81]. “And it’s not just governments and NGOs.” [82].


Major discussion point

Collaborative Networks & Institutional Frameworks


Topics

Information and communication technologies for development | Data governance


AI mapping of smallholder field boundaries and crop types supports mitigation and adaptation

Explanation

He outlines how AI models can delineate field boundaries and identify crop varieties, providing data for both climate mitigation and farmer adaptation.


Evidence

“We’ve also trained the model to distinguish what crops are being grown through multispectral imagery.” [119]. “The field boundary.” [120].


Major discussion point

AI for Agriculture, Food Systems & Livelihoods


Topics

Social and economic development | Artificial intelligence
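
As an illustration of how multispectral imagery separates crops from bare ground, the standard NDVI index, a textbook remote-sensing measure and not necessarily the model the speaker describes, can be computed per pixel from the red and near-infrared bands:

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index for one pixel:
    (NIR - Red) / (NIR + Red), in [-1, 1]. Healthy vegetation reflects
    strongly in near-infrared and absorbs red, so crops score high
    while bare soil and water score near zero or below."""
    return (nir - red) / max(nir + red, 1e-9)

# A growing-crop pixel vs a bare-soil pixel (reflectances illustrative).
crop = ndvi(red=0.05, nir=0.45)   # 0.8, dense vegetation
soil = ndvi(red=0.20, nir=0.25)   # about 0.11, sparse cover
```

Time series of indices like this, pixel by pixel across a season, are the kind of signal crop-type classifiers learn from.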


Nalin Agarwal

Climate Collective offers open‑innovation program, knowledge hub and solution database for power sector

Explanation

Agarwal describes the three‑pillared platform—open‑innovation programme, peer‑sharing knowledge hub and an online solution database—that supports AI‑driven decarbonisation in electricity.


Evidence

“we are now developing this along with Grail into a global AI for power innovation platform which has three components the open innovation program … the knowledge hub … and then there is an online solution database of pre‑vetted solutions.” [45].


Major discussion point

Collaborative Networks & Institutional Frameworks


Topics

Enabling environment for digital development | Environmental impacts


Climate Collective pilots have led to large‑scale deployments with utilities

Explanation

He notes that the pilots run with 22 utilities have already produced several large‑scale roll‑outs, demonstrating the scalability of AI solutions.


Evidence

“So there’s 22 utilities that have participated and have led to about 20 pilots, a subset of which have become large deployments, right?” [106].


Major discussion point

AI Applications in the Power & Grid Sector


Topics

Environmental impacts | Artificial intelligence


Dan Travers

Grid variability from renewables, EVs and data centres requires AI for real‑time balancing

Explanation

Travers explains that the modern grid faces multiple sources of variability and needs AI‑speed scheduling to keep supply and demand balanced.


Evidence

“We’ve now got variability from demand and the wind speed and the clouds, right, so three sources of variability.” [95]. “You need something which can schedule and marshal all of these assets in a digital at sort of AI speed.” [96].


Major discussion point

AI Applications in the Power & Grid Sector


Topics

Environmental impacts | Artificial intelligence
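
The "schedule and marshal all of these assets" problem has a classical baseline worth keeping in mind: merit-order economic dispatch. The sketch below, with invented asset names, capacities and costs, fills residual demand (demand minus wind and solar) from the cheapest flexible assets first; AI-based schedulers tackle the same balance at finer time resolution and under forecast uncertainty.

```python
def dispatch(net_demand_mw, assets):
    """Greedy merit-order dispatch: serve remaining net demand from the
    cheapest flexible assets first.
    assets: list of (name, capacity_mw, cost_per_mwh)."""
    schedule, remaining = {}, net_demand_mw
    for name, cap, _cost in sorted(assets, key=lambda a: a[2]):
        take = min(cap, max(remaining, 0.0))
        schedule[name] = take
        remaining -= take
    # remaining > 0 means unserved demand (not enough capacity).
    return schedule, remaining

assets = [("gas_peaker", 50, 120.0), ("battery", 30, 40.0), ("hydro", 40, 10.0)]
plan, shortfall = dispatch(90.0, assets)
# hydro runs fully, then the battery, and the gas peaker covers the rest.
```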


Open Climate Fix’s open‑source AI tools for solar forecasting are transferable globally

Explanation

He highlights that the open‑source AI models developed by Open Climate Fix can be applied to grids worldwide, accelerating climate solutions.


Evidence

“we see the AI tools as super transferable across grids.” [135].


Major discussion point

Corporate AI‑Driven Climate Operations


Topics

Environmental impacts | Artificial intelligence
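
For context on what transferable solar-forecasting tools do, the baseline every learned forecaster is judged against is clear-sky-scaled persistence. This sketch is a generic illustration with invented names and numbers, not Open Climate Fix's actual models:

```python
def persistence_forecast(last_actual_mw, last_clear_sky_mw, clear_sky_horizon_mw):
    """Clear-sky persistence baseline: assume the ratio of actual PV
    output to theoretical clear-sky output (the 'clear-sky index')
    stays constant, and scale the known future clear-sky curve by it."""
    k = last_actual_mw / last_clear_sky_mw if last_clear_sky_mw > 0 else 0.0
    return [k * c for c in clear_sky_horizon_mw]

# 40 MW observed against a 50 MW clear-sky potential (index 0.8),
# projected over the next two clear-sky values.
forecast = persistence_forecast(40.0, 50.0, [60.0, 30.0])
```

Because both the clear-sky geometry and satellite cloud imagery are available worldwide, models built on top of this baseline transfer across grids far more easily than hand-tuned local rules.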


Ankur Puri

McKinsey’s four‑challenge framework prioritises AI projects and quantifies impact

Explanation

Puri outlines that GRAIL’s work with McKinsey has shaped four strategic AI challenges to guide investment and measure climate impact.


Evidence

“So one of the big pieces that the GRAIL work has been about is shaping these four challenges and articulating them.” [52].


Major discussion point

Collaborative Networks & Institutional Frameworks


Topics

Enabling environment for digital development | Artificial intelligence


McKinsey identifies operational improvement, strategic intelligence and autonomous operations for power assets

Explanation

He details three value‑creation levers—operational improvement, strategic intelligence/foresight and autonomous operations—that AI can bring to power‑sector assets.


Evidence

“They’re about operational improvement in our current way of working.” [109]. “Big consulting words, strategic intelligence and foresight, basically better planning.” [110]. “And the last one is autonomous operations, which is essentially do you do current operations in a very different way?” [108].


Major discussion point

AI Applications in the Power & Grid Sector


Topics

Environmental impacts | Artificial intelligence


Speaker 1

AI is embedded across university research to drive climate solutions

Explanation

Speaker 1 explains that at UCL AI is treated as an enabling layer across all faculties, supporting concrete climate interventions.


Evidence

“Well, AI at UCL is not seen as a single discipline, but as an enabling layer embedded across the entire institution.” [26]. “AI is used to create short‑term interventions that drive down its impact on the climate, while engineering is undertaking long‑term technology transform in electrification and hydrogen propulsion.” [11].


Major discussion point

AI’s Climate Potential and Urgency


Topics

Capacity development | Artificial intelligence


UCL Grand Challenges mobilise AI for concrete climate projects

Explanation

He notes that the Grand Challenges framework brings together interdisciplinary teams to deliver AI‑enabled climate pilots, such as sea‑ice classification for Inuit communities.


Evidence

“UCL, along with GRAIL, hosted our International Summit on AI Solutions for Climate Change… moving from discussions and pilots to deployment and impact.” [33]. “And fourth UCL Grand Challenges has supported an inclusive and AI tool that transforms satellite and drone imagery into accessible web‑based sea ice classification that’s being used to support safer travel for Inuit communities.” [59].


Major discussion point

Knowledge Mobilization & Academic Initiatives


Topics

Capacity development | Environmental impacts


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI for Social Empowerment: Driving Change and Inclusion

Session transcript

Sabina Dewan

say, you know, it’s yet to unfold. We don’t know what the impact is and it’s yet to unfold. I believe that that contention is actually largely untrue. And let me tell you why. When you talk to companies privately, publicly they will not own up to the potential job disruptions as a result of AI. And partly that is because many of the big companies actually are known to be formal job creators, right? And that is a very important part of their image and their contribution to economies and societies. But when you talk to them privately, in India especially, our research shows that they will own up to anywhere between 30 % to 40 % time saving, right, productivity gains, which then translates into significant workforce cuts.

We already have plenty of empirical evidence that suggests that… that AI systems are enabling surveillance, they’re influencing decisions about who gets work, when, and what entitlements people have access to. We also know that AI systems are grossly exacerbating inequality. If you just look at the market caps of some of the top technology companies, you know, NVIDIA’s $5 trillion market cap, right? So there’s a massive accumulation of capital that really, you know, capital share is growing and labor share of income is getting smaller and smaller. So I guess, you know, this discussion that talks about social empowerment, a key question in that is the question of the impact on jobs. And the question that I, you know, put out there is, so if you even buy the idea that we don’t know, that we don’t know what the impact is, what the impact is going to be.

Can we afford to just wait, right? Or do we need to take every action possible in terms of regulations, in terms of building social institutions, in terms of really working to build systems that can manage this inevitable evolution of AI, whether we like it or not. The last thing I’ll say is just, you know, yes, there have been technologies before. Yes, they’ve had their own forms of inclusion and exclusion. But at the end of the day, this is the first time where you have the very pioneers of that technology, Geoffrey Hinton, Stuart Russell, Dario Amodei, the very pioneers of the technology themselves are ringing alarm bells. And would we not be wise to heed them?

So with that, I hope, provocative context setting, I am really grateful, on behalf of the Just Jobs Network, again with support from IDRC, to welcome our really esteemed panelists. Mr. Anurag Behar, who is the chief executive officer of the Azim Premji Foundation, has very graciously agreed to chair this conversation, moderate the discussion. We have Dr. Julie Delahanty, who is the president of IDRC Canada. Thank you, Julie. And Ms. Sandhya Ramachandran Arun, who is the chief technology officer of Wipro Limited. Thank you so much for being here, Sandhya. So, Anurag, over to you.

Anurag Behar

Thank you. Thank you, Sabina. Good evening, everybody. There’s so much investment going into AI. Why is so much investment going into AI? We are in the fifth day of the AI summit. So this is like the 42nd kilometer of a marathon, right? At this stage, such investment has to be justified by some monetization. And where is that monetization going to come from? It’s either going to come from productivity, which comes from labor reduction, or it is going to come from new products and services, or a combination of both. That’s where it’s going to come from, right? We will talk more about that. At this moment, my job is easy.

I’m going to just ask Sandhya, because she’s the representative of the technology world here really, that which way is this technology headed? And in very simple terms, what is she seeing its implications on jobs? I mean, what kind of jobs are going to get displaced, destroyed? And what kind of jobs are going to get created? and what’s the underlying dynamic because of which these jobs will be created and the jobs will be destroyed. So how does she see it in the world of technology? Let’s start with that.

Sandhya Ramachandran Arun

Sure, thank you so much. Thanks, Anurag, for the question. So as far as the tech industry is concerned, we are really witnessing a very huge impact of the AI evolution as a disruptor. We’ve had to revisit how job roles are created. We’ve had to revisit how talent has to be reskilled. And we have also revisited the responsibility, not just in terms of security and safety, but also in terms of what it means to our colleagues and our hiring. I think initially there was a huge amount of fear that we would not hire from colleges, which has now been dispelled, because Wipro continues to hire from colleges, and so do our competitors. But the criteria for hiring have shifted to a more nuanced, more calibrated way of looking at learnability, looking at whether a person communicates technical ideas well, looking at whether a person is adaptable.

Because AI is a technology that is changing as we speak. So no one can claim to be an expert in AI and remain that way for the next five days, possibly, because there’s things that’s going on changing every day. With regard to our own talent, we have created role personas, and we have created very specific learning modules on how the role changes with AI. And everybody from the board to the CEO down to the youngest employee is going through a very calibrated learning process. And there is also a very calibrated way in which services and ways of working are changing. So to that extent, we see a change. We are not seeing a displacement because most of the work that we do is consultative in nature, in spite of the market valuation erosion that we saw some time back because of news from Anthropic and Palantir.

The insiders in the technology world were already aware of the transformative nature of these solutions coming up. And we have already been using these solutions significantly for over a year. So from a market sentiment point of view, possibly there was an erosion, but from a technology impact perspective, we have been bracing ourselves for the change and our journey of transformation continues.

Anurag Behar

I just have a follow -up on that, and then I’ll move to Julie. I’ll put it very, I mean, let’s say, a very, very simple, commonsensical question. Which is that, we are hearing about these tools where coding has become so much easier, right? So, and this is not just about Wipro, it’s about the IT industry in general. So if coding is becoming so much easier, and 50 % or 70 % of coding can be done by these AI tools, then isn’t it inevitable that IT sector jobs will be lost? Or if there’s business or volume growth, much less hiring will happen. So that’s part one to my question. Part two is, if you move away from the IT world, and if you go to let’s say design and marketing, or, I mean, let’s say my world of the academy, the world of research, so many of research assistants and those of you who have used research assistants or work with research assistants, so much of that job is being done easily by AI.

So part one of my question: if coding is becoming so much more efficient, isn’t it inevitable that jobs will be lost, or much less hiring will happen, whichever way? And aside from that, in the outside world, in other industries, what is it that you’re seeing?

Sandhya Ramachandran Arun

Sure. Let me just address the coding part of it. I think for over 15 years, the industry has been trying to explain to the outside world and as well as to the talent aspiring for careers with us that we do not have coding roles primarily. Coding is a very small task in what a software engineer does or a software developer does. There is the need to understand business outcomes. There’s a need to understand customer experience. There’s a need to understand architecture and what is a well -engineered code, right? So this is not new today. This has been in existence. I mean, I’ve been doing digital transformation for the last 15 years, and we’ve been trying to change how the world thinks about these roles.

Yes, the day is here when coding can be completely handed off to an AI agent. And that is indeed a fact, right? But the fact that supports the success of this code in business is really the ability to have a human oversee the design, the engineering, the architecture, the security, as well as delegating the coding work to an agent. So the role of a junior developer really becomes that of a little manager of AI, as opposed to saying, you’re displacing my job. The person’s actually going up if the person really is aware and aligns to what the organization needs in terms of figuring out what is required. And those are the trainings that are happening.

That’s what’s happening in terms of selection. We now have COEs inside engineering colleges where we are talking to universities about this as well. And what about other industries? whatever you’re seeing? So other industries we work with, there is a variation. So if you think about it, marketing, there’s a lot of work that gets offloaded. The strategy, the planning, the oversight on execution, the ROI on marketing still remains a strategic thinking job that remains with humans. But you can generate a lot of good quality visual, audio, and video content using AI today. And probably it’s making marketing a whole lot more efficient. Now, if you take finance, for example, again, a lot of processing gets taken over by AI, but it still needs a human to bring in wisdom in terms of how the data gets interpreted, how decisions are being made, and also to make sure that the AI aligns to human values in some sense.

So those kind of changes are happening in these functions. Industry -wise, there is a lot happening positive, I would say, in, say, healthcare, for example, even in banking, for example, where we are able to fight financial crimes a whole lot better. In healthcare, we are augmenting technicians, clinicians, and doctors with more intelligent input for decision -making. And while AI can make the decision, you don’t allow it to make

Anurag Behar

So, Sandhya, just put a pin on something that you said, and I’ll come back to it in the second round. You used the words human and wisdom. So just put a pin on that, and I’m going to come back to that in my second round. Julie, if Sandhya were less optimistic than she is, she wouldn’t be representing the tech world, you know. So one should expect that she’s as optimistic as she is. But what I wanted to ask you was that, you know, from your vantage point, you’re seeing how governments are dealing with this evolving situation, and not just on AI safety and, you know, all the other things, but particularly on labor markets.

So how can governments and institutions govern AI responsibly, such that any disruption in labor markets is sort of minimized or handled well, or the transition happens well? So let’s assume this picture that Sandhya has painted, that, of course, there is some disruption going on, like she talked about in marketing and advertising. So some people are going to lose jobs there. So what should government institutions do? How does one govern this situation, such that the benefits are maximized? And I’m talking particularly about labor markets, not the other stuff, while harms are minimized.

Julie Delahanty

Yeah, thank you so much. I’m going to answer that question, but the last question just made me think about two things. One was, you know, I’m old enough to remember when computers first came around in the 70s and, you know, what we thought would happen with computers and the job losses that we anticipated. And, of course, we did lose jobs. There was a lot of labor disruption related to, you know, typing pools and different kinds of ways. But at the time, even home computers, nobody could even fathom what you would do with a home computer. The conversation then was that home computers would be used to develop recipes and that you’d have recipes because homes were only where homemakers were.

People couldn’t even, there’s such gendered ideas that people just could not understand what you would do with a home computer. So I think in the same way, some of what… is going to happen with AI in the labor market, we may not be able to anticipate just yet. So just as a reminder of where we came from with other important technologies. But when it comes to governance, I think the important issue is that it’s not really only about the technology, it’s really about institutions, it’s about workers, and it’s also about research. So when it comes to institutions, really without the kind of strong institutions in countries, regulatory institutions, labor institutions, strong research ecosystems that are able to really understand what’s happening in the labor market, I think it’s very difficult to end up having a strong regulation of what’s happening in the labor market.

So just those institutions are incredibly important to understanding where job losses might be, where biases might happen, and really investing in people and institutions is something that has to go hand in hand with our thinking around technologies. Another area is around making sure that when we’re thinking about new technologies, we’re making them very human -centric. And what does the AI4D program mean by human -centric? It’s really about making sure that we’re co -creating new technologies with workers, with communities, with employers, so that we can understand how to enhance job quality, how to enhance productivity, rather than increasing inequalities or changing who benefits.

So really understanding who benefits, who’s going to face the kinds of disruptions, is really important so that we’re not thinking about that as an afterthought, and so that we’re really shaping AI systems using that knowledge. And similarly, on the importance of research, I’ll just give an example from our AI4D work: we’ve done a big research program with partners in sub -Saharan Africa that is collecting household data, firm -level data, worker data, to understand what the real -world impacts of AI are on labor markets. And it’s that kind of tracking, who’s going to benefit, understanding who’s going to be displaced, and how the tasks and skills are really changing, that’s going to allow governments to better design and think about what kind of skills development they need, what kind of social protections they need, and how to support labor rights.

So really, I think growing AI responsibly doesn’t mean avoiding innovation or avoiding change, but it’s really about shaping AI so that it, it does strengthen labor markets and supports workers and creates more opportunities.

Anurag Behar

Thanks, Julie. Thank you so much. I’ll move to Sabina. Sabina, I mean, since you are the labor market expert here amongst us, and the researcher, what is it that you see? There’s so much news, and we have had these five days of this grand summit. What is really going on? What do we understand and what don’t we understand in the context of the impact of AI on jobs? How do you stack it up?

Sabina Dewan

So, just a little tongue -in -cheek: if we went back to the 1600s and asked ChatGPT then if Galileo was correct, it would have said no way, right? So this technology, for all the possibilities that it brings notwithstanding, is not just a technology. We can’t just look at AI as machine learning, large language models. It is a system, it is an instrument that is being utilized for social, political, and economic engineering. And my job is to look at the impact of that in labor markets. So if we limit ourselves just to the question of how many jobs will be lost, how many jobs will be gained, that’s A, not even an appropriate question.

Two, I agree with my fellow panelists that we don’t necessarily know what sort of new possibilities there might be. But what we do know, what we already see, is also something that Sandhya talked about, which is the efficiency gains. And any time there are efficiency gains, there are layoffs. And please, you do the research, right? Like, I do my job. But look at the newspapers. Companies are laying off thousands of workers already. All the big tech companies have in recent years been laying off workers. Now, sure, they can say that this is a confluence of many factors. It’s not just AI, and most of them will not just ascribe it to AI. They might ascribe it to macroeconomic conditions, to the confluence of various other forces like the pandemic or trade shocks, all of which is true.

But AI is one really big disruption that comes on top of all the other disruptions, and there’s already plenty of evidence that is suggesting that these disruptions are not just changing the quantity of jobs in terms of how many companies are already laying off workers. Again, I mean, we’ve heard also projections from the, tech companies themselves, right, what the possible projections are. of disruptions and layoffs are going to be. But we also already have evidence of people being laid off. But then on top of that, I would say let’s look beyond just how many jobs are lost and how many jobs are gained to actually look at, I mean, take the gig economy, for example, and algorithmic management of gig workers.

That is a labor market issue. If a gig worker is wronged, the platform just, you know, they just get kicked off the platform. There’s no mechanism for redressal because it’s an algorithm that’s managing the worker. So who do you talk to? I mean, I can go on and on and on. Now, we might be separating out platforms from AI, but actually the algorithms are AI, and it’s embedded in a platform economy that is increasingly becoming the architecture for transactions, and it’s deeply troubling. And then the last thing I’ll say is, so I’ve already said… like in terms of quantity of jobs, we are already seeing evidence of layoffs, right? We’re already seeing the evidence of layoffs.

It’s just that people aren’t necessarily able to pinpoint and ascribe it to AI. That’s point number one. Two, we need to go beyond the question of quantity of jobs and also look at the impact of this technology on quality of jobs. And third, we need to really deeply think about, again, to Julie’s point, the architectures that can help mitigate some of the potential adverse effects of this technology, both on the quantity and the quality of jobs. And we don’t have the luxury to sit and wait and say, hey, let’s get the empirical evidence and then we’ll figure out what to do. That will be way too late, right? So what do we need? We need countries to think about competition policy.

We need to look. We need to look very closely at tax policy. We need to look very closely at how labor laws need to change. We need to look at social protection systems. We need to look at skill systems, everything that Julie just mentioned, right? But we have to start from an urgency about this is having a huge impact already. It is likely to be, you know, even bigger, and we don’t have the luxury of time to just sit back and wait and say, hey, we need more empirical evidence before we figure out how to mitigate the negative or potentially negative circumstances. So that is what I think is, you know, really, really urgent, that everyone get on that bandwagon and say we need to create these systems and ask for them and do it in our work and do it in our advocacy.

Anurag Behar

Yeah, thank you. I’ll just follow up on that. And Julie, please pardon me for saying this. I’m saying this tongue in cheek, and all my friends and colleagues here who are not from India, please pardon me for what I’m going to say. So, you know, we Indians, why should we care about all this? And the reason I’m saying that is because only about 9 or 10% of our employment is in the formal sector. So even if there is huge disruption in labour markets, maybe 2% of these people are going to lose their jobs, right? So why should we care about all this stuff? Do you have any comments?

Sabina Dewan

I do. You can be sure I do. You can be sure I have a comment about that. So if you look at the numbers, more than 90% of employment in India is informal. So Anurag’s exactly right. He knows his numbers. So, you know, essentially what you’re saying is one out of every ten people stands to be potentially affected, right? That’s one way of looking at it. The other way of looking at it is we have so few good jobs, right? We have so few jobs in the formal labor market. Only one in ten people gets to have a formal sector job. And now you’re taking that away as well, right? That stands to be disrupted.

So again, we’re moving to a world of work that is much more precarious, much more insecure, much more uncertain, where workers aren’t even called workers anymore. We call them self-employed contractors. They have no health insurance. This is the precaritization of the labor market. So not only do you have pandemic, climate change, energy transition, trade shocks, and AI disruption, all of it disrupting everything, but you are also now moving to a place where work is becoming more and more informal. Formal jobs are being gotten rid of in the name, please pardon the phrase, of efficiency gains, right?

And so that’s why in India we should be really scared, because we have so few formal jobs. And then imagine if you have these jobs in the IT sector in Bangalore disappearing: all the workers that used to go to bars and restaurants and get loans to buy houses and cars, that starts to disappear, and it has cascading effects across the economy. So the impact of this in the global south is definitely beyond the few formal sector jobs. And it’s deeply disturbing. And we need to actually work to understand from technologists very clearly how these efficiency gains are going to happen, and how different governments, and public architecture, can manage some of these changes. So we do need to care. Definitely need to care. We need to care urgently.

Anurag Behar

All right. So I’m going to come to Julie on this and come back to you, Sandhya, because I put a pin on something that you said, right? So, Julie, let’s assume that the alarm that Sabina is raising is at least half true, right? It’s more than half. You know, I have a deep conflict of interest, and I’ll tell you once I’m done with this. So, Julie, what are the lessons that you’re seeing across countries? You’re seeing the vast landscape, right, and IDRC has a view across the continents. So what lessons can be learned from across the continents, such that AI is able to create opportunities, part of what Sandhya talked about, and doesn’t deepen inequality, or minimizes it?

What are you seeing across the countries? Something, some good stuff.

Julie Delahanty

What is that regulation? We have the Global Index on Responsible AI that some of you may have heard about. It’s been talked about a lot during the conference, or at least in some of the sessions that I’ve been to. And really what that is, it’s the largest global rights-based data set on responsible AI. And what is distinctive about it is that it includes a dedicated focus on labor protection and the right to work. It looks at 138 countries. So by providing that country-level, comparable data, it’s helping governments to understand what they might need to do better, what some of the issues are, how they can improve.

So really, we’re using that information to support governments in understanding what is the regulation, what is the solution that they need. You know, it has to be based on some evidence. And I think the third big thing, which won’t be a surprise to anybody here, is that we really need to have good evidence, and evidence really matters when it comes to these issues. So tools like the Global Index on Responsible AI really allow policymakers to move beyond kind of abstract must-fix regulation, to assess how governance of AI actually affects people’s rights, affects their jobs, affects their working conditions, and to support more proactive policymaking on labor regulations, skills, social protections, et cetera.

And I think equally important is that we’re still learning. There is no standardized “here is the regulation you need” codified anywhere. Through the kind of work that we’re doing, I think we’re learning what’s the balance between supporting innovation and still supporting regulation and safety. And I think working together across many countries to share that kind of information is what’s going to support us in finding the right tools.

Anurag Behar

Thanks, Julie. I’m going to come to you, Sandhya. But I just want to disclose something to all of you; that’s my conflict of interest. You know, Sabina is a labor market researcher, and naturally I would think she’s saying what she’s saying. Julie represents IDRC, and therefore she’s saying what she’s saying. Sandhya is the tech person here, so she’s saying what she’s saying. My problem is that I’m responsible for this organization, the Azim Premji Foundation. And my problem is the following: the foundation owns about 70% of Wipro. Okay. So whatever is good for a tech company is good for us, right? On the other hand, my job is not to take care of the technology world.

My job is to take care of the most vulnerable people in the country, right? The very poorest, the most marginalized, those who have no recourse to social protection. That’s my job. So I am a deeply conflicted person, right? Very deeply conflicted person. And I wanted to disclose that because I’m going to come to that towards the end. And it has a specific bearing on the question that I’m going to ask Sandhya, which is, you said something fascinating. And I want to put a pin on that. And I’m pulling your leg, you know, which is that rarely do you hear such words from a tech person. She talked about human care and wisdom, right? Didn’t she?

Okay. So, you know, really, my takeaway from what you were saying is that the tech stuff, the coding and that kind of thing, that can get automated. But something that requires human understanding, understanding people, understanding desires, how you work with people, that’s what is hard to automate. And that’s something that you’re already seeing, right? So would you want to comment on that?

Sandhya Ramachandran Arun

Yeah. So the stereotype that techies aren’t human is a little unfair, I think, so don’t anchor it in your heads. But yeah, where do I start? At the end of the day, what do technology consulting and technology services try to do? They try to help our client businesses become more successful. And our client businesses in turn become more successful when they are innovative, when they are creative, when they are growing, and when they are doing their business profitably. Or, if they have already reached a state of maturity, when they are trying to bring in a whole lot of efficiencies as well, right?

So it’s the S curve: you have an idea, you nail it, then you scale it, and then you start sailing. And when you’re sailing, that’s when you become a big battleship, and you have to focus on discipline and efficiency and ensure that you’re making profits just the same even while you’re running this big ship. But the cycle doesn’t end there. It keeps going. You keep coming up with new ideas, you keep scaling them, and you keep sailing. And so profitability starts off with an investment, it grows, and then you have to become super efficient to remain profitable. And I’m saying this to my boss, because every dollar that we earn funds, to the tune of about 66 cents, whatever efforts the KMG Foundation undertakes for welfare, right?

And I think it’s a beautiful model, and I don’t think an AI could have thought of it. So therefore I do believe very strongly that creativity, wisdom, vision, foresight, human centricity are core to any technology disruptor that comes about. Imagine the days when there were horse carriages: all the horses would have been crowding the roads, people would have been going from place to place, and at the end of the day you would have had a whole lot of methane, which, through global warming, might have finished us off long ago. But vehicles did come, and you did have carbon fuel, and the evolution continues.

So I don’t think technology is going to stop. Human ingenuity is going to keep bringing technology disruptors, and these disruptors are going to be more and more exponential in terms of what they can do. And it is up to humans to figure out how to create policy, how to create governance mechanisms, and how to ensure that we derive the benefits, mitigate the risks, and at the same time keep humanity at the center of all of this, right? Now, this is easier said than done, but we’ve done it with nuclear energy. Despite the disasters, the fact that you and I are still alive today, thriving, and living better lives than we ever lived in the last 100 years is evidence that, yes, you can have accidents, but accidents are created by humans, and they are preventable.

And it’s up to the leadership to ensure that they put in the required guardrails. It could be policy, it could be governance, it could be guidelines, whatever you call it. And you can even hire a leader…

Anurag Behar

Yeah, it’s good to hear that, you know. I’m just going to do one more round and then perhaps have the last word, if I may. Okay. So, Sabina, what’s your take? What should we do? What should we do, really?

Sabina Dewan

So I’ve already kind of said what we should do. But first, Sandhya, everything you said really resonated with me, and I fully agree that humans have to take responsibility. Yet I can think of a few very worrying scenarios where there are leaders in the world that have access to nuclear weapons who perhaps shouldn’t have that access, right? So how much confidence do we have in people, particularly when you look at the overall trend of growing precarity? Again, take India alone: fifty-eight percent of our employment is now self-employment, and these are workers who have no health insurance coverage or any kind of safety net.

Add to that the fact that there are all these different forces coming that we don’t know: AI disrupts jobs, or pandemics happen. We all saw what happened with migrant workers walking back to their villages, hundreds of thousands of migrant workers, right? There is a lot more precarity in the labor market than there ever has been in modern history. And the problem is that regulation, and the regulation of the labor market in particular, is getting weaker and weaker across the globe in this respect. And then we don’t have precedent, as Julie said. We’re still trying to figure out exactly what we should do, right?

But I will say, in the meantime, that AI is different, because this is also the first time that research is showing cognitive decline in the current generation of young people, right? Rates of depression, rates of anxiety, cognitive decline. How does cognitive decline affect your ability to operate at work, and then to avoid being replaced by machines that are more efficient because you’re getting stupider? Sorry, but this is a really worrying scenario. So what should we do? I think I’ve said this multiple times: regulation and the building of social institutions. But I’ll take Julie’s challenge and say, okay, let’s go a level deeper.

I think we need to look at competition policy very closely. We need to look at antitrust. We need to look at tax, and within tax we need to look at the full gamut of tools at our disposal, from certain kinds of transaction taxes to a wealth tax to corporate tax rates. We certainly, in an area that I know well, need to look at labor regulations, right? There’s a lot of discussion now about what should happen in the gig economy. But if two people have lost their jobs, how do you distinguish between them?

You can’t say, okay, this person lost their job to AI, so we’re going to give them health care and other kinds of support, but that person didn’t, so we’re not, right? You need universal systems of support for workers, of health care, of other forms of social security, that enable consumption smoothing as well, so that economies keep functioning. We need to invest heavily in our skill systems. I can quote Indian numbers till I’m blue in the face: for all the investment in and talk about skills training in India, only 4.1 percent of respondents in our labor force survey identify as having any kind of formal skills. Only 4.1 percent, despite us saying “Skill India” and talking about investments in skills for well over a decade and a half.

There’s also well-documented research about how poor the quality of education is. So how do you take a young person in a remote part of India who can barely read and write, who may say, “I’ve graduated, I’ve done eighth class, tenth class, even twelfth class,” but can barely do foundational reading or math, and tell them, “I’m going to train you for AI”? It doesn’t work. So we need to fundamentally rethink regulations, and we need to very urgently work on education and skill systems that meet people where they are.

We definitely need to think about universal social protection systems that enable workers to transition from one sector to another, from one occupation to another. And I can go into much more detail, because this is something that my organization has worked on a great deal: what kind of systems we need to enable workers to be better protected and be able…

Anurag Behar

Thanks, Sabina. We’ve got, I think, five minutes or so, so I’m going to try and wrap up. Julie, would you want to comment?

Julie Delahanty

Yeah, I just want to make a fairly random point, I think. And that is, in addition to the Artificial Intelligence for Development program that we have, we also have a Future of Work project. And one of the interesting things there that we don’t talk about as much: everybody is very worried about job loss; that’s kind of the big issue. But actually, one of the bigger issues is the rethinking of how to work and ways of working, and the disruption that’s happening within jobs and within the workplace. Within institutions and organizations, that’s not necessarily about job losses. It’s about a complete shift in the way that we do our work, and how workers are going to adapt to that fundamental shift in the way that they work.

So it was just a random thought.

Anurag Behar

I don’t think it’s a random thought at all. I think it’s a salient foundational thought, you know, for this discussion. You want to comment on that one line? Because that’s such an important point.

Sabina Dewan

Yeah, no, just to say that the Future Works Collective is a global consortium of researchers, funded by IDRC, that JustJobs is part of, and it focuses exactly on that. So I agree 100% that this is a foundational and very important issue.

Anurag Behar

Sandhya, what about you? How would you want to respond to everything Sabina has said?

Sandhya Ramachandran Arun

Look, I think watching and waiting is certainly not an option. We don’t want to be in a Game of Thrones situation where you keep saying “winter is coming” for seasons on end, and then it comes and nobody is prepared for it. We know what’s coming, and we know that what’s coming is capable of evolving and changing tremendously. So we need to learn to change. And yes, we do need to elect good leaders. We do need to have policy at all levels. We need to have policy embedded in platforms. And of course, we need a lot of reimagining of work and training of the workforce. So yes, to some extent, painting doom and gloom is good.

Then we start acting, right? But it also shouldn’t make you so paranoid that you become a deer in headlights. So yes, we should act, and we should move forward on all of that which all of us agree on.

Anurag Behar

It seems so. It seems so, absolutely. No, but, you know, I think that’s, in some senses, a very good summary, what you just said. What I wanted to say relates to this phrase that’s used, boomer and doomer. In a sense, my head is the boomer and my heart is the doomer, given my role. I want to take you, just for a minute, to my own domain, which is education. We run three universities. At any point in time, we are working with more than 100,000 teachers, right? So I’m an education person; I’m not the labor market or the tech person here, right?

And I am deeply concerned by the effect of AI on education. Deeply, deeply concerned. In fact, I feel that AI is attacking the very foundation of education. What AI does, as the phrase artificial intelligence itself suggests, is let you outsource your thinking. So teachers are outsourcing their thinking, and students are outsourcing their thinking. And that’s what Sabina was referring to, though she was referring to it in the context of social media: for the first time in this round of assessments, we are seeing cognitive declines, or at least, on test measures, declines in student performance. I cannot tell you how serious the issue is.

And it’s impossible to regulate this, because it’s everywhere. So the only way we are able to deal with it, in the universities at least, is that all assessment, all examination, is now returning to the old-world, paper-and-pencil, in-class test. No home assignments, no project work, nothing. Just come here, sit, and write the examination. It is truly serious. We don’t know how to tackle this right now. And the reason I talk about it is that I want to go back to the analogy that Sandhya used, and I’m so glad that she did: that this is as serious as nuclear technology.

It is as serious as nuclear technology. And in one very deep way, it is far more serious, because nuclear technology did not reach out and affect every individual human being. The possibility for policies and governance to circumscribe it, to put boundaries around it, to manage it, was far greater. Here, perhaps the most disruptive of technologies arrives in retail form, right? This is the retail transformation of humanity. It is so hard to deal with this. But I’m really glad that, with the three of you here, we have this reasonable conclusion, if I may say so, that we are facing something as serious as nuclear technology.

And you can’t run away from it. It’s happening. Job losses will happen. We’ve got to figure a way through it. And I would want to close on this human note: eventually, perhaps, those jobs that require wisdom, empathy, care, and human understanding are going to be the hardest to replace, if they are replaced at all. They will stay. And that’s what one can see in the tech world. So with that, I want to thank all three of you. Thank you so much. I want to thank all of you for coming here. Thank you very much. Thank you.


Sabina Dewan


Productivity gains hide workforce cuts

Explanation

Companies publicly promote AI‑driven productivity, yet privately acknowledge that the resulting 30‑40% time savings translate into substantial staff reductions. This gap reveals that AI is being leveraged primarily to cut labor costs rather than solely to improve output.


Evidence

“But when you talk to them privately, in India especially, our research shows that they will own up to anywhere between 30% to 40% time saving, right, productivity gains, which then translates into significant workforce cuts.” [1]. “When you talk to companies privately, publicly they will not own up to the potential job disruptions as a result of AI.” [54].


Major discussion point

Impact of AI on Jobs and Inequality


Topics

The digital economy | Artificial intelligence


Layoffs and gig‑economy algorithmic management

Explanation

AI‑driven efficiency is already prompting large‑scale layoffs in big‑tech and exposing gig workers to algorithmic control without redress mechanisms. These trends threaten job security and exacerbate precarity in the informal sector.


Evidence

“We’re already seeing the evidence of layoffs.” [10]. “Companies are laying off thousands of workers already.” [11]. “But then on top of that, I would say let’s look beyond just how many jobs are lost and how many jobs are gained to actually look at, I mean, take the gig economy, for example, and algorithmic management of gig workers.” [31]. “We already have plenty of empirical evidence that suggests that… that AI systems are enabling surveillance, they’re influencing decisions about who gets work, when, and what entitlements people have access to.” [33]. “There’s no mechanism for redressal because it’s an algorithm that’s managing the worker.” [36].


Major discussion point

Impact of AI on Jobs and Inequality


Topics

The digital economy | Social and economic development | Artificial intelligence


Urgent policy reforms: competition, antitrust, tax, social protection, skills

Explanation

To mitigate AI‑induced labor disruption, immediate reforms are needed across competition policy, antitrust, taxation, universal social protection, and skill development systems. Without such measures, the gains from AI risk deepening inequality and informalisation of work.


Evidence

“We need to look at competition policy.” [68]. “We need to look at antitrust.” [69]. “We need to definitely think about universal social protection systems.” [70]. “I think we need to look at competition policy very closely.” [71]. “We need to very urgently work on our education and skill systems that meet people where they are.” [72]. “…only 4 .1 percent of respondents in our labor force survey acknowledge … having any kind of formal skills only 4 .1 percent despite … skill India…” [73].


Major discussion point

Regulation, Governance, and Institutional Response


Topics

The enabling environment for digital development | Artificial intelligence | The digital economy


AI pioneers sound alarms

Explanation

Leading AI researchers such as Hinton, Russell and Amodei have publicly warned about the existential risks of AI, underscoring the urgency for precautionary measures. Their warnings signal that the technology’s impact may be far more disruptive than currently recognised.


Evidence

“But at the end of the day, this is the first time where you have the very pioneers of that technology, Geoffrey Hinton, Stuart Russell, Dario Amodei, the very pioneers of the technology themselves are ringing alarm bells.” [117]. “But we have to start from an urgency about this is having a huge impact already.” [119].


Major discussion point

Optimism vs. Caution about AI’s Trajectory


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society



Anurag Behar


Coding automation threatens IT jobs

Explanation

AI tools that can automate 50‑70% of coding tasks are poised to displace a large share of IT sector employment. The ease of code generation raises concerns that many traditional developer roles will become redundant.


Evidence

“So if coding is becoming so much easier, and 50% or 70% of coding can be done by these AI tools, then isn’t it inevitable that IT sector jobs will be lost?” [18]. “So part one of my question, if coding is becoming so much more efficient, isn’t it inevitable jobs will be lost?” [114].


Major discussion point

Impact of AI on Jobs and Inequality


Topics

The digital economy | Artificial intelligence


Broad job displacement and need for responsible governance

Explanation

Beyond software engineering, AI is automating research assistance and other knowledge‑work, widening the scope of potential job losses. Effective governance frameworks are required to minimise disruption and ensure a just transition.


Evidence

“Part two is, if you move away from the IT world, … so many of research assistants … that job is being done easily by AI.” [24]. “So how can governments and institutions govern AI responsibly, such that any disruption in labor markets is sort of minimized or handled well, or the transition happens well?” [48].


Major discussion point

Regulation, Governance, and Institutional Response


Topics

The enabling environment for digital development | Artificial intelligence | The digital economy


Conflict of interest and boomer/doomer framing

Explanation

The speaker acknowledges a personal conflict of interest while highlighting the “boomer” optimism versus “doomer” pessimism that frames the AI debate. This dichotomy illustrates the tension between enthusiasm for AI’s potential and concern over its societal risks.


Evidence

“That’s my conflict of interest.” [91]. “What I wanted to say was that this phrase that’s used, boomer and doomer, boomer and doomer.” [129]. “So in a sense, my head is the boomer and my heart is the doomer, given my role.” [130].


Major discussion point

Optimism vs. Caution about AI’s Trajectory


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society



Sandhya Ramachandran Arun


Junior developers become AI managers

Explanation

AI‑assisted coding shifts the role of junior developers from writing code to overseeing AI agents, effectively turning them into managers of the technology rather than eliminating the position outright.


Evidence

“So the role of a junior developer really becomes that of a little manager of AI, as opposed to saying, you’re displacing my job.” [16].


Major discussion point

Impact of AI on Jobs and Inequality


Topics

The digital economy | Artificial intelligence


AI boosts efficiency but human oversight remains essential

Explanation

AI makes marketing, finance and healthcare processes more efficient, yet strategic thinking, interpretation of data, and alignment with human values still require human expertise. This hybrid model underscores the continued need for skilled oversight.


Evidence

“And probably it’s making marketing a whole lot more efficient.” [5]. “Now, if you take finance, for example, again, a lot of processing gets taken over by AI, but it still needs a human to bring in wisdom in terms of how the data gets interpreted, how decisions are being made, and also to make sure that the AI aligns to human values in some sense.” [25]. “In healthcare, we are augmenting technicians, clinicians, and doctors with more intelligent input for decision -making.” [42]. “The strategy, the planning, the oversight on execution, the ROI on marketing still remains a strategic thinking job that remains with humans.” [40].


Major discussion point

Impact of AI on Jobs and Inequality


Topics

The digital economy | Social and economic development | Artificial intelligence


Hiring now values learnability, adaptability and role‑specific learning modules

Explanation

Recruitment criteria have shifted toward assessing learnability, communication and adaptability, while companies create role personas and targeted learning modules to help employees transition with AI. This reflects a broader re‑skilling imperative.


Evidence

“But the criteria for hiring has shifted to a more nuanced, a more calibrated way of looking at learnability, looking at whether a person communicates well, technical ideas, looking at whether a person is adaptable.” [97]. “With regard to our own talent, we have created role personas, and we have created very specific learning modules on how the role changes with AI.” [22].


Major discussion point

Reskilling, Education, and the Future of Work


Topics

Capacity development | The digital economy | Artificial intelligence


Need for governance guardrails and human‑centric policies

Explanation

Technology firms must embed policy, guardrails and human‑centric safeguards into AI platforms to ensure benefits are realised while risks are mitigated. Human oversight and leadership commitment are essential for responsible AI deployment.


Evidence

“We need to have policy embedded in platforms.” [75]. “And it’s up to the leadership to ensure that they put the required guardrails.” [77]. “So therefore I do believe very strongly that creativity, wisdom, vision, foresight, human centricity is core to any technology disruptor that comes about.” [79].


Major discussion point

Regulation, Governance, and Institutional Response


Topics

The enabling environment for digital development | Artificial intelligence



Julie Delahanty


Strong institutions needed to monitor AI’s labor impact

Explanation

Effective regulation of AI’s effects on work requires robust labor, regulatory and research institutions capable of tracking job losses, biases and working‑condition changes. Without such institutions, policy responses will be weak and fragmented.


Evidence

“So when it comes to institutions, really without the kind of strong institutions in countries, regulatory institutions, labor institutions, strong research ecosystems that are able to really understand what’s happening in the labor market, I think it’s very difficult to end up having a strong regulation of what’s happening in the labor market.” [59].


Major discussion point

Regulation, Governance, and Institutional Response


Topics

The enabling environment for digital development | Artificial intelligence | The digital economy


Global Index on Responsible AI as evidence‑based policy tool

Explanation

The Global Index on Responsible AI provides policymakers with rights‑based data to assess AI’s impact on jobs, rights and working conditions, enabling proactive labour‑policy design. It exemplifies how evidence‑based tools can bridge innovation and regulation.


Evidence

“So tools like the Global Index on Responsible AI really allows policymakers to move beyond kind of the abstract must -fix regulation to assess how governance of AI actually affects people’s rights, affects their jobs, affects their working conditions, and supports more proactive policymaking on labor regulations, again, skills, social protections, et cetera.” [45]. “And I think one of the – we have this AI – the Global Index on Responsible AI that some of you may have heard about.” [85]. “And I think working together across many countries to share that kind of information is what’s going to support us in finding the right tools.” [90].


Major discussion point

Regulation, Governance, and Institutional Response


Topics

Artificial intelligence | The enabling environment for digital development | Monitoring and measurement


Future of work requires rethinking work practices and skill tracking

Explanation

AI is prompting a fundamental shift in how work is organised, demanding new ways of working, continuous skill tracking and redesign of job roles. Understanding who benefits and who is displaced is essential for designing skill development and social protection systems.


Evidence

“But actually, one of the bigger issues that’s happening is rethinking how to work and ways of working and the disruption that’s happening within jobs and within the workplace.” [39]. “It’s about a complete shift in the way that we do our work and how workers are going to adapt to that fundamental shift in the way that they work.” [101]. “So I think in the same way, some of what… is going to happen with AI in the labor market, we may not be able to anticipate just yet.” [52]. “And it’s that kind of tracking, who’s going to benefit, understanding who’s going to be displaced, and how the tasks and skills are really changing that’s going to allow governments to better design and think about what kind of skills development they need, what kind of social protections they need, and how to support labor rights.” [66].


Major discussion point

Reskilling, Education, and the Future of Work


Topics

Capacity development | Social and economic development | Artificial intelligence


Balancing innovation with proactive, evidence‑based regulation

Explanation

While AI innovation should not be stifled, proactive regulation grounded in evidence is necessary to prevent deepening inequality and protect workers. Policymakers must use data‑driven insights to craft safeguards that coexist with technological progress.


Evidence

“Through the kind of work that we’re doing, I think we’re learning what’s the balance between… supporting innovation… and still supporting regulation and safety.” [86]. “So really using that information to support governments in understanding what is the regulation, what is the solution that they need, not just – You know, it has to be based on some evidence.” [87].


Major discussion point

Optimism vs. Caution about AI’s Trajectory


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building India's Digital and Industrial Future with AI


Session transcript

Debashish Chakraborty

convergence of AI, telecom, and data sovereignty, all woven around digital public infrastructure. I'm Debashish. I represent the GSMA. I'll request Julian Gorman, Head of APAC at the GSMA, to give his keynote address, and then we will start the panel discussion. Julian.

Julian Gorman

Good morning, everyone. A warm welcome to the distinguished guests, colleagues, partners and speakers who have joined us today. It's a great honour to open this session for the GSMA. The GSMA, for those who don't know, is the global organisation uniting the mobile economy, meaning mobile operators and the wider ecosystem, to unlock the power of connectivity so that industry and society thrive. And this session really goes to the core of that: intelligent telecom networks for digital public infrastructure, a topic that sits right at the intersection of where the telecom industry is heading and where national digital public infrastructure is heading and being built. Of course, India is at a pivotal point in its digital journey and a key player in this space.

India has been on the digital public infrastructure journey for a lot longer than the rest of us, but over the last decade we have really seen the rise of digital public infrastructure recognised, from identity and payments to digital commerce and data empowerment. It has shown the world what is possible when scale, innovation and public purpose come together, delivering inclusion, trust and economic impact at a level few countries have achieved. But as we enter this next phase, shaped by AI, real-time data and increasingly autonomous systems, we need to ask a fundamental question: what role do telecom networks play in this new digital infrastructure? For years, networks were viewed simply as connectivity providers, and that view is changing.

Today's mobile networks are becoming intelligent, programmable and trusted layers of national infrastructure. They are shaping how AI models perform, how services are optimised at the edge, how fraud is stopped before it happens, and how digital identity remains secure in a world of growing complexity. In India, networks already support core DPI functions: identity verification, payments, emergency response and major public service platforms. As AI becomes embedded in these systems, the network no longer sits in the background; it becomes part of the decision-making fabric, providing context and priority for tokens and the other critical elements of data that flow through digital public infrastructure. Through this, the network becomes a contributor to governance, resilience and trust.

And that brings us to the second major theme of the day: digital sovereignty. In an AI-driven world, sovereignty is no longer just about where the data is stored; it's about having strategic control over the infrastructure. The key is the ability to manage the infrastructure, the standards and, increasingly, the intelligence that underpins the national digital system. Countries want to know: how do we build AI-enabled public infrastructure that is safe, interoperable and aligned with national priorities, while still remaining connected to global markets and innovation? This is exactly where global standards matter. Fragmentation, whether technical, regulatory or geopolitical, slows everything down. Interoperability, open APIs and harmonised frameworks help countries scale confidently while staying part of the global digital economy.

India is uniquely positioned to show how this balance can be achieved: open, yet sovereign; scalable, yet secure; national in ambition, but global in design. And our goal today is not just to talk about these themes, it is to translate them into direction: to identify practical next steps, to create space for collaboration, and to learn from India's experience in ways that matter for economies at every stage of digital development. So I'm looking forward to the discussion and to the concrete actions we can shape together, and I look forward to big contributions from the panel today, and to hearing more from the audience later. So thank you. Debashish, I hand over to you.

Debashish Chakraborty

Thank you, Julian. Thanks for the opening remarks. Am I audible? Looks like yes. So let's begin. We have a fantastic panel of experts here, so let's start the discussion. What we have seen over the past few decades is that telecom networks have evolved a lot: from just enabling voice, to powering mobile broadband, to becoming the trusted digital infrastructure we use today, underpinning modern economies. Today's networks are no longer passive carriers of data. They are becoming intelligent platforms where AI is deployed, either as an add-on or already embedded into the network; where digital identity is authenticated; where fraud is mitigated; and where sovereignty over data and decision-making is increasingly exercised.

As India advances its digital public infrastructure and its AI ambitions, the key question is how we ensure these systems remain trusted, interoperable, and globally compatible while avoiding fragmentation and duplication. And that is the conversation we aim to explore today. Let me start with Rahul, who is the Chief Regulatory Officer for Airtel. Rahul, we often talk about digital public infrastructure as applications and platforms, but at the foundation sits the network, which you drive. So from Airtel's perspective, what makes telecom networks uniquely positioned in the digital world as India's trusted infrastructure layer, beyond just connectivity?

Rahul Vatts

Thank you, Debashish, and thank you to the GSMA for this session. It's a session of particular interest to me as a user in the digital ecosystem, and of course to the entire digital fraternity, because if there is one thing India is doing great, it is digital public infrastructure, to the extent that President Macron yesterday actually mentioned it as the biggest export India has made across the globe. So let's talk about what's really happening today. If you look at the data for January alone, India transacted 28 lakh crore rupees through its UPI infrastructure, spread across a billion people. And all this is happening on what? On the foundation layer, which is the connectivity layer.

And so for us at Airtel, this is not just a plumbing job; it is the very heart of the foundation we are laying for trust. How are people transacting this much money? Because they trust the ecosystem through which they want to do it. And beneath this layer is the connectivity which has powered the country. Look at the numbers in a country like ours: we have more than a million BTSs powering the entire country; we have more than 500 lakh kilometres of fibre running in various shapes and forms across the country; as an industry, we have more than a thousand edge and large hyperscale data centres. Can you imagine, each mobile switching centre carries a load of at least 30 to 50 million people, sometimes even larger. So this is the scale at which the infrastructure is becoming the layer on which we operate. And what is all this enabling? Let's look at that.

What it is enabling is that for every transaction you do, there is an OTP or SMS coming out, right? And this OTP, this SMS, is what? It's a layer of trust: people trust the message they receive on their system. Look at the Aadhaar-enabled payment system: more than 500 million rupees done on that alone. And how is that enabled? Through connectivity that happens in less than 2 milliseconds. So this again is an example of that same ecosystem. Let's go further. What's really happening, and how are we doing? I don't know how many of you actually visited the Airtel stall. We have solutions where banks can use telco indicators to make a smart choice about giving you loans, right?

We rank a person’s history based on a low risk or a high risk which enables the bank to be able to take a smart decision in a matter of milliseconds. Remember, in India, it’s not the large loans that matter. A lot of loans which are happening in the ecosystem are less than 200 lakh rupees, right? Just 2 lakh rupees or below are also a large amount of loans which happen. there is a financial risk fraud indicator which the department has created banks can dip into that risk indicator and also get a score out of that to say okay what is it that we are really you know trying to get out of this all this is what the layer is let’s look at what vs telcos are doing vs telcos are giving you trust to say that the call you are giving call you’re receiving is spam free or not right we have got a at least three products launch over last one year we first launch our you know solution which warned you about a suspected spam right then we went ahead we started blocking fraudulent links you know basis the large database we created with you know global players like google and open fish and mavener at the third stage we just launched around two weeks back a very powerful product you know one of the reasons for spam is urgency that i’m calling you please share your otp urgently right and to remove that now we have created a friction you know one of the reasons for spam is urgency that i’m calling you please share your otp urgently right if you are on a call you get a flash message saying please be careful you are on a call you’re receiving otp this may be spam so it creates a friction for those 30 seconds to say do you want to really do this or not all this is what this is uh reinforcement of the trust we want to create in the ecosystem let me go a little larger uh we are operating in large countries uh uh you know across the globe and one of the things we have been doing wonderfully well in africa is to really take the digital public infrastructure blueprints from india and take them 
to africa uh so it’s all about identity it’s about payments you know it’s about how they are able to transact and we have got a solution called dpi inbox right which we are in conversation with a lot of african leaders to be able to transplant the india stack onto the african ecosystem and how do we do that we are giving a bundle of hardware and a software we are giving a very air -capped cloud you to do that and we are creating the entire ecosystem for them so that they are able to implement a digital public infrastructure stack in their countries.

So really, Debashish, it's about the trust we try to create with the infrastructure layer, while getting smart and making people's and customers' lives easier. That is what we are doing.
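The OTP "friction" Rahul describes can be sketched as a simple screening rule: when an OTP message arrives while the subscriber is on a live call, interpose a warning rather than delivering silently. This is a hypothetical illustration; all names, message shapes, and the regex are my own assumptions, not Airtel's actual system.

```python
# Hypothetical sketch of the OTP "friction" described above: if an OTP SMS
# arrives while the subscriber is on an active call, the network flashes a
# warning before delivering it. Names and shapes are illustrative only.
import re
from dataclasses import dataclass

OTP_PATTERN = re.compile(r"\b(otp|one.?time password)\b", re.IGNORECASE)

@dataclass
class Subscriber:
    msisdn: str
    on_active_call: bool

def screen_incoming_sms(sub: Subscriber, sms_text: str) -> str:
    """Decide what the messaging layer should do with an incoming SMS."""
    if OTP_PATTERN.search(sms_text) and sub.on_active_call:
        # A live call plus an incoming OTP is the classic social-engineering
        # window: interpose a warning instead of delivering silently.
        return "FLASH_WARNING_THEN_DELIVER"
    return "DELIVER"

print(screen_incoming_sms(Subscriber("919800000001", True), "Your OTP is 482913"))
print(screen_incoming_sms(Subscriber("919800000002", False), "Lunch at 1pm?"))
```

The point of the design is that the check lives in the network layer, where both the call state and the message stream are visible at once, which no single app on the handset can see.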

Debashish Chakraborty

Thanks, Rahul. Those were very key messages about how the network is being used for citizen-centric services, and how the network has evolved over the last few years. Coming to you, Martin. Martin represents Vodafone Idea. You heard Rahul speaking about how the network is being used for various citizen-centric services, for fraud mitigation, for taking care of spam. A lot is being done by the mobile network operators, right? But my question is this: there is also a growing discussion globally today about avoiding parallel digital infrastructures. India is building new DPI trust layers for authentication and fraud prevention. How do we ensure that the layers the MNOs, the mobile network operators, are adding complement, and do not duplicate, operator-led capabilities like the Open Gateway APIs that the GSMA has?

Speaker 1:

So, in fact, I was part of one of the entities which set up and contributes to the largest DPI infrastructure today. I was earlier associated with NPCI, and then moved to the telco side, where I have been for the past five years now. On the overall DPI infra, I would want to answer this by bringing in four key words: one, context and enrichment; and two, serviceability and purpose. When the entire DPI infrastructure evolved for the country, it evolved with two core purposes to address. We wanted to take the entire digital infrastructure to reach the last-mile citizen.

We also had the objective of financial inclusion to be driven for the country. So the DPI framework was created to meet these two core objectives. The role of a TSP in this, by and large, was to ensure that the goals of Digital India and financial inclusion reached the masses. That's the role the TSPs played. And with every net-new tech evolution, various things come in. Fraud evolved: because banking happened at the doorstep, fraud also started happening at the doorstep. You don't have to go and loot a bank today; you can loot thousands of individuals in the easiest manner, and fraud evolved that way. So in each of these contexts, while we realised the Digital India vision and financial inclusion for the country as a whole, the DPI networks played a role, and the TSPs played a role, to ensure these realisations came in handy.

Now, Rahul briefly touched upon a few of these. We are a limited number of TSPs in the country, three or four of us comprehensively, and amongst us we end up working together. I still remember those days when, from my previous entity, I would go to TRAI to ask: how do I find out fraudulent mobile numbers? Today we look at it as the FRI, which is exposed by the DoT itself to multiple financial institutions, which can look it up and then take a decision. There is also something called the Digital Intelligence Platform, which again converges the data of all the collaborating TSPs and is provided by the DoT to the rest of the financial institutions to look into.

Now, all of these bring me back to my word: context. These are pieces of information that multiple of us as TSPs are able to provide, collate and make available. Who can consume them? Any of these providers, because the fraud is not happening to me as a TSP. For me, if there is a call connected between person A and person B, it is revenue to me. But for a bank, if something else is going on while the call is in progress, that is context. And this context is something you can provide back to enrich the data, and with that enriched data comes decision-making for whatever they want. Say I see an Aadhaar verification happening live from a location A, while at the same time there is a call happening showing the person's presence in B. It does not matter to a telco, because for me both are actually revenue.

But for an authentication entity, or an entity which is approving a financial transaction, they may consider that a fraud. So context, and the enrichment of the context associated with the data: the TSP today has the ability to provide a large amount of context-driven information to these individual players, whereby they can consume it for their own utilisation and make active decisions. That's the way I would want to comment on it. One good part is that at least all three or four of us operate on a converged platform. We have the experience of the DLT that we set up during the earlier days of spam. Spam in those days was only the unwanted telemarketer messages that were coming; it has since evolved.

Spam has become scam. So now we are working on how to overcome scam, and beyond scam, whatever comes next. Now there are digital arrests, with humongous amounts of money being lost. So as TSPs, we work in conjunction, put things in order, collaborate with the likes of COI and the DoT to set up infrastructure as open APIs, and then make those APIs interfaceable for institutions who want to take decisions appropriately. Rahul touched upon digital lending, right? The country is serviced today by more than 1,100 member banks. Sitting in metros, we might remember only a few banks, but to service such a large nation we have 1,100 member banks.

Imagine: these banks don't always have to go back to CIBIL alone to underwrite a loan. You may instead look at postpaid consumers and the quantum of money they pay regularly, et cetera. It's an inclusive decision. Those are the open APIs we have been able to set up. And India has been at the forefront of setting this up, and we have operated it very well already. That is what I would want to say.
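The context-enrichment signal described above, a live Aadhaar authentication from one location while a call places the same subscriber somewhere else, reduces to a simple correlation over concurrent events. This is a toy illustration under my own assumptions: the event shapes, field names, and the function are invented for the sketch, not any telco's or DoT's actual schema.

```python
# Toy illustration of the context-enrichment fraud signal: the telco holds
# two concurrent events for one subscriber (a live Aadhaar authentication
# and an active call) and surfaces a location mismatch to the consuming
# bank. Event shapes and names are invented for this sketch.
from typing import Optional, Tuple

def location_mismatch(events: list) -> Optional[Tuple[str, str]]:
    """Return (auth_location, call_location) when they disagree, else None."""
    auth = next((e for e in events if e["type"] == "aadhaar_auth"), None)
    call = next((e for e in events if e["type"] == "active_call"), None)
    if auth and call and auth["location"] != call["location"]:
        return auth["location"], call["location"]
    return None

signals = [
    {"type": "aadhaar_auth", "location": "Delhi"},   # verification live in Delhi
    {"type": "active_call", "location": "Mumbai"},   # same MSISDN on a call in Mumbai
]
print(location_mismatch(signals))
```

As the speaker notes, neither event is suspicious to the telco on its own; the value comes from exposing the joined context to the authenticating or transacting entity, which is what an open API can do.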

Debashish Chakraborty

By the way, your team is also working extensively with the GSMA team on the GSMA Open Gateway APIs; many of them have even been certified now, I can tell you that. And thanks for that perspective on the contextualisation of data; that is again a unique angle. Moving on to Deepak Maheshwari. Deepak represents CSEP, the Centre for Social and Economic Progress. Deepak, you have been attending and speaking at this conference for the last couple of days, and data sovereignty, I am sure, is a term you have encountered several times. So I want to ask you this: how should India define data sovereignty in an AI-driven DPI era, beyond just data localisation and control, and what does it mean for control over standards, decision-making systems, and long-term strategic autonomy?

Deepak Maheshwari

Thank you, GSMA, for having us here. When we look at this whole issue of digital sovereignty and data localisation, data localisation itself can be looked at in different ways. For example, it could be about just the physical location of the data; that's a pretty obvious one. The second is about data context, as Martin was just mentioning, in terms of what the local context is. A lot of people think about data localisation only in terms of local languages. But suppose you are checking the weather, and it shows you the weather in Hindi here in Delhi, but for New York; that would probably not be very useful. So you also need local context.

And then beyond all these things, this notion has arguably become much more important, though sovereignty itself is not such a new concept; people have been talking about sovereignty for a fairly long time. The lexicon has evolved, and it is not just about the data. Even in India, if you look at the previous versions of the data protection law, and at the previous reports which never became policy, such as the non-personal data framework, in all of those we had this notion that India's data should remain in India.

Another thing: in February 2019, seven years back, we had the draft e-commerce policy. The tagline of that, however, was "India's data for India's development." It was not about commerce; it was more about data. So from that perspective, when we look at today, and even when I was a member of MeitY's committee in 2018, when the government first set up a committee on AI, this whole question came up: what about data? Now, this is something we need to look at in three different ways. One: yes, there is some sort of data which India should have within its own physical as well as administrative control.

So obviously, things related to defence, national finances, et cetera, you would like to keep that way. Second, as far as citizens' data is concerned, some of that data, yes: the UIDAI database, the voter database, et cetera, obviously that type of thing. But there is another type of data for which citizens themselves may like to exercise their choice and their own agency, in terms of using that data not only in India but also outside India. For example, if I apply for a visa to another country, I will have to provide my data to that country; there is no way it can happen without that. And the third thing is the business aspect.

Now, in terms of businesses, on one hand we see in India, and we are very proud of it, that for the past three decades we have emerged as a global outsourcing hub. We are the global hub for data coming from all over the world to be processed here. But at the same time, if we try to create walls around us, saying India's data cannot go outside while expecting that outside data should continue to come in, I think there is a challenge in that. There is a dilemma, a dichotomy. Because these are walls, and walls are not valves: in fluid dynamics, going back to our school physics, valves are what allow one-way traffic, not walls.

But walls are two-way isolations. So that's another thing we should keep in mind. When we talk about digital sovereignty within the context of AI, yes, obviously there are things we want to keep here, and we should continue to do that. But there are also things where we need more collaboration. For example, one of the terms used was control. I would say the goal is not so much to control the standards as to contribute to those standards. So whether it is the GSMA or 3GPP, ISO, ITU, IEEE, et cetera, and so many other standards organisations, whether plurilateral or multistakeholder in whichever form, they all have mechanisms for people and countries to participate in decision-making.

So rather than controlling a standard, the endeavour should be to contribute to its making as a participant, and then evolve it. Obviously, when you are contributing and collaborating, you won't have everything your own way. There will inevitably be some give and take, because sovereignty by itself, in a globalised world, faces a challenge: the moment we talk about any international organisation, whether it is the UN, the WTO, the ITU, or an organisation like the GSMA, if we want to work there, we will have to give up something to get something. The important thing is how we create an institutional mechanism such that, whatever we are giving, we believe we are getting more than that.

So there should be some sort of incentives around that. And the last thing I want to mention: yes, we often talk about India's digital public infrastructure as a massive new digitalisation, but actually it is not so new; it is more than one and a half centuries old. The original telecom network that came here was in the telegraph era, and that was also in dots and dashes, so it was a binary world even at that time. And people may or may not believe it, but India got its first submarine cable in the same year that the US did, in 1858, just a few years after the first submarine cable came up between the UK and France.

India got its first telegraph law, the Indian Telegraph Act, in 1854. I have written a lot about this in a report, available online on the CSEP website if people are interested, using a 3C framework: carriage, content and conduct. What is more important in this world of AI is not just the carriage, which is of course fundamental, as I mentioned; without it you just won't be able to do anything. Content is what goes through it. But more importantly, in terms of

Debashish Chakraborty

Beautiful insights. Thanks for taking us back to the concept of walls. I'd like to come to Mansi now. Mansi, sitting here, represents the World Bank. Mansi, from the World Bank's experience, we are talking about standards and the DPI era. What are the risks you see when public digital infrastructure and private digital capabilities, which Martin spoke about briefly, are built in silos? And why are global standards essential for accelerating inclusive digital outcomes?

Mansi Kedia

Rahul spoke about a lot of this. Systems coming together help build trust, and having independent systems means there are more points of vulnerability in the system. So systems come together to build trust. Systems also have to come together for efficiency; I think that's the biggest economic argument behind a lot of what you were saying about why banks are coming together and why data is coming together. And the third thing, which was mentioned but not articulated, is innovation: mobile data is now becoming a source of data for lending. Why are we using it to understand credit risk and fraud risk, and not something else?

So there is innovation happening on something that was never understood to be for that purpose. So systems that operate in silos, whether it is the public sector or the private sector... sorry, the mic wasn't on; I have a loud voice, so I hope everyone was able to hear me. I think the risk of building systems in silos, whether in the public sector or the private sector, is essentially missing out on efficiency capabilities, innovation capabilities, and on building trusted ecosystems, which are really nothing but the foundations of digital public infrastructure. You used the word standards; I think the World Bank works more with the idea of blueprints.

We have been doing a lot of work on developing blueprints, which are slightly more flexible and adaptable, but bring together best practices from different countries and look at how they can be adapted to different contexts, something Deepak was saying in his initial remarks: you want systems that give you the operational ideas and principles but are not necessarily prescriptive about how to do things. When you have a standard, it is prescriptive, and that's how the networks run; for that, you need a standard. But when you're building systems, I think the World Bank approaches it more from a blueprint point of view.

So last year, the Bank came out with a digital public infrastructure and development report, where it articulated what it meant by digital public infrastructure: what its principles are, what the objectives are, what is DPI and what is not. And I think that's the way we are going to go ahead, even with AI, with AI commons and building common infrastructure, to determine pathways for the future that countries can adapt in their own ways. I'm just trying to distinguish between standards and blueprints here, because standards then get into ideas of commercialisation; there has to be a process around them, and there is a whole private-sector play.

Here there’s a private sector play and a public sector play, but the idea is to work more on the approach than on a particular way of running something.

Debashish Chakraborty

Rahul, let me bring the perspective back to data sovereignty. I'd like to ask you: as AI moves deeper into network operations, not just at the surface level, what does data sovereignty practically mean for an operator, in terms of data storage and control, edge processing, cloud reliance, and control of the AI models?

Rahul Vatts

Yeah, thank you. I think one of the biggest misconceptions we all have today is about what exactly sovereignty is. A lot of people say that if any hyperscaler cloud is housed in India, for example, then it becomes sovereign infrastructure for that country. I think nothing could be further from the truth than that statement. Why do I say that? If I have to define what is really sovereign for me, I will take at least three or four slices into it.

The first slice for me is: is the data residing in the country or not? And the answer to that may be yes, it may be residing in the country; that’s not a big deal, hyperscaler clouds do reside in the country. The second indicator for me is digital sovereignty over that data, and digital sovereignty for me means: is the control plane of that cloud within India or not? How are you really controlling that data and that cloud? And the answer is that not a single hyperscaler has its control plane in this country. That’s the fact. The third slice for me is operational sovereignty. Say you want to upgrade the network, put a patch on the network, put software into the network: where are you doing it from? The fact is you are not doing it locally; most likely you are doing it from outside. The fourth indicator for me, and a very important one, is jurisdictional sovereignty. Today, under the US CLOUD Act, for example, is it not true that if the US government so wants, it can demand data? Why should any other territorial power have control over my data? So while the answer on data sovereignty may be that the data is residing locally, the fact is the control plane will not be in this country, the fact is that even the patches will not come from within this country, and the fact is that we will be subject to jurisdictional controls.

So how are telcos getting aware of this? Only last week I read about DT, Deutsche Telekom, which just launched a sovereign cloud offering in Europe. Why did they launch it? And by the way, six months ago Airtel launched its own sovereign cloud offering. The answer for us was very simple: we were already managing the data of nearly 500 million people in our network, and we asked ourselves, where is the data housed?

We said, within our own networks. So we really have the capabilities to manage that complex data set. Then why is it that I cannot offer the same thing to my customers? And that’s why telcos are having a renewed interest in getting into the sovereign space. Why is it important? Let me be very selective about this. Do we need hyperscaler clouds in the country? I’m saying yes, we do, because if there are efficiencies of scale, if there are better products to be used, why not? But tell me, why should the KYC data of my customer be sitting outside with somebody? Why should the health records of citizens of this country be sitting outside this country?

Why should any critical data set relating to defense or security agencies be sitting outside this country? I think we have to get selective. We use the efficiencies of scale from the best party available to give that solution, but we should get selective about what data should reside, and remain under control, within this jurisdiction. I think that is an important part, and that is a discussion we need to have. If I go to the market today, there are a lot of players selling sovereign cloud, but really there is no sovereignty involved. AI rests on data, right? And we cannot take the right decisions on data if we cannot really control it in the proper sense.

Hence, we require dynamism in our regulations and policies, but we also require sovereignty to be practiced in a real sense for us to be able to do that. On Airtel Cloud, which we built, we do around 140 crore transactions per second; that’s the bandwidth we have built. It was very interesting the day the Prime Minister came to the Airtel stall. He asked, Rahul, what is the capacity of the thing you have created? And I told him, you tell me, sir, what is the capacity you want us to create. It’s really up to you; you have to guide us and say, we want these multiple use cases lined up for the country, and we are most happy to do that.

So I think we are in a very good place. We have got very robust infrastructure. And how do we now navigate this world of AI and provide a real opportunity and sense to our players within the ecosystem is what we are really looking forward to.

Debashish Chakraborty

You reminded me of a conversation we were having just a couple of days ago, when someone talking about data sovereignty said it’s so utopian to talk about data sovereignty, because if we slice and dice, we realize where the sovereignty actually lies. And you touched on that; thanks for that point. Martin, I’ll come back to you. This was actually meant for Ambika, but you have to deal with it. From Vodafone Idea’s regulatory lens, what are the biggest policy frictions emerging as networks become AI-driven platforms? If you see any regulatory challenges, how can these be met without slowing innovation?

Speaker 1:

So I’ll try and answer from two perspectives. We heard our Honourable PM mention AI being responsible and reasonable; the word he used, in multiple places, was reasonable. And reasonability brings multiple other contexts our way, one being explainability, another being accountability, and so on and so forth. Today, we as TSPs are governed under the ambit of the unified license, which is narrated by the DoT. In some of the examples that Rahul touched upon, that I touched upon, and that the World Bank team related back to, we can see that our portfolio has expanded beyond the conventional TSP governed under the unified license.

Looking at the expanded offerings we are taking to market, whether for monetization or not, thank God data privacy is at least enacted now. I am also the DPO for the firm, by virtue of which, when we touch upon this area called data localization, or what we would call data sovereignty, my personal view is that it is largely misinterpreted. The DPDP at least clarifies that data collected has to be defined with a purpose and put to that purpose. Now, although my base is as a TSP, since we fall under the ambit of a significant data fiduciary, most likely we will also be governed by the data privacy laws of the country. So there are regulations that are possibly governing us reasonably well.

So if I narrate this from the three or four broader perspectives of accountability and explainability: when we leverage AI, we would want the AI to come and explain. Now, is that covered under the ambit of the UL or under data privacy? Maybe not at all, right? So, and Mansi actually narrated it very well, we would want a referenceable standard coming our way that all of us can easily relate back to and apply. It could be a blueprint, it could be playbooks. Does such a framework exist in an easily adaptable manner? The larger entities like us will possibly be the first to invent the way through and make it a playbook.

That can then relate back to somebody who can turn it into a blueprint, make it a standard, and apply it back to the rest of the industry as a whole. So that’s the first and foremost. The role of a TSP is also changing today, right? From a conventional telecom provider, we are now, as in the previous example I highlighted, an intermediary providing additional data insight. Now, there is a law for digital intermediaries. The purpose for which a citizen has shared data with me is one thing; but if I have put that data to use beyond the purpose for which it was shared, say from a monetization standpoint, does the ambit of digital intermediary also apply to me?

I wouldn’t want to comment on whether my regulator should look at that and make it applicable to me as well, but those are the evolving spaces we are looking at. And the last, very famous topic floating around amongst telcos is spam and scam protection, right? So here, let’s look at it again from the Honourable PM’s perspective of reasonable AI. Most of us associate reasonable AI with explainability. Now, imagine we have deployed a scam solution that auto-blocks things, and we want that AI to explain why it blocked you. If it were to explain that, what am I looking at? I’m actually advancing the ability of the scamster to know why I am blocking him, so that he refines himself to not get blocked.

So that comes in the context of security. Do I make a framework, do I make a guideline, to say that here I would not want explainability, because security is a far more important element by comparison? So frameworks have to evolve. We need to have standards, but standards cannot be made universally applicable in every possible manner; standards are taken, applied as per individual enterprises and the context we have to put them to, and then made to work. So I look forward to regulators being innovative in allowing us to make choices appropriately, while regulations continue to evolve appropriately.

Debashish Chakraborty

Thanks, Martin. I’ll take this conversation slightly global with my next attempt. Deepak, how do you think India can leverage its DPI and telecom-led digital architecture to provide a credible, scalable model for the Global South, particularly for countries seeking digital sovereignty without technological isolation?

Deepak Maheshwari

Okay. So when somebody is offering a technical solution to someone else, it typically comes with certain intellectual property rights. For example, if somebody is using a particular technology, there could be patents, there could be copyright, et cetera. Now, when India is offering its DPI-led model, nothing of that sort is attached. Countries are able to adopt it: it’s a framework, it’s a philosophy, and it’s an open protocol. So they can adopt it and change it the way they wish. It is really open in that sense. That’s one very important difference compared to, let’s say, some other country or company offering a particular technology that also involves a certain type of monetization: this is what you continue to pay us if you are scaling it to, let’s say, 1 million population, this is what you pay for 10 million or 100 million, and so on.

India doesn’t ask for that type of thing. So that’s one very strong distinction. The second thing is the enablement. The enablement is happening not just by offering this as a kind of technical assistance; it is also happening through multiple other organizations. For example, we have the Research and Information System think tank under the Ministry of External Affairs, and another is the Indian Council of World Affairs. They are also doing a lot of work on developing the intellectual frameworks and capacity to do this as a matter of diplomacy itself. So that’s another dimension which is not often seen, but it is again a matter of soft diplomacy. For example, three years back, in ’23, at ICWA I had proposed a framework called EOSS, which was basically about taking India’s DPI global (you can of course create a different acronym, et cetera), with the focus more around interoperability, security, and so on.

The other aspect is standards. Mansi did distinguish between standards and systems, or blueprints, as she mentioned. But one very important document I would refer to is again from the World Bank: of course she mentioned the DPI report, but an even more recent document, which came out just a couple of months back, is the World Bank’s development report on standards. Look at traffic lights: the three colors, red, amber, green. The current traffic light standard came up only in 1968.

It’s not very old, okay? But it did happen, and it has become globally accepted, while the design allows variations: you can put it vertical, you can put it horizontal. So that is what a standard does. And in the way India is doing this, we are doing a lot of enablement across the Global South. In fact, I just published a policy brief called Global South’s AI Pivot, with CG of Canada, just last Friday. It talks about three things: equity, ethics, and ecology. So India is not only saying that AI should be reasonable, responsible, inclusive, and accessible.

It is also looking at things from an efficiency perspective. Efficiency is not just financial efficiency; here we are talking about resource efficiency: how do we manage these things with a minimum footprint of material, energy, water, things like that? And this again goes back to something the Prime Minister keeps talking about: LiFE, which is Lifestyle for Environment. Now this whole philosophy of…

Debashish Chakraborty

Thanks, Deepak. I’m conscious of time. Mansi, last one to you. You know, India’s approach to DPI, built on open, interoperable, and scalable digital rails, is increasingly influencing global conversations. How do you see India’s DPI model shaping digital development strategies across emerging economies?

Mansi Kedia

Thank you. I’ll keep it really short. I think at the Bank we started working on ID for development, G2P, and fast payments even before this whole big DPI push happened in India, and it particularly became more socialized through the G20 process, when many other actors, foundations, think tanks, technology companies, came in and started to socialize the idea of DPI and the DPI approach to digital transformation. India, surely for the vast experience, scale, and heterogeneity it has, offers excellent evidence on what works and what doesn’t. And it’s really great that a lot of the people who were part of the founding and building of the DPI have now gone ahead and tried to take it to other countries in ways that are adaptable to them.

And there are so many organizations, without taking names lest I miss out on other important ones, I don’t want to take that chance, but several organizations are doing a fabulous job of that. And the government itself, whether actively or indirectly, is also trying to talk to the world about how the DPI approach works. More actively, with UPI and NPCI, as Martin was mentioning, there’s active collaboration on making these fast-payment systems work, in collaboration with the BIS, to see whether we can actually realize the idea of the Finternet that came out of the BIS. So I don’t see this dying down. I think we have, like I said, a lot of evidence on the foundations as well as, now, on sectoral applications.

Particularly because this is a GSMA session about mobile, I don’t want to forget mentioning this really important part about how the Department of Telecom has begun to think about utilizing mobile data. While the telcos are thinking of it from a credit and fraud-management perspective, they are also thinking of it very actively in terms of planning and mobility, which I think is really fabulous. It’s not as if other countries haven’t done it, but the DPI approach they are taking towards it, to scale access to data, make models available, provide compute, and build that whole stack, is not something that has happened elsewhere.

And obviously it’s going to evolve. I don’t think it’s perfect, nor should we feel the pressure of making it perfect in one go, but these learning experiences will surely inform how other countries can do it. Some of these things we are trying to do at population scale. Yes, exactly.

Debashish Chakraborty

So, can I take just one question from the audience? I can see three hands already. How much time do we have? Time for one question. Gentleman, please state your name and to whom you want to address this question.

Audience

I am Vijay Agarwal. I am interested in AI; by profession, I am a manufacturer of jewelry. What I wanted to propose is: why don’t we have a product, a ring kind of product, where the private data, the KYC data, resides physically only on that item, which is on the body? If it leaves the body, it leaves in an encrypted form only, and it can only be collated with another key for the purpose for which consent has been given, with a blockchain record of it.

Debashish Chakraborty

You mean in the form of a jewelry?

Audience

Yeah, so we have an Aadhaar ring for every Indian, and it stores the KYC record and the medical record, which could be accessed in case of emergency; all these control layers you are talking about could be in the form of cryptography. And on the concept of data embassies, as part of the discussion on data sovereignty: is there a good case for India to offer data embassies? Obviously it would be on a multilateral basis, but any thoughts on that?

Deepak Maheshwari

I would say yes, if it is on a reciprocal basis.

Rahul Vatts

Let me try and address the first part of what you were saying. I think today the problem is not your data being insecure with Aadhaar; I think it’s very secure. There are a lot of things Aadhaar does, including the masking they have started, so leakage of private data is really not the issue here. The data going out takes various other forms, particularly the way the government takes data from users. It is the government that has to really start looking at this: for example, telcos are required to share subscriber data every month in physical copies. Why would you do that? So it is not really the digital aspect which is the problem; it is how you are managing the data that is the problem. And I think the quantum work has already started, sir; I think Aadhaar itself is working on that. On data embassies, I completely endorse Deepak: it cannot be just me, right? Look around, and let’s play it reciprocally. You cannot expect the world’s largest data creator and consumer to be the one to start offering this first; it is a two-way street. For too long, I think, as a country we have been in a sphere where we are supposed to give and not supposed to take anything. That has to change.

Debashish Chakraborty

The organizer is already standing on my head, so I just wanted to say one thing about the government taking data. Of course IRCTC doesn’t do it now, but till about 15 years back or so, if you were creating an IRCTC ID for the first time, it used to ask even your marital status; there were apparently no benefits or disadvantages, and it was a compulsory field, by the way. I would like to thank each of the speakers here for making it a very engaging conversation. Thank you Mansi, Rahul, Deepak, Martin, for your time and for this session. Thank you very much, audience. Thank you.

Julian Gorman

Evolution of telecom networks into AI‑enabled intelligent infrastructure

Explanation

Julian describes how mobile networks are shifting from simple connectivity providers to intelligent, programmable layers that actively participate in AI governance, decision‑making and security, reshaping how services are delivered at the edge.


Evidence

“Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure.” [1]. “As AI becomes embedded in these systems, the networks don’t sit back anymore in the background it becomes part of the decision‑making fabric providing context and priority for tokens or the critical elements of data which digital public infrastructure information is the predecessor of.” [5]. “For years, networks were viewed simply as connectivity providers and that view is changing.” [6].


Major discussion point

Evolution of telecom networks into AI‑enabled intelligent infrastructure


Topics

Artificial intelligence | Information and communication technologies for development


Speaker 1

Evolution of telecom networks into AI‑enabled intelligent infrastructure (contextual enrichment)

Explanation

Speaker 1 explains that telecom service providers now add rich, contextual information to data streams, turning the network into an active decision‑making fabric rather than a passive pipe.


Evidence

“So the context and enrichment of the context associated with the data, TSP today has the ability to provide a large amount of context‑driven information to these individual players whereby they can consume them for their own utilization and make active decisions.” [16]. “And with this enriched data, making a decision making for what do they want to.” [18]. “And this context is something that you can provide back to enrich the data.” [21].


Major discussion point

Evolution of telecom networks into AI‑enabled intelligent infrastructure


Topics

Artificial intelligence | Information and communication technologies for development


Trust and citizen‑centric services (contextual data for inclusive credit)

Explanation

Speaker 1 highlights that the enriched data supplied by telecoms enables banks and other institutions to make inclusive credit decisions, reinforcing trust in digital public services.


Evidence

“We have got solutions where banks can use the telco indicators to make a smart choice about giving you loans, right?” [50]. “So these are information that multiple of us as TSPs are able to provide, provide, collate and make it available.” [46]. “The role of a TSP in this, by and large, was to ensure that the goal of digital India and financial inclusion landed up reaching the masses.” [54].


Major discussion point

Trust and citizen‑centric services powered by telecom infrastructure


Topics

Data governance | The digital economy | Building confidence and security in the use of ICTs


Avoiding duplication and ensuring interoperability (open APIs)

Explanation

Speaker 1 stresses the need for open, shared APIs to avoid parallel DPI layers and to enable collaborative use of network‑derived intelligence across operators.


Evidence

“How do we ensure that the efforts which the MNOs, the mobile network operators are making adding layers, how do we ensure that there is no that these complement and not duplicate the operator‑led capabilities like Open Gateway APIs that GSMA has?” [57]. “But my question here is there’s also a growing discussion globally today about avoiding parallel digital infrastructure.” [58]. “Those are open APIs we are able to set up.” [62].


Major discussion point

Avoiding duplication and ensuring interoperability through open APIs and standards


Topics

Data governance | Internet governance | The enabling environment for digital development


Regulatory challenges for AI‑driven networks (explainability & accountability)

Explanation

Speaker 1 points out that AI‑enabled telecom services require explainable decisions, and regulators need flexible playbooks and standards to oversee accountability while allowing innovation.


Evidence

“Now imagine we have deployed scam solution which auto blocks things and we would want that AI to explain.” [44]. “So if I narrate this in three or four broader perspective of looking at accountability and explainability, when we leverage AI, we would want the AI to come and explain.” [114]. “It could be blueprint, it could be playbooks, it could be.” [115]. “Regulators will be innovative in allowing us to make the choices as appropriately while regulations can continue to evolve appropriately.” [118].


Major discussion point

Regulatory challenges for AI‑driven networks


Topics

Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs


Rahul Vatts

Evolution of telecom networks into AI‑enabled intelligent infrastructure (Airtel AI use‑cases)

Explanation

Rahul outlines how Airtel embeds AI for fraud detection, spam mitigation and OTP security, turning the network into an active trust layer that safeguards billions of daily transactions.


Evidence

“Airtel Cloud, which we made, we do around 140 crore transactions per second.” [26]. “there is a financial risk fraud indicator which the department has created banks can dip into that risk indicator and also get a score out of that… we first launch our solution which warned you about a suspected spam… we started blocking fraudulent links… we created a friction… flash message saying please be careful you are on a call you’re receiving otp this may be spam… reinforcement of the trust we want to create in the ecosystem.” [31].


Major discussion point

Evolution of telecom networks into AI‑enabled intelligent infrastructure


Topics

Artificial intelligence | Information and communication technologies for development


Trust and citizen‑centric services (UPI, OTP, Aadhaar)

Explanation

Rahul emphasizes that massive UPI volumes, OTP/SMS delivery and Aadhaar‑linked payments rely on the telecom layer as the foundational trust fabric for citizens.


Evidence

“What it is enabling is every transaction you do, there is an OTP or SMS which is coming out, right?” [28]. “If you look at the data of January alone, India transacted 28 lakh crores rupees of money through its UPI infrastructure.” [38]. “It’s a layer of trust that people are trusting the message which they are trying to get on their system.” [7].


Major discussion point

Trust and citizen‑centric services powered by telecom infrastructure


Topics

Data governance | The digital economy | Building confidence and security in the use of ICTs


India’s DPI model as a scalable template for the Global South

Explanation

Rahul describes Airtel’s sovereign‑cloud offering and the DPI‑Inbox solution that can be exported to other countries, showcasing a royalty‑free, high‑capacity model for emerging economies.


Evidence

“we are in conversation with a lot of African leaders to be able to transplant the India stack onto the African ecosystem… we are giving a bundle of hardware and a software… we are creating the entire ecosystem for them so that they are able to implement a digital public infrastructure stack in their countries.” [31]. “But at the same time, I read about DT… they just launched the sovereign cloud offering in Europe… six months ago Airtel launched its own sovereign cloud offering and the answer was very simple we were already managing data of nearly 500 million people…” [100].


Major discussion point

India’s DPI model as a scalable template for the Global South


Topics

Information and communication technologies for development | The digital economy | Data governance


Debashish Chakraborty

Avoiding duplication and ensuring interoperability (Open Gateway APIs)

Explanation

Debashish warns that parallel DPI layers risk fragmenting effort and calls for open‑gateway APIs and GSMA standards to keep operator‑led capabilities complementary rather than duplicated.


Evidence

“How do we ensure that the efforts which the MNOs, the mobile network operators are making adding layers, how do we ensure that there is no that these complement and not duplicate the operator‑led capabilities like Open Gateway APIs that GSMA has?” [57]. “But my question here is there’s also a growing discussion globally today about avoiding parallel digital infrastructure.” [58]. “Manasi, from World Bank’s experience, we are talking about standards and we are talking about the DPI era.” [61].


Major discussion point

Avoiding duplication and ensuring interoperability through open APIs and standards


Topics

Data governance | Internet governance | The enabling environment for digital development


Data sovereignty definition and practical aspects

Explanation

Debashish probes what data sovereignty means for operators in an AI‑driven DPI world, stressing the need for control over storage, edge processing, AI models and strategic autonomy.


Evidence

“So I’d like to ask you as AI moves deeper into network operations, right, not just at the surface level, what does data sovereignty practically mean for an operator in terms of data storage and control, edge processing, cloud reliance, control of the AI models?” [14]. “But how should India define data sovereignty without control over standards, decision‑making systems, and long‑term strategy?” [90]. “strategic autonomy.” [91].


Major discussion point

Data sovereignty in an AI‑driven DPI era


Topics

Data governance | Artificial intelligence | Human rights and the ethical dimensions of the information society


Deepak Maheshwari

Data sovereignty extends beyond localisation (strategic control)

Explanation

Deepak argues that sovereignty now includes control over standards, decision‑making systems and the strategic governance of AI‑enabled infrastructure, not merely where data is stored.


Evidence

“So, for example, whether it is GSMA or CGPP, ISO, ITU, IEEE, et cetera… they all have certain mechanisms of people and countries to participate in that decision making.” [87]. “So rather than controlling that standard, the effort should be, the endeavor should be about contributing to that standard making as a participant, as a contributor, and then evolve it.” [106]. “So when we’re talking about digital sovereignty within the context of AI, yes, obviously, there are things that we do want to have here and we should continue to do that.” [98].


Major discussion point

Data sovereignty in an AI‑driven DPI era


Topics

Data governance | Artificial intelligence | Human rights and the ethical dimensions of the information society


India’s DPI model as a scalable template for the Global South

Explanation

Deepak highlights that India’s open, royalty‑free DPI framework, backed by World Bank studies, offers a replicable blueprint for emerging economies seeking sovereign yet interoperable digital infrastructure.


Evidence

“And there’s an open protocol.” [67]. “…the focus was more around interoperability security etc there the other aspect is about standards… the World Standard Development Report on Standards…” [68].


Major discussion point

India’s DPI model as a scalable template for the Global South


Topics

Information and communication technologies for development | Data governance | The enabling environment for digital development


Emerging concepts of personal data embassies and decentralized storage

Explanation

Deepak supports reciprocal data‑embassy arrangements, suggesting that citizens could store personal data abroad under mutual agreements while retaining agency over cross‑border use.


Evidence

“For example, if I apply for a visa to another country, I will have to provide my data to that country.” [155]. “I would say yes if it is on reciprocal basis” [156]. “But there is other type of data for which citizens themselves may like to exercise their choice and may like to exercise their own agency in terms of using that data not only in India but also outside India.” [157].


Major discussion point

Emerging concepts of personal data embassies and decentralized storage


Topics

Data governance | Human rights and the ethical dimensions of the information society | Artificial intelligence


Mansi Kedia


Avoiding duplication and ensuring interoperability (global standards & blueprints)

Explanation

Mansi stresses that open, interoperable standards and flexible blueprints are essential to prevent fragmentation and to enable inclusive, scalable DPI outcomes worldwide.


Evidence

“It’s not as if other countries haven’t done it, but the DPI approach that they are taking towards it to scale the access to data, to make models available, to provide compute, and build that whole stack is not something that has happened.” [64]. “It need not necessarily become, I mean, I’m just trying to distinguish between standards and blueprints here, because standards then get into ideas of commercialization…” [78]. “We have been doing a lot of work on trying to develop blueprints, which are slightly more flexible, adaptable, but bring together best practices from different countries…” [81]. “I think the World Bank works more towards the ideas of blueprints.” [84].


Major discussion point

Avoiding duplication and ensuring interoperability through open APIs and standards


Topics

Data governance | Internet governance | The enabling environment for digital development


India’s DPI model as a scalable template for the Global South

Explanation

Mansi notes that India’s open, interoperable DPI rails have attracted global attention, with the World Bank promoting adaptable blueprints for other countries.


Evidence

“You know, India’s approach to the DPI built on open, interoperable and scalable digital rails is increasingly influencing the global conversations.” [71]. “and many other actors came across foundations, think tanks, technology companies, and started to socialize the idea of DPI and the DPI approach to digital transformation.” [72]. “And it’s really great that a lot of the people who were part of the foundation and building of the DPI have now gone ahead and tried to take this to other countries in a way that is adaptable to them.” [74]. “I think the World Bank is approaching it more from a blueprint point of view.” [89].


Major discussion point

India’s DPI model as a scalable template for the Global South


Topics

Information and communication technologies for development | Data governance | The digital economy


Audience


Emerging concepts of personal data embassies and decentralized storage

Explanation

An audience member proposes a wearable ring that stores encrypted KYC/medical data, accessible only with consent and recorded on blockchain, and raises the idea of reciprocal data‑embassy arrangements for cross‑border data use.


Evidence

“I am Vijay Agarwal, and I am interested in AI. By profession I am a manufacturer of jewelry. So what I wanted to propose was: why don’t we have a product, like a ring, where the privacy data, the KYC data, resides physically only on that item, which is on the body; and if it leaves the body, it leaves in an encrypted form only, and it can only be collated with another key for the purpose for which consent has been given, and there is a blockchain record to it.” [145]. “Yeah, so we have an Aadhaar ring for every Indian, and it will store the KYC record, the medical record, which could be accessed in case of emergency; and all these control layers that you are talking about could be in the form of cryptography.” [146]. “The concept of data embassies as part of the discussion on data sovereignty, so is there a good case for maybe India to offer data embassies?” [154].
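The questioner’s “leaves the body only in encrypted form, and can only be collated with another key for which consent has been given” idea can be sketched as two-share secret sharing. This is an illustrative sketch only, not production cryptography and not any real Aadhaar mechanism; the function names are hypothetical, and a one-time pad stands in for a real cipher.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def seal(record: bytes):
    """Encrypt a record with a one-time pad, then split the pad into two
    shares: one held on the wearable, one released only on consent."""
    pad = secrets.token_bytes(len(record))
    ciphertext = xor_bytes(record, pad)
    wearer_share = secrets.token_bytes(len(pad))
    consent_share = xor_bytes(pad, wearer_share)  # wearer ^ consent == pad
    return ciphertext, wearer_share, consent_share

def unseal(ciphertext: bytes, wearer_share: bytes, consent_share: bytes) -> bytes:
    # Only the combination of BOTH shares reconstructs the pad.
    pad = xor_bytes(wearer_share, consent_share)
    return xor_bytes(ciphertext, pad)

record = b"KYC record: blood group B+"
ciphertext, wearer_share, consent_share = seal(record)
assert unseal(ciphertext, wearer_share, consent_share) == record
```

Either share alone is statistically independent of the record, which is what gives the proposal its consent gate; the blockchain audit trail the questioner mentions would sit alongside this, logging each release of the consent share.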


Major discussion point

Emerging concepts of personal data embassies and decentralized storage


Topics

Data governance | Human rights and the ethical dimensions of the information society | Artificial intelligence


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Population-Scale Digital Public Infrastructure for AI

Session transcript

Nandan Nilekani

a bot which farmers use, and millions of farmers today, 2.5 million farmers, have downloaded this app. And this was built to make sure that farmers have access to the best information: access to prices, access to weather information and so on. And it’s very sophisticated. It took nine months to get this going in Maharashtra, but we learned a lot about how to do these things. And the next implementation was done in Ethiopia. So in Africa, Ethiopia did the same thing in three months. Essentially, what took us nine months the first time around took us three months. And recently, at the request of the Prime Minister, Amul implemented the whole thing. Amul implemented it for cows, a bot for dairy farmers to understand about their cows: whether they’re lactating, their milk and so on.

And that was done in three weeks. So we went from nine months to three months to three weeks. The message in that is: if you get the lived experience of implementing these kinds of systems for public good, you can dramatically reduce the time in which you do them. We call these ways of reaching the goal faster pathways, because once you have a pathway, somebody else can get to the same point quicker. And just like we had the notion of 50-in-5, 50 countries in five years, we are now setting an ambitious goal of 100 diffusion pathways by 2030.

In other words, by 2030, all of us together across the world will develop these pathways to diffuse the use of AI in a positive way: to help farmers, improve the lives of young kids, allow people to get jobs through something called Blue Dot. There are so many things going on, but all of them are designed to be effective, to improve people’s lives and meet aspirations in a very inclusive way, so that everybody is in and nobody is left out. And so we announced a partnership, a coalition for 100 diffusion pathways by 2030. We announced that yesterday or the day before. And we have a global coalition. Anthropic is there.

Google is there. Gates Foundation is there. UNDP is there. A whole host of people are there. And it’s a very open, big tent; anybody can join the coalition. But our goal is that all of us work together, in a very focused manner, to develop these pathways of diffusion for different kinds of positive AI use cases and then actually make them happen in countries around the world. So just like 50-in-5 was a DPI goal, 100 diffusion pathways by 2030 is the AI goal we have. And we are confident that all of us collectively can get there. So I think this is important. I think it’s strategic for the world that we show the good use of AI, and it’s strategic that all of us work together to do that.

Thank you very much.

Speaker 1:

Thank you so much, Mr. Nandan. At this point, I would love to invite our panelists up to the stage. We’ll start by taking a quick group photograph together and then begin the discussion. So let me invite Minister Esther Dweck, Mr. Trevor Mundel, Ms. Irina Ghose, and Mr. Shankar Maruwada, accompanied by Nandan, to be on the stage for a quick group photograph. Thank you. Let me now hand it over to Shankar Maruwada, who will moderate the next panel.

Shankar Maruwada

Good afternoon. We have an exciting panel discussion ahead. Let me start off with where Nandan stopped: a hundred pathways. What are these pathways? These are diffusion pathways to AI impact, safely and at scale. Let me provide a bit of background. France out-invented Britain in the First Industrial Revolution, yet Britain won it. Britain in turn out-invented the US in steel, and Germany out-invented the US in chemistry, yet it was the US that won the Second Industrial Revolution. What was the crucial thing? It was not better invention, or even innovation. The missing ingredient was diffusion, which the United States of America did much better: diffusing the benefits and the impact of the technology throughout the economy and society. When we say diffusion, we don’t mean awareness or access. Diffusion, as Nandan described, is the spread of know-how, trust and institutional capability that allows organizations to adopt AI safely and sustainably. As he explained, Maharashtra was the pioneer to do this in India. It’s like Sir Edmund Hillary climbing Mount Everest for the first time: he inspires, he creates a pathway for others to follow. And it would be rather stupid if, after he came back, he said, I am not sharing this with others; the pathway I created, I have removed it, so now you find your own pathway. The societies that create such pathways allow a whole lot of others to prosper, to make progress, to create impact inclusively and equitably. That is what Nandan meant by a hundred diffusion pathways: a hundred diffusion pathways across sectors, countries, continents. Some may be led by proprietary models, some may be led by sovereign efforts, some may not be; it may differ. It’s the choice of the AI adopter to decide which pathway works best for them.

So the diffusion infrastructure we are talking about creating isn’t a platform, an app or a model. It’s shared rails that compress learning curves, cost and risk, so that AI can be used by all of society, for all of humanity. With that, I would like to begin the panel discussion. Irina, from the model builder’s perspective, what needs to be true for AI to be deployable at population scale, not just impressive pilots, especially in high-stakes public systems? What needs to happen?

Irina Ghose

Thank you so much, Shankar. And absolutely a pleasure and honor to be here with all of you. Thank you so much. The way I think about it is: AI deployment would seldom, if ever, hit roadblocks because of complexity in the model or its performance. The only reason it fails to gain scale is the perception in our minds about its complexity. And one of the things we really feel is that you have to be all in, first yourself, then diffuse it to people around you to make it happen. Now, if you think about it, in a pilot you’ve got experts doing it, you’ve got guardrails, you’ve got the intensity of people, and you’ve got a select group.

Now, when that spreads out, you’ve got a teacher in Bihar implementing it, a health worker in Coimbatore, a small business leader in Indore, people who are not into ML. For them, AI will start having significance when it stops being a scientific tool and becomes something intuitive. So three things come into play. First, for diffusion, it needs to be contextual to the local language that you speak. Second, it needs to be in the workflow of what you’re doing every day, so you don’t need to do net new things. And third, you have to be iterative and be at it to make it happen.

And I’ll give you a small example of how diffusion is happening. First of all, Shankar, really honored to have worked with EkStep to make it diffuse across so many realms of life. And at Anthropic also we said that it’s not a technology for the sake of the technology, only in the hands of developers and builders. We found that India happens to be the second largest user base of Claude outside the US. So a big round of applause to all of us out here for making that happen. And what we also felt is that, when we are building tools, one of the tools you might have heard of is Cowork, which does what earlier used to be done a lot by developers.

But now it is for people who are information workers, or who are just thinking about how to solve things. The idea is that you do not have to develop code or read a lot of intense things; you can make the tool do the work for you. So in my mind, diffusion really means: first, everything that I do, I have to be AI-first. Second, in the ecosystem around me in India, I enthuse everybody. And third, how am I giving back to everybody at the last mile to make it happen?

Shankar Maruwada

Fantastic. One of the things I liked about what Anthropic CEO Dario Amodei said: very soon, imagine a country with a whole bunch of geniuses living in data centers. What will that country do? Think about it. But till we reach there, and Dario says in two or three years, Trevor, as president of global health at the Gates Foundation, you are dealing with a situation where you’ve seen a whole bunch of AI pilots, and not too many of them have scaled. From your experience, what separates pilots from systems that have scaled and become institutional? What separates an experiment from scaled, institutional, sustainable impact?

Trevor Mundel

Thank you, Shankar. And thank you for the invitation to be on this good panel, and also for the overview you gave me a few days ago of the very good work you’re doing at EkStep. I learned about Open AgriNet and where that has made progress. But on this issue of scaling of AI, I had an opportunity this morning to sit down with the heads of entities which we call scaling hubs. There are two of them here in India, and there are three, soon to be four, in Africa. And there’s also a pan-African venture called Smart Africa. And you might say, well, what are these scaling hubs? The idea is that we would support a partnership with the governments, now in Rwanda, Nigeria, Senegal, and soon Kenya, wherein we place funding that the government can use to take the pilots that are out there and really push them to large scale.

And why would we need a hub like this to do that? Well, one of the big barriers we are currently seeing is the fragmentation occurring out there, in terms of many, many ventures, some that we fund, some by other funders, everything with very good intent: let’s do a small pilot, let’s quickly do something over here. Thousands of them occurring out there. Take it at a government level: they have people approaching the Ministry of Agriculture, the Ministry of Education, the Ministry of Health, the Ministry of Finance, all of them with different groups, and on the DPI front, all of them trying to put in place the necessary DPI infrastructure to support their pilots. And it is this fragmentation that I think is a big inhibitor of scaling to the real population scale that we need.

So we are going to invest in these hubs that can be points of aggregation. We don’t want to inhibit diffusion. People have the idea of diffusion as a more random process which goes anywhere, and there’s something good about that. But if we can channel the diffusion into these centers of excellence, I think at the country level, the feedback that we’ve had from the governments is that that is a way that we are really going to get to scale more rapidly. Thank you.

Shankar Maruwada

Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathways. Diffusion by definition is everywhere, right? Pathways by definition are fixed. So it’s how you spread a technology along certain fixed pathways towards certain impact. It is indeed a stress, and I believe that stress needs to be there, because we are talking of the stress of safe AI impact at scale. But it is indeed a challenge, and together we have to solve it very quickly. I want to talk a bit about Minister Esther Dweck’s ministry, MGI, or the Ministry of Management and Innovation. Isn’t that a cool concept? The government of Brazil has a minister and a ministry looking after the idea of innovation and management.

They are collaborating very closely with India on a range of issues, and it’s my honor, Your Excellency, to have you here. Minister, I want to ask you a question. Scale efforts, diffusion, a lot of times fail inside government, not because of technology, but because of procurement, process change and accountability. What has to change inside the state for AI to move from pilots to durable public services?

Esther Dweck

Thank you, Shankar. Thank you for inviting me and also for the partnership that we have with India. And Brazil is looking for this partnership with India because of scale. If anything can be scaled up in India, it can be in Brazil because compared to India, we are not such a big country. But compared to many other countries, very large. So for us, very important, this partnership. But when you talk about the problem inside the state, our ministry was created. The whole name is Ministry of Management and Innovation in Public Service. So we are focusing on innovation inside the public services. And we created a special secretary for state transformation because we saw that the state had to be transformed in order to actually be able to have innovation.

Because if we stay with the same way of doing procurement, we actually won’t be able to do it. So we think that, in terms of AI, we need to transform the state in three main areas. The first one is procurement, for sure; any kind of innovation requires procurement to change. Then the infrastructure, especially the digital infrastructure, and of course the governance. And when I talk about the procurement process: usually people are looking for the lowest price, the lowest risk, and civil servants are very afraid of doing procurement because the auditing bodies are checking whether they are doing something wrong. So they usually try to go for the lowest risk possible.

And this is what prevents innovation inside the government, especially because innovation comes with errors. We know that any innovation might lead to errors, and if the civil servant cannot make any mistakes, then we never innovate. So one of the things we found out when asking how to do innovation procurement in government is that the first thing people say is: I’m afraid of making any mistakes, the auditing body will come after me, and then I won’t be able to be a civil servant. So what we have done is change the mindset of the procurement process. Instead of being process-oriented, we are becoming more policy-oriented, looking at the outcomes and not only at the lowest price.

And with many other ministries, we are discussing how to actually build that culture of innovation procurement, with the idea that it may fail. And you can also interact with the one you’re buying from, because, of course, you’re buying something that doesn’t exist; how do you explain to them what you need? So there are a lot of things you have to change in procurement in order to actually be able to do AI. And, of course, the second thing is the digital infrastructure. As Nandan said before: since 2023, when we came here for the G20 in India, we brought this idea of DPI to Brazil as something very strong.

And we already knew that we had something that could be called DPI, but we didn’t know the concept before. One of the things that was very important for us was our digital ID and our digital platform for services, both called gov.br. Based on this platform, we are now discussing optimizing, but also offering more personalized services. If you know the citizen, you can provide them specialized services, and we’re using AI to do this: to work out how to specialize services, what people actually need. So I think having a good DPI infrastructure, especially in terms of identification, and of course being able to have better data governance, is key.

The third thing I would like to mention is the governance inside the state. When we launched our plan for AI, and this morning we had a session on the Brazilian AI plan, the first thing the president said is that we need our database, the Brazilian database. We cannot have silos anymore; we cannot have a minister saying, no, this is my data, no one can access it. And we have to do it, of course, in a way that preserves privacy and security. So we discussed all of the data governance, and we’re about to launch a new decree on data governance, requiring every ministry to have a chief data officer, someone who actually knows the data and how to use it.

So we are actually looking at all these things in order for the state to be able to innovate with AI. That’s it. Thank you.

Shankar Maruwada

Wonderful. Thank you. Irina, you’ve been in the IT space for three decades. You’ve seen the Internet boom and bust, and now you’re seeing AI. From your vast experience, what is the most common failure mode when AI moves from pilots to everyday workflows? And what kind of safety infrastructure actually prevents it?

Irina Ghose

Yeah, I think one of the things we have to remember is that the failure never happens with a big bang; it just slowly dies, because people gradually reduce the level of interaction they have with it, and you suddenly realize it’s not relevant anymore. So what really needs to happen is that you keep it in a way that people use it daily, and use it in a way that is contextual for each of them. For example, one reason it might fail is that the data sets speak to a country of a different nature, setting benchmarks in banking and financial systems, when here agriculture is the biggest thing that we require. Hence collecting data for Indian languages, and nuancing it by, say, legal, by agriculture, by what people are speaking in that dialect, in that language, is very critical. So if I look at three things that need to happen: first, keep it contextual to the domain, the micro-domain, in which it is required. At Anthropic we have worked closely to ensure that we now have Indic language availability for 10 Indian languages, from Hindi to Malayalam to Gujarati to Urdu; it’s available in the latest models and is incrementally improving day by day. And the last part I would say is ensuring that, whatever you are doing, the ROI we look at should be: if I invest in a language, say Bengali, how many net new use cases have been opened up because of that, and how many more people have got the benefit of it? And I think the work we are doing with EkStep, across the fields deployed, education, healthcare, everything, that’s the litmus test we should be measuring ourselves on.

Shankar Maruwada

I want to ask a question to the audience, by raising hands: how many of you use UPI? Keep your hands up if you know how UPI works, what the protocol behind it is, what the technology behind it is. Hands are steadily coming down. This is my point: we don’t care about technology as long as it works. For something to work at population scale, technology has to be boring; technology has to be invisible. Till the time it has not diffused, it is just some magic mystery thing that we are all stuck with, figuring out what to do. It’s a long journey from technology as magic to technology as normal, boring. In fact, a wise old man once told me: when you stop thinking of something as technology, that’s when it has diffused. Five hundred years ago, this was magical ocular technology.

It allowed someone to see. Now we don’t think of it as technology. A day will come when we don’t think of AI as technology. That is the day we can say that AI has diffused through all of society. We have some way to go for that. Trevor, when you hear of things like Open AgriNet, some exciting work happening, what makes you think that this feels like infrastructure, versus yet another project that is going down the path of pilotitis, death by pilots?

Trevor Mundel

Well, I do look a little bit with envy at Open AgriNet. Having looked across the work that the foundation does in agriculture and in health, traditionally the narrative has been how fortunate those health folks are, because there’s such huge funding into the health areas, such huge investment in research, in genomics, in human health, and much less in plant genomics, which admittedly is potentially more complex; likewise in the clinical trial infrastructures for developing new products on the human health side versus the agriculture side. But now we come to AI, and I have to say, I look at Open AgriNet and I think that the agriculture community is ahead of human health in terms of implementing a system which is personally useful to a farmer, a smallholder farmer, for instance: being able to get the information they need, being able to determine what crop disease they have to deal with, or a disease in their cattle, and what the weather is going to be, and how they can maximize the finances of their small farm.

All of these types of things I would love to see in the health space: a personal health assistant. In low- and middle-income countries, so many people are not very close to a tertiary hospital, and they may be 10, 20 miles even from a primary health care clinic. Can we not provide them with a system that can personally give them the information they need in a safe way? And I think Open AgriNet really puts those components of infrastructure together. The way that it’s modular, the way that you can adapt it to the local circumstances, it’s in many ways exactly what we need on that personal health side of the picture. So I have some envy, but I hope we can duplicate it on the health side.

Thank you.

Shankar Maruwada

Thank you, Trevor. Open AgriNet is just a group of organizations coming together, collaborating, as Trevor said, each bringing in one piece of the puzzle, so that together we can create those diffusion pathways. And as Nandan said, that is what allows us to take something from Maharashtra, which took nine months, to Ethiopia in three months, and back to India in three weeks; from agriculture to livestock, from India to Ethiopia, from Asia to Africa and back. That is the exciting possibility that India has been on the journey of for the last 15 years, what we call DPI. The thing about DPI is that when you start with a strong use case in mind, as Irina and others have said, you harness technology, so technology becomes a good slave to a very powerful cause.

Then you take advantage of rapidly evolving technology. Minister Dweck, if you designed a national diffusion pathway for one public service, what would you prioritize first, institutions, incentives, data readiness or governance?

Esther Dweck

Well, it’s difficult to choose only one thing, I guess. From a management perspective, you’re always looking for some kind of systemic approach, trying to look at all these things together. And actually, we recently launched an R&D program for AI in Brazil. It’s called INSPIRE in English; in Portuguese the word means “breathe in”, but it’s the same acronym: AI for Public Service with Innovation, Responsibility, and Ethics. And it has this systemic approach inside it. The first thing is that we created a new institutional arrangement: in this R&D project we have the government, of course, some state-owned companies, some private companies, and our innovation ecosystem in Brazil, all brought together in order to help the government build new AI platforms.

Because although we’re already using AI in Brazil, we saw that we have a real lack of technological expertise and a lack of financial support as well. So we’re trying to create this platform where we can offer many bodies of the government different solutions that can be used in many different areas, as you said. So, first, we are discussing having more sovereignty over the data and how to use it better, but also making the data ready to be used. As I was explaining before, we are using AI to help improve our data sets, so it’s going both ways.

Another thing, also from the governance perspective: we’re creating, as I mentioned, these shared tools and common practices. Specifically in this project, we’re creating a generative AI platform and trying to apply it to different solutions. So recently, at the end of last year, we had the university enrollment exam for people finishing high school. We created a complete tool for them to know, when they’re finishing school, what they’re going to do. Are they going to the job market? Are they going to enroll in university? How to apply? What’s the best thing for them? So we’re using AI to help them actually decide this. And we’re doing the same thing for health care and for the agriculture sector as well.

So we’re looking at all these things. And, of course, capacity building: we are doing a lot of training of civil servants. We have four trails, actually: for the top managers, for IT experts, for people controlling data, and for regular civil servants. Because when we’re talking about state transformation, the one thing you have to train and to change, of course, is the civil servants. Nowadays they have to have a digital mind, and some of them have been there for many years and didn’t have digital capabilities. So we’re training all of them in digital capabilities, and specifically in AI as well, in order to think about how to use this new technology in their regular work to improve civil service.

So I think it’s a more systemic approach there.

Shankar Maruwada

Pathways are like digital rails. What should model developers focus on so that AI can plug into these pathways safely across sectors and countries?

Irina Ghose

Very interesting. And I’ll just try to paint the picture by giving some context. Now, think about it: we’re talking a lot about agriculture, and it has the last mile. If you were to solve for that farmer day in and day out, there are various kinds of work they have to do. Look at the weather conditions: one source of data. Look at how the crop yield is performing: another source of data. The market prices: another source of data. Whatever has to be done for reaping and sowing. So if anybody wants to infuse AI on top of these kinds of data, and you have to build it anew every time, it is so cumbersome.

Now, if you kind of do the same thing that, Nandan, you’ve been talking about, at one point of time, all of us are different. We’re different. We’re different. We’re different. We’re different. We’re different. We’re different. We’re different. universal adapter came, it took it away. We all use UPI for digital payments. Do we know anything to do with the technology behind it? Whether it’s earned, whatever is coming across as the small micropayment, we have no idea. So one of the things here to be done is have a universal language which accesses the tools as well as the data. So we came out with this concept in Anthropic in 2024 called the model context protocol. And very simplistically put, I think of MCP as to AI was say what UPI was to payments.

And in effect, what it really does is this: you develop things once and make them MCP-ready, and for anything else you want to do further, you do not have to keep writing it again and again. So all the cases of agriculture, healthcare, and anything else put together can happen seamlessly. Why does it matter for India? There’s a lot of data which already exists in health, in education, in the various ways that citizen services are delivered, and that is a rich level of data. So if we make this data AI-ready and use the tools that are out there, then diffusion, and that accountability of everybody coming together, will be that much quicker.
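Since MCP is invoked here as the "universal language," a minimal sketch may help fix the idea. MCP frames its messages as JSON-RPC 2.0, and a client invokes a server-side tool with a `tools/call` request. The tool name and arguments below (a hypothetical weather lookup for a farmer) are illustrative, not part of the protocol itself:

```python
import json

# Illustrative MCP-style request. MCP uses JSON-RPC 2.0 framing, and
# "tools/call" is the method a client sends to invoke a server's tool.
# The tool name and arguments here are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"district": "Nashik", "crop": "grape"},
    },
}

# Serialize for the wire, then decode as a server would.
wire = json.dumps(request)
decoded = json.loads(wire)

# A server exposing "get_weather" behind this protocol can be reused by
# any MCP-aware client: the "develop once, plug in anywhere" point.
print(decoded["method"])          # tools/call
print(decoded["params"]["name"])  # get_weather
```

An agriculture app, a health app, or an education app could all send this same message shape to the same server, which is the reuse being described.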

Shankar Maruwada

Excellent. A lot of people who deploy AI have an old notion that it’s like normal software: you buy great software, it is perfected, you deploy it, and you can close the project and go away. In AI, that is just the start, because as you use it, data comes in. The data gets better, the models get better; with better models you provide better services; usage increases; more usage, more data. While this cycle is happening, the models improve and the data improves. So for a lot of adopters, once they go beyond procurement, how do you continuously invest to upgrade and evolve? That’s again a very important question. So when we talk of 100 diffusion pathways, these are 100 diffusion pathways to safe AI impact at scale, which creates a second stress, and I’ll come to you on that, Trevor.

When lives are at stake, where do you draw the line between speed, that is, 100 pathways by 2030, and safety? And coming from health, safety means literally lives, right?

Trevor Mundeli

Yes, Shankar, and there are a lot of lives at stake, and I feel the urgency. Every year we don’t have the next generation of malaria vaccines, we see hundreds of thousands of young children dying. Every year we don’t have a personalized education coach for every child, no matter where they are, we see a tremendous amount of human potential wasted. So there is this urgency to get things done, and that is exactly why one has to think very carefully on the safety front. It is that safety issue where people in the health area are saying: we need to take a step back, we need to look carefully at the frameworks before we just jump in with an application like the self-application I talked about. How would that be gated, how would that be guarded?

I do think that, because of the excellence of the DPI stack here in India and because of the thousands of application efforts I see, you are going to probe those frameworks for safe introduction, probably first in a context which is, as Nandan was mentioning, the frugal innovation that will be relevant across lower-middle-income countries and actually beyond. So I do think we are very much looking at India as the foundry of AI application, and we want to see those frameworks whereby we can safely introduce the technology. In terms of the technology itself, just having a type of black-box system that gives a health recommendation is almost never adequate, almost never satisfactory.

These systems need to be auditable. And I have to say that Anthropic has made quite a lot of progress in their research on how these concepts, how these recommendations, are actually represented in the model. People want to be able to audit that. They don’t just want something that comes out of nowhere. If you have a human clinician who makes an error, you can talk to that person. You can say: why did you think this was the case when you made a misdiagnosis here? Was it because you didn’t ask the right question of the patient, or you transcribed incorrectly? And that is the kind of transparency that we actually demand of AI systems at the end of the day.
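The auditability Trevor asks for can start with something very simple: an append-only record of every recommendation, with each entry hash-chained to the previous one so that tampering with any past record is detectable. This is a generic sketch, not any particular vendor's mechanism; the field names are illustrative:

```python
import hashlib
import json

def record_decision(log, model_version, inputs, recommendation):
    """Append a tamper-evident record of one AI recommendation.

    Each entry embeds the hash of the previous entry, so editing any
    past record invalidates every hash that follows it.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "model_version": model_version,
        "inputs": inputs,
        "recommendation": recommendation,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; return False if any record was altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
record_decision(log, "triage-v2", {"symptoms": ["fever", "chills"]},
                "refer for malaria test")
print(verify_chain(log))  # True
```

This only makes the trail trustworthy; the interpretability research Trevor mentions is what would make the recorded recommendation itself explainable.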

So I think that, between the work going on here in India and some of that transparency research, we can get there.

Shankar Maruwada

Thank you, Trevor. Minister Dweck, as you’re thinking of implementing AI solutions at scale, what is the hardest political or economic challenge, and what are some tips on how one should deal with it?

Esther Dweck

Okay. I think it’s kind of a political economy issue now in Brazil. Of course, one thing is the workforce problem, because we may be heading toward this utopia where no human needs to work anymore and the machines work for us. So how do we actually create, and how do we divide, the wealth that comes from these machines working? That’s one point. But more concerning in our current period in Brazil is digital sovereignty. Of course, very few countries, maybe only two countries in the world, are totally digitally sovereign right now. But I think we have to increase our digital sovereignty in terms of being able to

have our services and be able to operate them, to know where our data is, and to know how we will be able to continue providing our services to our populations. So we are discussing a lot of this in Brazil: how to increase our level of digital sovereignty. Of course, we know we will probably not be totally digitally sovereign in a few years, but at least we want to increase it. And we’re actually working with our suppliers in order for them to offer us more sovereignty, or at least some security that we will not have any discontinuity. So I think using state capacity and the state’s procurement purchasing power is very important to do this.

And we’re actually using it when we talk to our suppliers. We discuss this sovereignty at three levels. The first is the data level: for this, we’re bringing the data back to Brazil. As I mentioned before, we have two federal state-owned companies that host resident clouds, so we know where the data is; but only knowing where the data is, is not enough, so we are increasing our operational access to the data. And the third level is the technology you’re using, something that we’ve been discussing a lot here. It’s not directly related to AI, but it’s related to digital services. One thing that we’re doing together here in India, using a technology that was developed here, verifiable credentials, was very important for us. We are using it right now in two pilot projects, but we want to scale it up.

One is related to rural credit, and the second is related to something that I think the whole world is discussing: how to protect children online. In Brazil we passed a law last year, a very important law. It passed very quickly, after one of the digital influencers showed what was happening to children on the Internet, especially on social media. The bill says that by 17 March you have to know the age of the person who is accessing the Internet. So how do you do this in a way that protects privacy? We do not actually want to know what people are browsing. So a lot of things are being discussed, and we’re trying to use these verifiable credentials for age verification in a very simple way, very easy for people, so that people are not afraid the government is actually watching the Internet.
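In the spirit of the verifiable-credential approach the Minister describes, the privacy trick is that a trusted issuer attests only to the bare claim ("over 18"), so a website can check the attestation without ever seeing a birthdate or any browsing activity. A toy sketch, using an HMAC in place of the public-key signatures a real credential system would use, with illustrative names throughout:

```python
import hmac
import hashlib

# In a real deployment the issuer uses public-key signatures, so
# verifiers never hold a signing key; a shared-key HMAC keeps this
# sketch self-contained and runnable.
ISSUER_KEY = b"issuer-demo-secret"

def issue_credential(subject_id: str, over_18: bool) -> dict:
    """Issuer attests only to the claim, never to the birthdate."""
    claim = f"{subject_id}|over_18={over_18}"
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify_credential(cred: dict) -> bool:
    """The website learns only 'over 18: yes/no', nothing else."""
    expected = hmac.new(ISSUER_KEY, cred["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["signature"])

cred = issue_credential("user-123", over_18=True)
print(verify_credential(cred))  # True
```

The design point is data minimization: the site never receives a document or a date of birth, only a signed yes/no claim it can verify.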

So I think this is the way to make things that are actually useful and important to protect our citizens but also to provide them with very good services.

Shankar Maruwada

Thank you. Today’s topic was building, publishing and scaling digital public infrastructure for AI. By 2030, when we will have made a lot of progress on that, we will stop calling DPI digital public infrastructure and start calling it digital public intelligence. With that, a big thank you to all my panelists and to the audience. Thank you.

Irina Ghose

Thank you. Shankar, if I can just request you to present a token of appreciation to the panel. Thank you. Now the next session is about to start on a very unique topic, AI for Democracy. So we request all the audience here to remain seated. A very wonderful topic, AI for Democracy, and we are very blessed that today we have with us Honorable Chief Guest, Mr. Om Birlaji, Speaker of Parliament of India, Mr. Martin Chongungji, Secretary General, IPU, Mr. Laszlo Z, Deputy Speaker, Parliament of Hungary, Dr. Chinmay Pandya from All World Gayatri Parivar, Ms. Jimena.

Nandan Nilekani

Goal of 100 diffusion pathways by 2030

Explanation

Nandan sets an ambitious target of creating one hundred diffusion pathways by 2030 to accelerate the spread of positive AI use cases worldwide. He frames this as the AI equivalent of earlier DPI goals and calls for collective effort across countries.


Evidence

“So just like 50-in-5 was a DPI goal, 100 diffusion pathways by 2030 is the AI goal we have.” [1]. “And just like we had this notion that we’ll have 50 in five, 50 countries in five years, we are also now setting an ambitious goal for doing 100 diffusion pathways by 2030.” [4]. “But our goal is all of us work together to very, in a focused manner, develop these pathways of diffusion of different kinds of positive AI use cases and then actually make it happen in countries around the world.” [6]. “In other words, by 2030, all of us together across the world will develop these pathways to diffuse the use of AI in a positive way to help farmers, improve the life of young kids, allow people to get jobs through something called Blue Dot.” [7].


Major discussion point

Diffusion pathways as a strategic framework for AI impact


Topics

Artificial intelligence | Information and communication technologies for development


Public‑private coalition to develop pathways

Explanation

Nandan announces a global coalition that brings together governments, foundations, and tech companies to co‑create and scale the diffusion pathways. The coalition includes Google, the Gates Foundation and UNDP, and is open for any organization to join.


Evidence

“Gates Foundation is there.” [25]. “And we have a global coalition.” [26]. “Google is there.” [27]. “UNDP is there.” [31]. “Anybody can join the coalition.” [29].


Major discussion point

Diffusion pathways as a strategic framework for AI impact


Topics

Financial mechanisms | Artificial intelligence | Enabling environment for digital development


Shankar Maruwada

Pathways as shared digital rails

Explanation

Shankar describes diffusion pathways as fixed, shared “digital rails” that compress learning curves, reduce cost and risk, enabling rapid replication of AI impact across sectors and countries.


Evidence

“These are diffusion pathways to AI impact safely and at scale.” [3]. “It’s shared rails that compress learning curves, cost and risk.” [16]. “Pathways are like digital rails.” [17]. “And we call these ways of reaching the goal faster, we call them as pathways, because once you have a pathway, then you can get, somebody else can get to the same point quicker.” [19].


Major discussion point

Diffusion pathways as a strategic framework for AI impact


Topics

Artificial intelligence | Enabling environment for digital development


Irina Ghose

AI must be contextual to local language and workflow

Explanation

Irina stresses that for diffusion to succeed AI solutions need to speak the local language and fit naturally into users’ everyday workflows, avoiding the need for new processes.


Evidence

“The first one is that for diffusion, it needs to be contextual to the local language that you speak.” [33]. “Second, it needs to be in the workflow of what you’re doing every day and you don’t need to do net new things.” [34].


Major discussion point

Preconditions for AI deployment at population scale


Topics

Artificial intelligence | Closing all digital divides | Capacity development


Model Context Protocol (MCP) as universal plug‑in

Explanation

Irina proposes a Model Context Protocol (MCP) that would let AI models plug into any diffusion pathway the way UPI standardized digital payments, making AI deployment modular and reusable.


Evidence

“And very simplistically put, I think MCP is to AI what UPI was to payments.” [45]. “We all use UPI for digital payments.” [46]. “This concept came out of Anthropic in 2024, called the Model Context Protocol.” [47]. “And in effect, what it really does is you develop things once and you make it MCP ready.” [49].


Major discussion point

Preconditions for AI deployment at population scale


Topics

Artificial intelligence | Capacity development


AI‑first mindset and ecosystem enthusiasm

Explanation

Irina argues that a cultural shift toward an AI‑first mindset, coupled with broad ecosystem enthusiasm, is essential for diffusion to take hold at scale.


Evidence

“So in my mind, diffusion really means, first, how do I think that everything that I do, I have to be AI first.” [13]. “Second, the ecosystem being in India around myself, I enthuse everybody.” [54].


Major discussion point

Preconditions for AI deployment at population scale


Topics

Artificial intelligence | Capacity development


Low‑code “co‑work” tools for non‑technical users

Explanation

Irina highlights low‑code tools such as “co‑work” that let information workers leverage AI without writing code, expanding the user base and speeding diffusion.


Evidence

“And what we also felt is that when we are building tools, one of the tools you might have heard is co-work, which earlier used to be done a lot by developers.” [119]. “The idea is that you do not have to develop code, read a lot of intense things.” [120]. “But now, people who are information workers or who are just thinking as to how to solve things.” [121]. “You can make the tool work for itself.” [122].


Major discussion point

Enabling non‑technical users through low‑code tools


Topics

Capacity development | Artificial intelligence


Gradual failure due to loss of relevance

Explanation

Irina notes that AI projects often decay slowly when they cease to be relevant to daily tasks, emphasizing the need for continuous contextual use and clear ROI metrics.


Evidence

“yeah I think one of the things that we have to remember is that the failure never happens with a big bang it just slowly dies because people just stop reducing the level of interaction they have gradually and you suddenly realize that it’s not relevant anymore so what really needs to happen that you need to keep it in a way that people use it daily and use it in the way that is contextual for each of them.” [99].


Major discussion point

Common failure modes and safety considerations


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Trevor Mundeli

Scaling hubs to aggregate pilots

Explanation

Trevor proposes government‑backed scaling hubs in India and Africa that will aggregate pilots, provide funding, and act as points of excellence to accelerate large‑scale diffusion.


Evidence

“So the idea is that we would support a partnership with the governments now in Rwanda, Nigeria, Senegal, and soon to be Kenya, wherein we place funding that the government can use to take the pilots that are out there and to really push them to large scale.” [57]. “But if we can channel the diffusion into these centers of excellence, I think at the country level, the feedback that we’ve had from the governments is that that is a way that we are really going to get to scale more rapidly.” [58]. “So we are going to invest in these hubs that can be points of aggregation.” [60]. “There are two of them here in India, and there are three, soon to be four, in Africa.” [64].


Major discussion point

Tackling fragmentation through scaling hubs


Topics

Artificial intelligence | Enabling environment for digital development


Safety and auditability in high‑stake AI applications

Explanation

Trevor stresses that AI systems, especially in health, must be auditable and governed by robust safety frameworks before deployment, even if that slows rollout.


Evidence

“These systems need to be auditable.” [51]. “it is that safety issue where people are in the health area saying we need to take a step back, we need to look carefully at the frameworks before we just jump in with like the application I talked about, the self-application, how would that be gated, how would that be guarded.” [105]. “and we want to see those frameworks whereby we can safely introduce the technology.” [106].


Major discussion point

Common failure modes and safety considerations


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Esther Dweck

Shift procurement to outcome‑oriented, policy‑driven model

Explanation

Esther argues that government procurement must move away from lowest‑price, lowest‑risk criteria toward a policy‑oriented, outcome‑focused approach that tolerates controlled failure and fosters innovation.


Evidence

“Instead of more process -oriented, we are looking for a more policy -oriented and looking at the outcomes and not only the lowest price thing.” [66]. “So what have we done is to change the mindset of the procurement process.” [67]. “And any kind of innovation procurement needs to be changed.” [68]. “And when I talk about the procurement process, usually people are looking for the lowest price, lowest risk, and usually civil servants are very afraid of doing procurement because the auditing bodies are trying to look if they’re doing something wrong.” [69]. “But because of procurement process change and accountability, what has to change inside the state for AI to move from pilots to durable public services?” [70]. “Because if we stand with the same way of doing procurement, actually we won’t be.” [71]. “And with many other ministries, we are discussing how to actually build that culture of innovation procurement with this idea that it must fail.” [72]. “So there are a lot of things that you have to change in terms of procurement in order to actually be able to do AI.” [73].


Major discussion point

Government procurement, digital infrastructure and data governance reforms


Topics

Enabling environment for digital development | Artificial intelligence


National digital ID and service platform as DPI backbone

Explanation

Esther highlights the deployment of a national digital ID and a unified service platform (gov.br) as core digital public infrastructure that enables personalized, AI‑enhanced public services.


Evidence

“And one of the things that was very important for us was our digital ID and our platform for services, a digital platform for service, which both called gov.br.” [81]. “And based on this platform, you were able to, what we are discussing now in terms of optimizing, but also in having more personalized services, knowing the people, if you know the citizen, we will be able to provide them specialized service, and we’re doing AI to do this, how to actually specialize service, what the people actually need.” [85]. “And, of course, the second thing is the digital infrastructure.” [86].


Major discussion point

Government procurement, digital infrastructure and data governance reforms


Topics

Information and communication technologies for development | Enabling environment for digital development


Unified data‑governance regime and digital sovereignty

Explanation

Esther calls for a unified data‑governance framework, chief data officers in every ministry, and stronger digital sovereignty to keep data and services under national control.


Evidence

“Having every minister to have a chief data officer, someone who actually knows the data, knows how to use the data.” [90]. “We’re about to launch a new decree on data governance.” [91]. “So we discussed all the data governance.” [92]. “So this idea, well, first thing we are discussing to have the data more sovereignty on the data and how to actually use better, but also for the data to be ready to be used.” [93]. “We cannot have this minister saying, no, this is my data.” [94]. “So we are increasing our operational access to the data.” [95]. “And we discussed this sovereignty in three levels, in the data level.” [96]. “We have to increase our digital sovereignty in terms of being able to.” [87]. “So we are discussing a lot of this in Brazil, how to increase our level of digital sovereignty.” [109]. “But more concerning in our current period now in Brazil is about digital sovereignty.” [110]. “And we’re actually working with our suppliers in order for them to offer us more sovereignty or at least some security that we not have any discontinuity.” [111].


Major discussion point

Political and economic challenges, digital sovereignty


Topics

Data governance | Enabling environment for digital development | Artificial intelligence


Workforce displacement and equitable AI wealth distribution

Explanation

Esther raises concerns about AI‑generated wealth potentially displacing workers and stresses the need for policies that ensure equitable distribution of AI benefits.


Evidence

“And it’s not directly related to AI, but it’s related to digital services.” [112]. “So I think this is the way to make things that are actually useful and important to protect our citizens but also to provide them with very good services.” [113]. “So how actually create, how divide this wealth in order to come from these machines working?” [115]. “One thing is also in the governance perspective, of course, we’re creating, as I mentioned, this shared tools and common practices and trying to share how, and specifically in this project, we’re creating this generative AI platform, and we’re trying to apply to different solutions.” [116]. “And one thing is the workforce problem, because we may be going to this utopia that no human need to work anymore, and the machines work for us.” [118].


Major discussion point

Political and economic challenges, digital sovereignty


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Speaker 1

Panel opening photograph and transition

Explanation

The moderator coordinates a quick group photograph before moving into the panel discussion, ensuring a smooth start to the session.


Evidence

“We’ll start by taking a quick group photograph together and then begin the discussion.” [123]. “So let me invite Minister Esther Dweck, Mr. Trevor Mundell, Ms. Reena Ghosh, and Mr. Shankar Maruwada, accompanied by Nandan, to be on the stage for a quick group photograph.” [133].


Major discussion point

Panel facilitation and logistics


Topics

Follow-up and review


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.