Building Population-Scale Digital Public Infrastructure for AI

20 Feb 2026 11:00h - 12:00h

Building Population-Scale Digital Public Infrastructure for AI

Session at a glance: summary, key points, and speakers overview

Summary

The panel discussed how to build and scale digital public infrastructure (DPI) for AI diffusion pathways, emphasizing the need for rapid, safe, and inclusive deployment across societies [38-41]. Nandan Nilekani illustrated this by describing a farmer-focused app that took nine months to launch in Maharashtra, was replicated in Ethiopia in three months, and adapted for dairy farmers in three weeks, showing how lived experience can dramatically shorten implementation timelines [4-12]. He announced an ambition to create 100 diffusion pathways by 2030, backed by a global coalition that includes Anthropic, Google, the Gates Foundation, UNDP and welcomes any participant [15-27]. Shankar Maruwada defined diffusion pathways as shared “rails” that compress learning curves, costs and risks, enabling safe, large-scale AI impact across sectors and countries [42-47]. Irina Ghose stressed that successful diffusion requires contextual language, integration into everyday workflows, and iterative refinement, citing Anthropic’s work on multilingual models for ten Indian languages as a concrete example [60-62][66-71]. She also introduced Anthropic’s Model Context Protocol (MCP), a universal adapter that lets AI tools be built once and reused across domains, likening it to UPI for payments [250-254]. Trevor Mundeli warned that fragmented pilots hinder scaling and proposed “scaling hubs” in India and Africa to aggregate funding and expertise, arguing that such hubs can overcome barriers to population-scale deployment [84-99]. He noted that without coordinated diffusion many well-intentioned pilots remain isolated and fail to scale [90-96]. Esther Dweck described Brazil’s Ministry of Management and Innovation, which is reforming procurement, digital infrastructure, and data governance to enable AI, including a new AI R&D program called INSPIRE that brings together state-owned and private actors [122-130][196-204]. 
She emphasized outcome-oriented procurement, sovereign digital identity platforms (gov.br), and the appointment of chief data officers to break data silos and support AI services [128-143][147-161]. On political and economic challenges, she highlighted digital sovereignty and the need to secure data and services domestically while addressing wealth distribution from automation [286-295][300-307]. The panel agreed that safety, auditability, and transparent governance are essential, with Anthropic’s research on model explainability cited as a step toward trustworthy health applications [274-282]. The discussion concluded that by 2030 the network of diffusion pathways should transform DPI into “digital public intelligence,” making AI an invisible, ubiquitous public good [315-317].


Key points


Major discussion points


Diffusion pathways as the strategic framework for scaling AI for public good – Nandan announced an ambition to create 100 diffusion pathways by 2030 and described them as “ways of reaching the goal faster” that can be reused across countries [15-20]. Shankar clarified that diffusion pathways are not just awareness but the spread of know-how, trust and institutional capability that enable safe, large-scale AI impact [41-46]. The coalition of governments, foundations and companies (Anthropic, Google, Gates Foundation, UNDP) is meant to develop and share these pathways [22-27].


Institutional reforms needed inside governments – Esther Dweck explained that existing procurement practices (focus on lowest price and risk) hinder innovation; the ministry is shifting to a policy-oriented, outcome-focused procurement mindset and encouraging “innovation procurement” that accepts failure [124-133][138-144]. She also highlighted the need for robust digital infrastructure (digital ID, gov.br platform) and strong data-governance, including chief data officers and a new data-governance decree [145-151][158-162].


Coordinated “scaling hubs” to overcome fragmentation – Trevor described the creation of scaling hubs in India and Africa that pool funding, aggregate pilots and provide a government-level point of contact, thereby reducing the fragmentation that currently blocks population-scale deployment [84-99].


Design principles for AI diffusion: localisation, workflow integration and reusable interfaces – Irina emphasized three prerequisites for diffusion: (1) local language support, (2) embedding AI into existing daily workflows, and (3) an iterative, continuously improved approach [60-62]. She also introduced Anthropic’s Model Context Protocol (MCP) as a universal “language” that lets developers build once and deploy across sectors, similar to how UPI standardized payments [250-254].


Safety, auditability and transparency as non-negotiable for high-stakes applications – Trevor warned that AI systems used in health must be auditable and transparent; black-box recommendations are insufficient, and mechanisms to trace model reasoning are essential for trust and regulatory compliance [274-282].


Overall purpose / goal of the discussion


The panel convened to define how the global community can build, share and scale digital public infrastructure (DPI) powered by AI, turning isolated pilots into durable public services. By establishing reusable diffusion pathways, aligning procurement and governance reforms, and ensuring safety and localisation, the participants aim to achieve the collective target of 100 AI diffusion pathways by 2030, thereby delivering inclusive, positive impact across agriculture, health, education, and other public sectors.


Tone of the discussion


The conversation began with an upbeat, visionary tone, celebrating rapid rollout successes (nine months → three weeks) and the ambitious 100-pathway target. As the dialogue progressed, it shifted to a more analytical and problem-solving tone, addressing concrete challenges in procurement, data governance, and fragmentation. When safety and political-economic concerns were raised, the tone became cautiously serious, emphasizing the need for auditability and digital sovereignty. Throughout, the tone remained collaborative and forward-looking, ending on a hopeful note about collective action and the eventual “boring” ubiquity of AI.


Speakers

Nandan Nilekani – Co-founder and Chairman of Infosys Technologies Ltd; Founder of Aadhaar (UIDAI); AI thought leader speaking on diffusion pathways for AI. [S16][S17]


Speaker 1 – Event host/moderator who introduced the panelists and managed the session flow.


Shankar Maruwada – Moderator of the panel discussion on building and scaling digital public infrastructure for AI.


Irina Ghose – Managing Director, Anthropic India; over three decades in IT and AI deployment; expertise in model building and AI diffusion. [S10][S12]


Trevor Mundeli – President, Bill & Melinda Gates Foundation (global health); expertise in scaling health and agricultural AI pilots. [S4][S5]


Esther Dweck – Minister of Management and Innovation in Public Services, Brazil; focuses on digital public infrastructure, procurement reform, and data governance. [S1][S2]


Additional speakers:


Mr. Om Birla – Chief Guest; Speaker of Parliament of India.


Mr. Martin Chungong – Secretary General, Inter-Parliamentary Union (IPU).


Mr. Laszlo Z – Deputy Speaker, Parliament of Hungary.


Dr. Chinmay Pandya – Representative, All World Gayatri Parivar.


Ms. Jimena – Participant (no further details provided).


Full session report: comprehensive analysis and detailed insights

Nandan Nilekani opened the session by describing a farmer-focused mobile application that now serves 2.5 million users, giving them real-time price and weather information and even monitoring dairy cows for lactation status [1-12]. He noted that the first rollout in Maharashtra took nine months, followed by a three-month replication in Ethiopia and a three-week adaptation for Amul’s dairy-farmer programme [4-12]. From this experience he coined the term “pathways”: reusable, experience-based routes that let others reach the same goal far more quickly. He announced an ambitious global target of creating 100 diffusion pathways by 2030, backed by a newly formed coalition, announced “yesterday or day before yesterday”, that includes Anthropic, Google, the Gates Foundation, UNDP and other partners, and is open to any additional member [15-27][S58].


Shankar Maruwada placed the discussion in a broader historical context, noting that the decisive factor in past industrial revolutions was not superior invention but the diffusion of know-how, trust and institutional capability [38-43]. He defined diffusion pathways as “shared rails” that compress learning curves, costs and risks, enabling safe, large-scale AI impact across sectors and countries rather than being a single platform or app [44-47]. This framing set the stage for the panel’s deeper exploration of how such rails can be built and operationalised.


Irina Ghose stressed that successful diffusion hinges on three practical prerequisites: (1) localisation into the user’s language, (2) seamless integration into existing daily workflows, and (3) an iterative, continuously improved deployment model [60-62]. She illustrated these points with Anthropic’s work on multilingual models for ten Indian languages, arguing that language support directly expands the set of viable use-cases [66-71]. To avoid rebuilding AI components for each domain, Irina described the Model Context Protocol (MCP), a universal “adapter” that allows developers to create a tool once and then plug it into any downstream application, likening it to UPI’s role in standardising digital payments [250-254].
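The “build once, plug in anywhere” idea behind a tool protocol like MCP can be sketched in a few lines. This is purely an illustration of the adapter pattern the panel described, not the actual MCP SDK; the class and method names (`ToolServer`, `register`, `call`) are hypothetical:

```python
# Illustrative sketch (NOT the real MCP SDK): tools are registered once
# against a uniform interface, and any client -- an agriculture app, a
# health app -- can discover and invoke them without domain-specific glue.
from typing import Any, Callable, Dict


class ToolServer:
    """Registry exposing tools through one uniform call interface."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str) -> Callable:
        """Decorator that registers a function as a named tool."""
        def wrap(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = fn
            return fn
        return wrap

    def list_tools(self) -> list:
        """Discovery: clients learn what tools exist at runtime."""
        return sorted(self._tools)

    def call(self, name: str, **kwargs: Any) -> Any:
        """Invocation: one entry point regardless of tool or domain."""
        return self._tools[name](**kwargs)


server = ToolServer()


@server.register("crop_price")
def crop_price(crop: str) -> str:
    # Stub: a real tool would query a live price feed.
    return f"price for {crop}"


# Any downstream client, regardless of domain, uses the same two calls:
print(server.list_tools())
print(server.call("crop_price", crop="wheat"))
```

The point of the pattern, as with UPI for payments, is that the tool author and the client author never need to know about each other: they only agree on the discovery and invocation interface.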


Trevor Mundeli identified fragmentation of pilots as a major barrier to population-scale impact. He described the creation of “scaling hubs” in India and several African nations (Rwanda, Nigeria, Senegal, soon Kenya) that pool funding, aggregate disparate pilots and provide a single government-level point of contact, thereby turning a chaotic landscape of small projects into coordinated, fundable programmes [84-99]. He argued that without such hubs, the multitude of isolated pilots cannot achieve the critical mass needed for national rollout [90-96].


Esther Dweck outlined the institutional reforms required within governments to make diffusion pathways work. Her Ministry of Management and Innovation in Public Service (MGI) in Brazil is shifting procurement from a “lowest-price, lowest-risk” mindset to an outcome-oriented, policy-focused approach that tolerates managed risk and encourages “innovation procurement” where failure is accepted as part of learning [124-133][138-144]. She highlighted the importance of robust digital infrastructure, specifically a national digital-ID system and the gov.br service platform, as the backbone for AI-enabled personalised services [145-149]. A new AI R&D programme, INSPIRE (AI for Public Service with Innovation, Responsibility and Ethics), creates a joint institutional arrangement among state-owned firms, private companies and the government to develop AI platforms [196-204]. A forthcoming data-governance decree will appoint chief data officers in every ministry to break data silos and ensure sovereign data handling [207-218]. Brazil’s strategy for data localisation is reinforced by two federal, state-owned companies that run resident clouds, supporting digital sovereignty [292-298]. Moreover, Brazil passed a child-online-protection law requiring age verification for internet users, and the government is piloting a “verifiable convention” to implement this requirement [290-304]. Capacity-building is also central: four training tracks target managers, IT experts, data stewards and general civil servants to instil a “digital mind” across the public service workforce [220-228].


Safety and auditability emerged as non-negotiable requirements for high-stakes applications. Trevor warned that black-box health recommendations are almost never adequate; AI systems must be transparent and auditable so clinicians can trace the reasoning behind a suggestion, mirroring the accountability expected of human practitioners [274-282]. Shankar reinforced this tension by asking where the line should be drawn between the rapid pursuit of 100 pathways and the need for rigorous safety safeguards when lives are at stake [265-267], to which Trevor responded that urgency does not excuse lax standards and that India’s DPI stack offers a promising test-bed for safe, frugal innovation [267-273].


Across the discussion, the participants emphasized different aspects rather than outright disagreement. Shankar’s vision of decentralized “shared-rail” pathways contrasted with Trevor’s hub-centric scaling model, reflecting a tension between standards-based diffusion and centralized aggregation [44-47][84-99]. Irina’s call for an “all-in” iterative rollout implied tolerance for early-stage errors, whereas Esther described civil servants’ fear of audit-driven penalties and advocated outcome-oriented procurement that still manages risk [60-62][124-133]. A balance between speed and safety was debated, with Shankar urging rapid diffusion and Trevor insisting on auditable, transparent health AI before scaling [265-267][274-281]. Finally, Irina’s promotion of a universal Model Context Protocol appeared at odds with Esther’s emphasis on digital sovereignty, resident clouds and data localisation [250-254][292-298][290-304].


Across the discussion, the participants converged on four overarching pillars: (1) structured diffusion pathways-whether as shared rails, scaling hubs or universal protocols-to compress learning curves and accelerate AI rollout [13-15][44-47][250-254]; (2) localisation (language support) and embedding AI into existing workflows as essential for user adoption [60-62]; (3) safety, auditability and transparency as indispensable, especially for health and other life-critical domains [274-282]; and (4) robust digital infrastructure and data-governance, including sovereign data strategies and capacity-building for civil servants, as foundational enablers of scalable AI diffusion [145-151][158-162][220-228].


In concluding remarks, Shankar projected that by 2030 the collective effort will have turned today’s Digital Public Infrastructure (DPI) into “digital public intelligence”, where AI is as invisible and ubiquitous as UPI is for payments [315-317][172-176]. The panel’s discussion therefore mapped a roadmap, from concrete pilot experiences and institutional reforms to technical standards and safety frameworks, aimed at achieving the 100-pathway target and ensuring that AI delivers inclusive, trustworthy benefits across agriculture, health, education and beyond.


Session transcript: complete transcript of the session
Nandan Nilekani

built, which farmers use, and millions of farmers today, 2.5 million farmers, have downloaded this app. And this was built to make sure that farmers have access to the best information about access to prices, access to weather information and so on. And it’s very sophisticated. It took nine months to get this going in Maharashtra. But we learned a lot about how to do these things. And the next implementation was done in Ethiopia. So in Africa, and Ethiopia did the same thing in three months. So essentially what took us nine months the first time around took us three months. And recently, at the request of the Prime Minister, Amul implemented the whole thing. And Amul implemented it for cows and brought it to dairy farmers to understand about the cows and whether they’re lactating or whether they’re, you know, milking and so on.

And that was done in three weeks. So I think you went from nine months to three months to three weeks. So what is the message in that is that if you get the lived experience of implementing these kind of systems for public good, you can actually dramatically reduce the time in which you can do that. And we call these ways of reaching the goal faster, we call them as pathways, because once you have a pathway, then you can get, somebody else can get to the same point quicker. And just like we had this notion that we’ll have 50 in five, 50 countries in five years, we are also now setting an ambitious goal for doing 100 diffusion pathways by 2030.

In other words, by 2030, all of us together across the world will develop these pathways to diffuse the use of AI in a positive way to help farmers, improve the life of young kids, allow people to get jobs through something called Blue Dot. There are so many things going on, but all of them are designed to be effective, to improve and make people’s lives better, to meet their aspirations in a very inclusive way so that everybody is in, nobody is left out. And so we announced a partnership. We announced a coalition for 100 diffusion pathways by 2030. We announced that yesterday or day before yesterday. And we have a global coalition. Anthropic is there. Google is there.

Gates Foundation is there. UNDP is there. A whole host of people are there. And it’s a very open, it’s a big tent. Anybody can join the coalition. But our goal is all of us work together, in a very focused manner, to develop these pathways of diffusion of different kinds of positive AI use cases and then actually make it happen in countries around the world. So just like 50-in-5 was a DPI goal, 100 diffusion pathways by 2030 is the AI goal we have. And we are confident that all of us collectively can get there. So I think this is important. I think it’s strategic for the world that we show the good use of AI, and it’s strategic that all of us work together to do that.

Thank you very much.

Speaker 1

Thank you so much, Mr. Nandan. At this point, I would love to invite our panelists up to the stage. We’ll start by taking a quick group photograph together and then begin the discussion. So let me invite Minister Esther Dweck, Mr. Trevor Mundeli, Ms. Irina Ghose, and Mr. Shankar Maruwada, accompanied by Nandan, to be on the stage for a quick group photograph. Thank you. Let me now hand it over to Shankar Maruwada, who will moderate the next panel.

Shankar Maruwada

Good afternoon. We have an exciting panel discussion ahead. Let me start off with where Nandan stopped. Hundred pathways. What are these pathways? These are diffusion pathways to AI impact, safely and at scale. Let me provide a bit of background. France invented better than Britain in the first industrial revolution, yet Britain won it. Britain in turn out-invented the US in steel, Germany out-invented the US in chemistry, yet it’s the US that won the second industrial revolution. What was the crucial thing? It was not better invention or even innovation. The missing ingredient was diffusion, which the United States of America did much better, diffusing the benefits and the impact of this technology throughout the economy and the society. When we say diffusion, we don’t mean awareness or access. Diffusion, as Nandan described, is the spread of know-how, trust and institutional capability that allows organizations to adopt AI safely and sustainably. As he explained, Maharashtra was the pioneer to do this in India. It’s like Sir Edmund Hillary climbing Mount Everest for the first time: he inspires, he creates a pathway for others to follow. And it would be rather stupid if, after he came back, he said, “I am not sharing this with others; the pathway I created, I have removed it, so now you guys find your own pathway.” The societies that create such pathways allow a whole lot of others to prosper, to make progress, to create impact inclusively and equitably. That is the idea. When Nandan talked about a hundred diffusion pathways, these are the hundred diffusion pathways across sectors, countries, continents. Some may be led by proprietary models, some may be led by sovereign efforts, some may not be; it may differ. It’s the choice of the AI adopter to decide which pathway works best for them.

So the diffusion infrastructure we are talking about creating isn’t a platform, app or model. It’s shared rails that compress learning curves, cost and risk, so that AI can be used by all of society, for all of humanity. With that, I would like to begin the panel discussion. Irina, from the model builder’s perspective, what needs to be true for AI to be deployable at population scale, not just impressive pilots, especially in high-stakes public systems? What needs to happen?

Irina Ghose

Thank you so much, Shankar. And absolutely a pleasure and honor to be here with all of you. Thank you so much. The way I think about it is AI deployment would seldom, if ever, have any roadblocks because of a complexity in the model or the performance. The only reason it fails to gain scale is because of the perception in our mind about the complexity. And one of the things that we really feel is that you have to be all in, first yourself, then diffuse it to people around you to make it happen. Now, if you think about it, in a pilot, you’ve got experts doing it, you’ve got guardrails, you’ve got the intensity of people, and you’ve got a select group.

Now, when that kind of goes and spreads out, you’ve got a teacher in Bihar implementing it, you’ve got a health worker in Coimbatore, you’ve got a small business leader in Indore doing it, who are not into ML, but for them, AI will start having significance when it stops being a scientific tool and becomes something which is intuitive for them. So three things come into play. The first one is that for diffusion, it needs to be contextual to the local language that you speak. Second, it needs to be in the workflow of what you’re doing every day, and you don’t need to do net new things. And the third is that you have to be iterative and be at it to make it happen.

And I’ll give you a small example as to how diffusion is happening. First of all, Shankar, really honored to have worked with EkStep to make it diffuse across so many realms of life. And at Anthropic also we said that it’s not a technology for the sake of the technology, only in the hands of developers and builders. We found that India happens to be the second largest user base of Claude outside the US. So a big round of applause to all of us out here for making that happen. And what we also felt is that when we are building tools, one of the tools you might have heard of is Cowork; what it does earlier used to be done a lot by developers.

But now it’s used by people who are information workers, or who are just thinking about how to solve things. The idea is that you do not have to develop code or read a lot of intense things. You can make the tool work for itself. So in my mind, diffusion really means, first, how do I think that everything I do has to be AI first. Second, with the ecosystem in India around myself, how do I enthuse everybody. And third, how am I giving back to everybody in the last mile to make it happen?

Shankar Maruwada

Fantastic. One of the things I liked about what Anthropic CEO Dario Amodei said is: very soon, imagine a country with a whole bunch of geniuses living in data centers. What will that country do? Think about it. But till we reach there, and Dario says in two, three years, but till we reach there, Trevor, as president of Gates Foundation looking at global health, you are dealing with a situation where you’ve seen a whole bunch of AI pilots, and not too many of them have scaled. From your experience, what separates pilots from systems that have scaled and become institutional? What separates an experiment from scaled, institutional, sustainable impact?

Trevor Mundeli

Thank you, Shankar. And thank you for the invitation to be on this good panel. And also for the overview you gave me a few days ago of the very good work you’re doing at EkStep. I learned about Open AgriNet and where that has made progress. But on this issue of scaling of AI, I had an opportunity this morning to sit down with the heads of entities which we call scaling hubs. There are two of them here in India, and there are three, soon to be four, in Africa. And there’s also a pan-African venture called Smart Africa. And you might say, well, what are these scaling hubs? So the idea is that we would support a partnership with the governments, now in Rwanda, Nigeria, Senegal, and soon to be Kenya, wherein we place funding that the government can use to take the pilots that are out there and really push them to large scale.

And why would we need a hub like this to do that? Well, one of the big barriers that we are currently seeing is the fragmentation that is occurring out there in terms of many, many ventures, some that we fund, other funders, everything with very good intent. Let’s do a small pilot. Let’s quickly do something over here. Thousands of them occurring out there. You take it at a government level. They have people approaching the Ministry of Agriculture, the Ministry of Education, the Ministry of Health, Ministry of Finance. all of them with different groups and on the DPI front, all of them trying to put in place the necessary DPI infrastructure to support their pilots. And now this fragmentation which is occurring over there, which I think is a big inhibitor of scaling to real population scale that we need.

So we are going to invest in these hubs that can be points of aggregation. We don’t want to inhibit diffusion. People have the idea of diffusion as a more random process which goes anywhere, and there’s something good about that. But if we can channel the diffusion into these centers of excellence, I think at the country level, the feedback that we’ve had from the governments is that that is a way that we are really going to get to scale more rapidly. Thank you.

Shankar Maruwada

Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathways. Diffusion, by definition, is everywhere, right? Pathways, by definition, are fixed. So it’s how do you spread a technology in certain fixed pathways towards certain impact. It is indeed a stress. I believe that stress needs to be there, because we are talking of the stress of safe AI impact at scale. But it is indeed a challenge, and together we have to solve it very quickly. I want to talk a bit about Minister Esther Dweck’s ministry, MGI, or the Ministry of Management and Innovation. Isn’t that a cool concept? The government of Brazil has a minister and a ministry looking after the idea of innovation and management.

They are collaborating very closely with India on a range of issues, and it’s my honor, Your Excellency, to have you here. Minister, I want to ask you a question. Scale efforts, diffusion, a lot of times fail inside government, not because of technology, but because of procurement, process change and accountability. What has to change inside the state for AI to move from pilots to durable public services?

Esther Dweck

Thank you, Shankar. Thank you for inviting me and also for the partnership that we have with India. And Brazil is looking for this partnership with India because of scale. If anything can be scaled up in India, it can be in Brazil because compared to India, we are not such a big country. But compared to many other countries, very large. So for us, very important, this partnership. But when you talk about the problem inside the state, our ministry was created. The whole name is Ministry of Management and Innovation in Public Service. So we are focusing on innovation inside the public services. And we created a special secretary for state transformation because we saw that the state had to be transformed in order to actually be able to have innovation.

Because if we stay with the same way of doing procurement, actually we won’t be able to do it. So we think that, in terms of AI, we need to transform the state in three main areas. The first one is procurement, for sure. Any kind of innovation procurement needs to be changed. Then also the infrastructure, especially the digital infrastructure, and of course the governance. And when I talk about the procurement process, usually people are looking for the lowest price, lowest risk, and usually civil servants are very afraid of doing procurement because the auditing bodies are trying to look if they’re doing something wrong. So they usually try to go for the lowest risk possible.

And this is what prevents innovation inside the government, especially because innovation comes with errors. We know that any innovation might come to error. And if the civil servant cannot make any mistakes, then we never innovate. So one of the things that we found out when we were trying to ask how to do innovation procurement in the government: the first thing people say is, I’m afraid of making any mistakes, then the auditing body will come after me and then I won’t be able to be a civil servant. So what we have done is change the mindset of the procurement process. Instead of being more process-oriented, we are looking for a more policy-oriented approach, looking at the outcomes and not only the lowest price.

And with many other ministries, we are discussing how to actually build that culture of innovation procurement, with this idea that it may fail. And you can also interact with the one you’re buying from. Because, of course, you’re buying something that doesn’t exist; how do you explain to them what you need? So there are a lot of things that you have to change in terms of procurement in order to actually be able to do AI. And, of course, the second thing is the digital infrastructure. As Nandan has said before, Brazil, since 2023, when we came here for the G20 in India, we brought this idea of DPI to Brazil as something very strong.

And we already knew that we had something that could be called DPI, but we didn’t know the concept before. And one of the things that was very important for us was our digital ID and our digital platform for services, both called gov.br. And based on this platform, what we are discussing now is optimizing, but also having more personalized services. Knowing the people, knowing the citizen, we will be able to provide them specialized services, and we’re using AI to do this, to actually specialize services around what the people actually need. So I think using this, having a good DPI infrastructure, especially in terms of identification, and also, of course, being able to have better data governance.

That’s the third thing I would like to say: the governance inside the state. When we launched our plan for AI, and today we had a session on the Brazilian AI plan, the first thing the president said is that we need our database. He said we need the Brazilian database. We cannot have silos anymore. We cannot have this minister saying, no, this is my data, no one can access this data. So we have to do it, of course, preserving privacy, in a secure way. So we discussed all the data governance. We’re about to launch a new decree on data governance, having every ministry have a chief data officer, someone who actually knows the data, knows how to use the data.

So we are actually looking at these things in order for the state to be able to innovate with AI. Thank you. That’s it. Thank you.

Shankar Maruwada

Wonderful. Thank you. Irina, you’ve been in the IT space for three decades. You’ve seen the Internet boom and bust, and now you’re seeing AI. From your vast experience, what is the most common failure mode when AI moves from pilots to everyday workflows? And what kind of safety infrastructure actually prevents it?

Irina Ghose

Yeah, I think one of the things that we have to remember is that the failure never happens with a big bang. It just slowly dies, because people just gradually reduce the level of interaction they have, and you suddenly realize that it’s not relevant anymore. So what really needs to happen is that you need to keep it in a way that people use it daily, and use it in the way that is contextual for each of them. For example, one of the reasons why it might fail is because the data sets are speaking to a country of a different nature, which is setting benchmarks in banking and financial systems, which is not the same where agriculture is the biggest thing that we require. Hence collecting data for Indian languages, nuancing it by, say, legal, by agriculture, by what people are speaking in that dialect, in that language, this is very critical. So if I want to look at the things that need to happen: first of all, keep it contextual to the domain, the micro-domain in which it is required. At Anthropic we have worked closely to ensure that we now have Indic language availability for 10 Indian languages, from Hindi to Malayalam to Gujarati to Urdu, and it’s available in the latest models and it is incrementally improving day by day. And the last part I would say is ensuring that, whatever you are doing, the ROI that we look at should be: if I invest in a language, say Bengali, how many net new use cases have been opened up because of that, and how many more people have got the benefit of that? And I think the work that, say, we are doing with EkStep, and the fields it is deployed in, education, healthcare, everything, that’s the litmus test that we should be measuring ourselves on.

Shankar Maruwada

I want to ask a question to the audience, by raising hands: how many of you use UPI? Keep your hands up if you know how UPI works, what’s the protocol behind it, what’s the technology behind it. Hands are steadily coming down. This is my point: we don’t care about technology as long as it works. For something to work at population scale, technology has to be boring; technology has to be invisible. Until then, it has not diffused; it is just some magical, mysterious thing we are all stuck with, figuring out what to do. It’s a long journey from technology as magic to technology as normal and boring. In fact, a wise old man once told me: when you stop thinking of something as technology, that’s when it has diffused. Five hundred years ago, this was magical ocular technology.

It allowed someone to see. Now we don’t think of it as technology. A day will come when we don’t think of AI as technology. That is the day we can say that AI has diffused through all of society. We have some way to go for that. Trevor, when you hear of things like Open AgriNet, some exciting work happening, what makes you think this feels like infrastructure, versus yet another project going down the path of pilotitis, death by pilots?

Trevor Mundeli

Well, I do look a little bit with envy at Open AgriNet. Having looked across the work that the foundation does in agriculture and in health, traditionally the narrative has been how fortunate those health folks are, because there is such huge funding in the health areas: such huge investment in research, in genomics, in human health, and much less in plant genomics, which is admittedly potentially more complex, and in the clinical trial infrastructures for developing new products on the human health side versus the agriculture side. But now we come to AI, and I have to say, I look at Open AgriNet and I think that the agriculture community is ahead of human health in terms of implementing a system which is personally useful to a smallholder farmer, for instance: being able to get the information they need, being able to determine what crop disease they have to deal with, or a disease in their cattle, what the weather is going to be, and how they can maximize the finances of their small farm.

All of these types of things I would love to see in the health space: a personal health assistant. In low- and middle-income countries, so many people are not very close to a tertiary hospital, and they may be 10, even 20 miles from a primary health care clinic. Can we not provide them with a system that can personally give them the information they need, in a safe way? And I think Open AgriNet really puts those components of infrastructure together. The way that it’s modular, the way that you can adapt it to local circumstances, is in many ways exactly what we need on the personal health side of the picture. So I have some envy, but I hope we can duplicate this on the health side.

Thank you.

Shankar Maruwada

Thank you, Trevor. Open AgriNet is just a group of organizations coming together, collaborating, as Trevor said, each bringing in one piece of the puzzle so that together we can create those diffusion pathways. And as Nandan said, that is what allows us to take something from Maharashtra, which took nine months, to Ethiopia in three months, and back to India in three weeks: from agriculture to livestock, from India to Ethiopia, from Asia to Africa and back. That is the exciting possibility that India has been on the journey of for the last 15 years, what we call DPI. The thing about DPI is that when you start with a strong use case in mind, as Irina and others have said, you harness technology, so technology becomes a good slave to a very powerful cause.

Then you take advantage of rapidly evolving technology. Minister Dweck, if you designed a national diffusion pathway for one public service, what would you prioritize first: institutions, incentives, data readiness, or governance?

Esther Dweck

Well, it’s difficult to choose only one thing, I guess. From a management perspective, you’re always looking for some kind of systemic approach, trying to look at all these things together. And actually, we recently launched an R&D program for AI in Brazil. It’s called INSPIRE in English; in Portuguese it means breathe, inspire, with the same acronym: AI for Public Service with Innovation, Responsibility, and Ethics. And it has this systemic approach inside it. The first thing is that we created this institutional arrangement: in this R&D project we have the government, of course, some state-owned companies, some private companies, and our innovation ecosystem in Brazil, all brought together in order to help the government build new AI platforms.

Because although we’re already using AI in Brazil, we saw that we have a significant lack of technological expertise, and a lack of financial support as well. So we’re trying to create this platform where we can offer many bodies of government different solutions that can be used in many different areas, as you said. And the first thing we are discussing is having more sovereignty over the data, and how to use it better, but also getting the data ready to be used. So, as I was explaining before, we are using AI to help improve our datasets; it goes both ways.

Another thing is the governance perspective. Of course, we’re creating, as I mentioned, these shared tools and common practices. Specifically in this project, we’re creating this generative AI platform and trying to apply it to different solutions. Recently, at the end of last year, we had the university enrollment exam for people finishing high school. So we created a complete solution for them to know, when they’re finishing school, what they’re going to do. Are they going to the job market? Are they going to enroll in university? How do they apply? What’s the best thing for them? We’re using AI to help them actually decide this. And we’re doing the same thing for the healthcare and agriculture sectors as well.

So we’re looking at all these things. And, of course, at capacity building. We are doing a lot of training for civil servants. We have four tracks, actually: for the top managers, for IT experts, for people handling data, and for regular civil servants. Because one thing we realized, when we’re talking about state transformation, is that who you have to train and change, of course, is the civil servants. Nowadays they have to have a digital mind, and some of them have been there for many years and didn’t have the digital capabilities. So we’re training all of them in digital capabilities, and specifically on AI as well, so they can think about how to use this new technology in their regular work in order to improve the civil service.

So I think it’s a more systemic approach there.

Shankar Maruwada

Pathways are like digital rails. What should model developers focus on so that AI can plug into these pathways safely across sectors and countries?

Irina Ghose

Very interesting. I’ll try to paint the picture by giving some context. We’re talking a lot about agriculture; it has the last mile. Now, if you were to solve for that farmer day in and day out, there are various kinds of work they have to do. Look at the weather conditions: one source of data. Look at how the crop yield is performing: another source of data. The market prices: another source of data. And whatever has to be done for reaping and sowing. If anybody wants to infuse AI on top of these kinds of data, and you have to build it from scratch every time, it is so cumbersome.

Now, if you do the same thing that, Nandan, you’ve been talking about: at one point in time all our plugs and connectors were different, until the universal adapter came and took that problem away. We all use UPI for digital payments. Do we know anything about the technology behind it, how that small micropayment actually goes across? We have no idea. So one of the things to be done here is to have a universal language which accesses the tools as well as the data. We came out with this concept at Anthropic in 2024 called the Model Context Protocol. Very simplistically put, think of MCP to AI as what UPI was to payments.

In effect, what it really does is let you develop things once and make them MCP-ready, so that for anything else you want to do further, you do not have to keep writing it again and again. All the use cases of agriculture, healthcare, and anything else put together can then happen seamlessly. Why does it matter for India? There’s a lot of data which already exists in health, in education, and in the various ways citizen services are delivered, and that is a rich level of data. So if we make this data AI-ready and use the tools that are available, then diffusion, and that accountability of everybody coming together, will be that much quicker.
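The “build once, plug it in anywhere” idea Irina describes can be sketched in a few lines of Python. This is a deliberately simplified toy of the pattern a protocol like MCP standardizes, not the actual MCP SDK: the names `ToolRegistry`, `register`, and `call` are hypothetical, and the weather and price data are stubs.

```python
# Toy sketch of a "describe a tool once, let any client discover and call it"
# registry. NOT the real Model Context Protocol SDK; all names are hypothetical.
from typing import Any, Callable, Dict


class ToolRegistry:
    """Each tool is registered once; any AI client speaking the same
    protocol can list the tools and invoke them by name."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}
        self._descriptions: Dict[str, str] = {}

    def register(self, name: str, description: str):
        # Decorator: records the tool and its human-readable description.
        def wrapper(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = fn
            self._descriptions[name] = description
            return fn
        return wrapper

    def list_tools(self) -> Dict[str, str]:
        # Discovery step: a client asks what tools exist before calling them.
        return dict(self._descriptions)

    def call(self, name: str, **kwargs: Any) -> Any:
        # Invocation step: one uniform entry point for every tool.
        return self._tools[name](**kwargs)


registry = ToolRegistry()


@registry.register("weather", "Current weather for a district (stub data)")
def weather(district: str) -> str:
    return f"Forecast for {district}: light rain"


@registry.register("market_price", "Mandi price for a crop (stub data)")
def market_price(crop: str) -> str:
    return f"{crop}: 2400 INR/quintal"


# Any client (an agriculture app, a health app, a chatbot) reuses the
# same tools without rebuilding the integration each time:
print(sorted(registry.list_tools()))
print(registry.call("weather", district="Pune"))
```

The point of the sketch is the separation: tool builders describe their tool once, and every consumer goes through the same discovery and invocation interface, which is the sense in which Irina compares MCP to UPI.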

Shankar Maruwada

Excellent. A lot of people who deploy AI have an old notion that it’s like normal software: you buy great software, it is perfected, you deploy it, and you can close the project and go away. In AI, that is just the start, because as you use it, data comes in. The data gets better, the models get better; with better models you provide better services, usage increases, and more usage means more data. While this cycle is happening, the models and the data improve, so for a lot of adopters the question is: once they go beyond procurement, how do you continuously invest to upgrade and evolve? That’s again a very important question. So when we talk of 100 diffusion pathways, these are 100 diffusion pathways to safe AI impact at scale, which creates a second stress, and I’ll come to you on that, Trevor.

When lives are at stake, where do you draw the line between speed, 100 pathways by 2030, and safety? Coming from health, safety means literally lives, right?

Trevor Mundeli

Yes, Shankar, and there are a lot of lives at stake, and I feel the urgency. Every year we don’t have the next generation of malaria vaccines, we see hundreds of thousands of young children dying. Every year we don’t have a personalized education coach for every child, no matter where they are, we see a tremendous amount of human potential wasted. So there is this urgency to get things done, and that might make one think very carefully on the safety front. And it is that safety issue where people in the health area are saying: we need to take a step back, we need to look carefully at the frameworks before we just jump in with something like the application I talked about, the self-application. How would that be gated? How would that be guarded?

I do think that because of the excellence of the DPI stack here in India, and because of the thousands of application efforts I see, you are going to probe those frameworks for safe introduction, probably first in a context which is, as Nandan was mentioning, the frugal innovation that will be relevant across lower-middle-income countries and actually beyond. So I do think we are very much looking at India as the foundry of AI application, a place for safe introduction, and we want to see those frameworks whereby we can safely introduce the technology. In terms of the technology itself, just having a black-box system that gives a health recommendation is almost never adequate, almost never satisfactory.

These systems need to be auditable. And I have to say that Anthropic has made quite a lot of progress in their research on how these concepts, these recommendations, are actually represented in the model. People want to be able to audit that; they don’t just want something that comes out of nowhere. If a human clinician makes an error, you can talk to that person. You can say: why did you think this was the case when you made a misdiagnosis here? Was it because you didn’t ask the right question of the patient, or because you transcribed incorrectly? And that is the kind of transparency we actually demand of AI systems at the end of the day.

So I think that between the work going on here in India and some of that transparency research, we can get there.

Shankar Maruwada

Thank you, Trevor. Minister Dweck, as you’re thinking of implementing AI solutions at scale, what is the hardest political or economic challenge, and what are some tips on how one should deal with it?

Esther Dweck

Okay. I think it’s kind of a political economy issue. One thing, of course, is the workforce problem, because we may be heading toward this utopia where no human needs to work anymore and the machines work for us. So how do we actually create, and divide, the wealth that comes from these machines working? But more concerning in the current period in Brazil is digital sovereignty. Of course, very few countries, maybe only two in the world, are totally digitally sovereign right now. But I think we have to increase our digital sovereignty: in terms of being able to have our own services and operate them, being able to know where our data is, and knowing how we will be able to continue providing services to our population.

So we are discussing a lot of this in Brazil: how to increase our level of digital sovereignty. We know we are probably not going to be totally digitally sovereign within a few years, but at least we want to increase it. And we’re actually working with our suppliers in order for them to offer us more sovereignty, or at least some assurance that we will not have any discontinuity. So I think using state capacity and the state’s procurement purchasing power is very important for doing this.

And we’re actually using it in our conversations with our suppliers. We discuss this sovereignty on three levels. The first is the data level, and for this we’re bringing the data back to Brazil: as I mentioned before, we have two federal state-owned companies that are running resident clouds, so we know where the data is. But only knowing where the data is, is not enough, so we are also increasing our operational access to the data. And I think the third level is the way you’re using technology, something we’ve been discussing a lot here. It’s not directly related to AI, but it is related to digital services. One thing we’re doing together here in India, using a technology that was developed here, verifiable credentials, was very important for us. We are using it right now in two pilot projects, but we want to scale it up.

One is related to rural credit, and the second is related to something I think the whole world is discussing: how to protect children online. In Brazil we passed a law last year, a very important law. It passed very quickly, after one of the digital influencers showed what was happening to children on the Internet, especially on social media, and the bill says that by 17 March you have to know the age of the person accessing the Internet. So how do you do this in a way that protects privacy, so that we don’t actually track what people are doing? A lot of things are being discussed, and we’re trying to use this verifiable credential to do age verification in a very simple way, very easy for people, and in a way that people are not afraid the government is actually watching the Internet.
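The privacy-preserving age check Minister Dweck describes can be sketched as a signed, minimal-disclosure credential: a trusted issuer attests only “over 18”, so the verifier learns nothing else, no birthdate, no identity, no browsing activity. This toy uses a shared-key HMAC purely for illustration; real verifiable-credential systems use asymmetric signatures or zero-knowledge proofs, and every name and key below is hypothetical.

```python
# Toy sketch of minimal-disclosure age verification with a signed credential.
# Real systems use asymmetric signatures or zero-knowledge proofs; the HMAC
# here (a shared issuer/verifier key) is only to keep the example stdlib-only.
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret-demo-key"  # hypothetical trusted-issuer key


def issue_credential(over_18: bool) -> dict:
    """The issuer signs a minimal claim; no personal data is embedded."""
    claim = json.dumps({"over_18": over_18}, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "signature": tag}


def verify_credential(cred: dict) -> bool:
    """The verifier checks the signature, then reads only the boolean claim."""
    claim = cred["claim"].encode()
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["signature"]):
        return False  # tampered or forged credential is rejected
    return json.loads(cred["claim"])["over_18"]


cred = issue_credential(over_18=True)
print(verify_credential(cred))   # True: valid adult credential, nothing else revealed

forged = {"claim": '{"over_18": true}', "signature": "0" * 64}
print(verify_credential(forged))  # False: a forged signature does not pass
```

The design choice the sketch illustrates is that the credential carries only the yes/no answer the law requires, which is why such a scheme can satisfy an age-verification mandate without giving the government visibility into what people do online.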

So I think this is the way to make things that are actually useful and important to protect our citizens but also to provide them with very good services.

Shankar Maruwada

Thank you. Today’s topic was building population-scale digital public infrastructure for AI. By 2030, when we will have made a lot of progress on that, we will stop calling DPI digital public infrastructure and start calling it digital public intelligence. With that, a big thank you to all my panelists and to the audience. Thank you.

Irina Ghose

Thank you. Shankar, if I can just request you to present a token of appreciation to the panel. Thank you. The next session is about to start on a very unique topic, AI for Democracy, so we request the audience to remain seated. A wonderful topic, AI for Democracy, and we are very blessed that today we have with us Honorable Chief Guest Mr. Om Birlaji, Speaker of the Parliament of India; Mr. Martin Chungong, Secretary General, IPU; Mr. Laszlo Z, Deputy Speaker, Parliament of Hungary; Dr. Chinmay Pandya from All World Gayatri Parivar; Ms. Jimena.

Related Resources: Knowledge base sources related to the discussion topics (10)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Nandan Nilekani described a farmer‑focused mobile application that now serves 2.5 million users, giving them real‑time price and weather information.”

The knowledge base states that 2.5 million farmers have downloaded the app and that it provides price and weather information [S3].

Confirmed (high)

“The first rollout in Maharashtra took nine months, followed by a three‑month replication in Ethiopia and a three‑week adaptation for Amul’s dairy‑farmer programme.”

Evidence in the knowledge base records the Maharashtra implementation lasting nine months, the Ethiopia rollout three months, and the Amul dairy implementation three weeks [S20] and [S19].

Confirmed (high)

“Nandan announced an ambitious global target of creating 100 diffusion pathways by 2030, backed by a coalition that includes Anthropic, Google, the Gates Foundation, UNDP and other partners, and is open to any additional member.”

The knowledge base confirms the announcement of 100 pathways to 2030 and a coalition that includes Google, the Gates Foundation and UNDP and is open to new members [S21] and [S29].

Additional Context (medium)

“The coalition also includes Anthropic.”

Anthropic is not mentioned in the available sources; the coalition members listed are Google, Gates Foundation and UNDP, so Anthropic’s participation is not confirmed by the knowledge base.

Additional Context (medium)

“Shankar Maruwada defined diffusion pathways as “shared rails” that compress learning curves, costs and risks, enabling safe, large‑scale AI impact across sectors and countries rather than being a single platform or app.”

The knowledge base discusses diffusion as moving beyond single, concentrated LLM deployments toward shared, domain-specific pathways, aligning with this description [S68] and [S69].

Additional Context (medium)

“Irina Ghose said successful diffusion requires (1) localisation into the user’s language, (2) integration into daily workflows, and (3) an iterative deployment model.”

The knowledge base highlights multilingual AI work for Indian languages and the importance of language support for expanding use-cases, which supports the localisation point [S73] and [S74]; the other two prerequisites are consistent with broader DPG principles but are not explicitly cited.

Additional Context (low)

“Irina described the Model Context Protocol (MCP) as a universal “adapter” that lets developers create a model once and plug it into any downstream application, likening it to UPI’s role in standardising digital payments.”

The knowledge base references UPI as an example of a standardised digital-payment interface, providing context for the analogy, but it does not contain information about the Model Context Protocol itself [S23].

External Sources (74)
S1
A Digital Future for All (morning sessions) — – Esther Dweck (Minister, Brazil) discussed DPI for efficient government services, financial inclusion, and environmenta…
S2
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — – Esther Dweck (Minister of Management and Innovation in Public Services of Brazil)
S3
Building Population-Scale Digital Public Infrastructure for AI — – Esther Dweck- Irina Ghose – Irina Ghose- Esther Dweck – Nandan Nilekani- Trevor Mundeli- Esther Dweck
S4
Transforming Health Systems with AI From Lab to Last Mile — I’ll ask you to take a seat. When you said, is there anyone who has not visited a doctor, instinctively I was asking, do…
S5
Transforming Health Systems with AI From Lab to Last Mile — -Trevor Mundel: Dr. Dr. Trevor Mundel (medical degree and Ph.D. in mathematics), Rhodes Scholar, extensive experience in…
S6
https://app.faicon.ai/ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — And welcome. And… And her background is also in this both biomedical field, science innovation field, but also has ext…
S7
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S8
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S9
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S10
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — Thank you so much, Vedashree. That was very concise and even compelling. Especially coming from a regulatory standpoint….
S11
Keynote-Dario Amodei — – Irina Ghos: Managing Director for Anthropic India, has three decades of experience building businesses in India (menti…
S12
Building Population-Scale Digital Public Infrastructure for AI — – Irina Ghose- Esther Dweck – Nandan Nilekani- Irina Ghose
S13
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S14
https://app.faicon.ai/ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — So we are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwad…
S15
AI for agriculture Scaling Intelegence for food and climate resiliance — Thank you, madam. You have rightly pointed out the need to be more sensitive and while developing systems for inclusivit…
S16
Keynote-Rishad Premji — -Mr. Nandan Nilekani: Role/Title: Not specified; Area of expertise: Artificial intelligence (described as pioneer and th…
S17
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Nandan Nilekani** – Co-founder and chairman of Infosys Technologies Limited (participated online) Nandan Nilekani, …
S18
https://dig.watch/event/india-ai-impact-summit-2026/fireside-conversation-01 — Thank you so much, Mr. Sikka, for your profound and very interesting remarks. And of course, your work at VNI also exemp…
S19
Building Population-Scale Digital Public Infrastructure for AI — bought which farmers use and millions of farmers today, 2 .5 million farmers have downloaded this app. And this was buil…
S20
Fireside Conversation: 01 — I don’t know, that makes me a grandfather. So I think when you talk about diffusion, and you have to think of AI, everyb…
S21
Building Scalable AI Through Global South Partnerships — Yeah, thank you so much. And you talked about DPI, you talked about the private sector, public coming together. It’s the…
S22
Fireside Conversation: 01 — This fireside conversation featured Nandan Nilekani, co-founder of Infosys and architect of India’s Aadhaar system, and …
S23
Collaborative AI Network – Strengthening Skills Research and Innovation — um well I mean as Saurabhji the chair of the working group for democratization of AI spoke about there are some fundamen…
S24
https://app.faicon.ai/ai-impact-summit-2026/collaborative-ai-network-strengthening-skills-research-and-innovation — So we have to think about it from a user life perspective. So this is really, I think, a bit about the use case adoption…
S25
Keynote Address_Revanth Reddy_Chief Minister Telangana — Socio‑economic impacts and workforce considerations
S26
AI Meets Agriculture Building Food Security and Climate Resilien — Shankar Maruwada describes how the successful development of Mahavistar involved collaboration between multiple stakehol…
S27
AI for agriculture Scaling Intelegence for food and climate resiliance — So we are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwad…
S28
Setting the Rules_ Global AI Standards for Growth and Governance — Yeah, no, that’s a great question. I think from sort of a market adoption perspective, a lot of our technology, like gen…
S29
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — And I’ll give you a small example as to how diffusion is happening. First of all, Shankar, really honored to have worked…
S30
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: Thank you for convening this and bringing this very, very important subject at FORC, like how do we bala…
S31
Al and Global Challenges: Ethical Development and Responsible Deployment — Alfredo Ronchi:Most interesting presentation from the standpoint of China. Thanks a lot for this date. And now we will t…
S32
Safe and Responsible AI at Scale Practical Pathways — Right. So I think my perspective is more as a practitioner because the last almost three decades I’ve been a solution bu…
S33
Opening and Sustaining Government Data | IGF 2023 Networking Session #86 — Another notable challenge was the need to convert data between Arabic and English. This language barrier required meticu…
S34
Open Forum #56 Shaping Africas Digital Future a Forum on Data Governance — The Minister argues that Sierra Leone’s success in digital transformation over the past 6-7 years resulted from strategi…
S35
Democratizing AI Building Trustworthy Systems for Everyone — Crampton argues that none of Microsoft’s five strategic pillars for AI diffusion (infrastructure, skilling, multilingual…
S36
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S37
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S38
Building Indias Digital and Industrial Future with AI — Speaker 1 highlights a key regulatory challenge where AI systems need to be explainable and accountable, but in security…
S39
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S40
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S41
Operationalizing data free flow with trust | IGF 2023 WS #197 — However, there are calls for the development of horizontal, interoperable, and technologically neutral policy frameworks…
S42
AI for agriculture Scaling Intelegence for food and climate resiliance — It is being designed as a replicable public infrastructure model for India and the entire global south. In partnership w…
S43
Setting the Rules_ Global AI Standards for Growth and Governance — Develop modular, interoperable standards systems that can be adapted across different sectors and use cases without star…
S44
Building Population-Scale Digital Public Infrastructure for AI — Open AgriNet demonstrates successful modular, adaptable infrastructure model for other sectors
S45
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — These key comments fundamentally shaped the discussion by elevating it from technical implementation details to strategi…
S46
African Union (AU) Data Policy Framework — While data localisation is often seen as an expression of state sovereignty, as a possible policy option, data localisat…
S47
NRIs MAIN SESSION: DATA GOVERNANCE — Collaboration is seen as essential for effective implementation and enforcement of data protection laws and regulations …
S48
Building Population-Scale Digital Public Infrastructure for AI — Dweck highlights digital sovereignty as a major political and economic challenge, emphasizing the need for countries to …
S49
Digital politics in 2017: Unsettled weather, stormy at times, with sunny spells — Second, in 2017 we can expect further pressure on data localisation (a practice which requires service providers and/or …
S50
WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs — The main areas of agreement included the need for local data infrastructure, capacity building, harmonized policies for …
S51
Cloud computing and data localisation: Lessons on jurisdiction — A hybrid system – where data localisation is generally prohibited, except for data directly affecting national security …
S52
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — In conclusion, the discussion highlighted the need to overcome the challenges posed by the siloed approach, trade agreem…
S53
Collaborative AI Network – Strengthening Skills Research and Innovation — um well I mean as Saurabhji the chair of the working group for democratization of AI spoke about there are some fundamen…
S54
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S55
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S56
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S57
Building Population-Scale Digital Public Infrastructure for AI — And I’ll give you a small example as to how diffusion is happening. First of all, Shankar, really honored to have worked…
S58
Building Population-Scale Digital Public Infrastructure for AI — Launch 100 diffusion pathways by 2030 initiative with global coalition including Anthropic, Google, Gates Foundation, an…
S59
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — And this is what prevents innovation inside the government, especially because innovation comes with errors. We know tha…
S60
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Capacity Building Implementation Gill warns against repeating past mistakes in global development initiatives where eff…
S61
Democratizing AI Building Trustworthy Systems for Everyone — Crampton argues that none of Microsoft’s five strategic pillars for AI diffusion (infrastructure, skilling, multilingual…
S62
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S63
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S64
Catalyzing Global Investment in AI for Health: WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S65
Toward Collective Action: Roundtable on Safe & Trusted AI — Cool. So I think we just have to be very, very careful here of the sort of, you know, the Silicon Valley approach of mov…
S66
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S67
WS #49 Benefit everyone from digital tech equally & inclusively — – Mobile apps that provide farmers with real-time weather data and crop management advice.
S68
Keynotes — Historical Context of Technological Revolutions
S69
Collaborative AI Network – Strengthening Skills Research and Innovation — Diffusion is not about like concentrated western LLMs all together and just deploy it. It’s about actually walking the p…
S70
Panel 3 – Innovations in Submarine Cable Technology and Maintenance & Panel 4 – Legal and Regulatory Frameworks for Cable Protection — It set the stage for discussing future innovations and challenges in submarine cable technology, leading to a deeper exp…
S71
Open Forum #19 Strengthening Information Integrity on Climate Change — This intervention fundamentally challenged the panel’s framing and forced a deeper examination of cultural and ethical d…
S72
AI Meets Agriculture Building Food Security and Climate Resilien — What makes this happen? What is that secret sauce, the design principles? It is the same as DPI. What worked for DPI, we…
S73
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “An interesting fact is that most of the AI models in the world work in English”[41]. “But your AI model works in Indian…
S74
WS #119 AI for Multilingual Inclusion — – Encouraging learning and use of multiple languages Athanase Bahizire: Thank you so much. Very good question. Actually…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Nandan Nilekani
3 arguments · 171 words per minute · 531 words · 185 seconds
Argument 1
100 diffusion pathways goal
EXPLANATION
Nandan announced an ambitious target to create 100 diffusion pathways for positive AI use by 2030, aiming to spread AI benefits globally across sectors and countries. The goal is presented as a collective effort involving multiple partners.
EVIDENCE
He stated that the coalition aims for “100 diffusion pathways by 2030” and that this goal was announced recently, with partners such as Anthropic, Google, the Gates Foundation and UNDP joining the coalition [15-22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 100 diffusion pathways target is repeatedly referenced in the discussion, with Nandan announcing it and partners joining the coalition [S19] and it being highlighted as a clarion call for global AI scaling [S21], as well as in the fireside conversation on diffusion and implementation [S22].
MAJOR DISCUSSION POINT
Goal setting for AI diffusion
AGREED WITH
Shankar Maruwada, Irina Ghose, Trevor Mundeli
Argument 2
Pathways compress learning curves, cost and risk, making large‑scale adoption feasible
EXPLANATION
Nandan described pathways as mechanisms that accelerate implementation by reducing time, cost, and risk, allowing others to replicate successes quickly. He highlighted how earlier projects took nine months, then three months, then three weeks, illustrating the compression effect.
EVIDENCE
He explained that “once you have a pathway, then you can get, somebody else can get to the same point quicker” and gave the example of implementation times dropping from nine months to three weeks across different projects [13-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nandan’s illustration of implementation time dropping from nine months to three weeks is documented in the agriculture case study, showing the compression effect of pathways [S19]; the broader notion of diffusion as a general-purpose technology that starts from the user is discussed in the fireside conversation [S20].
MAJOR DISCUSSION POINT
Efficiency of diffusion pathways
AGREED WITH
Shankar Maruwada, Irina Ghose, Trevor Mundeli
Argument 3
Inclusive, positive AI use requires safe diffusion infrastructure
EXPLANATION
Nandan emphasized that AI should be deployed in an inclusive manner so that no one is left out, and that safe diffusion infrastructure is strategic for the world. He linked inclusivity with the need for coordinated, safe pathways.
EVIDENCE
He noted that the initiatives are designed “to improve and make better people’s lives, can meet the aspirations in a very inclusive way so that everybody is in, nobody is left out” and called showing the good use of AI a strategic priority [17-18][31-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on inclusivity and safe diffusion appears in the agriculture scaling discussion, which stresses feedback loops and inclusive design [S15]; the fireside conversation also links safe diffusion infrastructure to strategic priorities [S22].
MAJOR DISCUSSION POINT
Inclusivity and safety in AI diffusion
Shankar Maruwada
4 arguments · 133 words per minute · 1438 words · 645 seconds
Argument 1
Pathways as shared rails for rapid replication
EXPLANATION
Shankar described diffusion pathways as shared rails that compress learning curves, cost and risk, enabling AI to be used by all of society. He contrasted this with platform or model approaches, stressing the infrastructural nature of pathways.
EVIDENCE
He said, “The diffusion infrastructure we are talking about creating isn’t a platform, app or model. It’s shared rails that compress learning curves, cost and risk” [44-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shankar’s description of pathways as “shared rails” is echoed in the panel summary that contrasts his distributed approach with centralized hubs [S3]; the notion of shared infrastructure for rapid replication is also highlighted in the discussion on diffusion pathways [S24].
MAJOR DISCUSSION POINT
Infrastructure for AI diffusion
AGREED WITH
Nandan Nilekani, Irina Ghose, Trevor Mundeli
DISAGREED WITH
Trevor Mundeli
Argument 2
Balancing rapid diffusion (100 pathways) with safety safeguards is critical where lives are at stake
EXPLANATION
Shankar raised the tension between the urgency of scaling AI quickly through 100 pathways and the need to ensure safety, especially in high‑stakes domains like health. He asked where the line should be drawn between speed and safety.
EVIDENCE
He asked, “When lives are at stake, where do you draw the line between speed (100 pathways to 2030) and safety?”, highlighting the trade-off [265-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between speed and safety is reflected in the risk-control perspective on AI market adoption [S28] and the call for auditable, transparent systems in high-stakes domains [S32]; the 100 pathways agenda provides the speed dimension [S19].
MAJOR DISCUSSION POINT
Speed vs. safety in AI scaling
AGREED WITH
Trevor Mundeli
DISAGREED WITH
Trevor Mundeli
Argument 3
Political‑economic tension around wealth distribution and workforce impacts must be managed
EXPLANATION
Shankar identified political‑economic challenges, such as how wealth generated by AI and automation should be distributed and the impact on the workforce, as a key issue for governments to address when scaling AI.
EVIDENCE
He framed the issue as a “political economy issue” and asked about the hardest political or economic challenge for AI implementation, pointing to concerns about wealth distribution and workforce changes [285-287].
MAJOR DISCUSSION POINT
Political economy of AI
Argument 4
Universal language and standards allow AI to plug into pathways across sectors
EXPLANATION
Shankar argued that a universal language or protocol, similar to UPI for payments, would enable AI tools to integrate seamlessly with diverse pathways, reducing the need for bespoke development each time.
EVIDENCE
He referenced the ubiquity of UPI and suggested a “universal language which accesses the tools as well as the data” to make AI integration easier across sectors [246-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a universal language or protocol is supported by the discussion of language barriers and translation challenges in government data projects [S33] and by the broader call for foundational resources to enable AI democratization [S23].
MAJOR DISCUSSION POINT
Interoperability standards
AGREED WITH
Irina Ghose
Irina Ghose
4 arguments · 163 words per minute · 1288 words · 473 seconds
Argument 1
Contextual, workflow‑embedded diffusion is essential
EXPLANATION
Irina stressed that for AI to diffuse at scale it must be presented in the local language, fit naturally into users’ daily workflows, and be iteratively refined. These factors make AI intuitive rather than a specialized scientific tool.
EVIDENCE
She listed three requirements: that AI be contextual to the local language, embedded in the existing workflow, and improved iteratively [60-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The requirement that diffusion be contextual, workflow-integrated, and iterative is explicitly mentioned in the panel summary on diffusion pathways [S3] and reinforced by the agriculture case study’s emphasis on contextualisation [S19].
MAJOR DISCUSSION POINT
Design criteria for AI diffusion
AGREED WITH
Shankar Maruwada
Argument 2
AI must be contextual to local language, fit existing workflows, and be iteratively improved
EXPLANATION
Reiterating her earlier point, Irina highlighted that AI adoption hinges on language localisation, seamless workflow integration, and continuous iteration to stay relevant for end‑users such as teachers, health workers, and small business owners.
EVIDENCE
She gave examples of teachers in Bihar, health workers in Coimbatore, and small business leaders in Indore needing AI that is intuitive and embedded in their daily tasks [58-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Irina’s three design criteria are mirrored in the discussion of language localisation and workflow embedding for teachers, health workers, and small businesses [S19] and in the broader statement that diffusion must become contextual and iterative [S3].
MAJOR DISCUSSION POINT
Localization and workflow integration
AGREED WITH
Shankar Maruwada
Argument 3
Failure is gradual loss of relevance; maintain domain‑specific data and language support
EXPLANATION
Irina described that AI systems rarely fail abruptly; instead they lose relevance as users stop interacting with them. Maintaining domain‑specific datasets and language support is essential to prevent this slow decay.
EVIDENCE
She noted that “failure never happens with a big bang; it just slowly dies because people just stop, reducing the level of interaction” and emphasized the need for contextual, domain-specific data and language support [169-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The observation that AI systems fail gradually as users stop interacting, and the importance of domain-specific data and language support, are documented in the agriculture diffusion analysis [S19] and in the language-translation challenges noted for government data projects [S33].
MAJOR DISCUSSION POINT
Failure modes in AI diffusion
Argument 4
Introduce a Model Context Protocol (MCP) as a universal “adapter” for AI tools
EXPLANATION
Irina introduced the Model Context Protocol, a standard that would allow AI models to be built once and reused across applications, similar to how UPI standardized digital payments. MCP aims to simplify integration and reduce duplication of effort.
EVIDENCE
She explained that MCP is “to AI what UPI was to payments”, enabling developers to make tools MCP-ready once and then reuse them without rewriting code [250-254].
MAJOR DISCUSSION POINT
Standardisation for AI integration
AGREED WITH
Esther Dweck
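For readers unfamiliar with MCP: it is an open protocol built on JSON-RPC 2.0 in which a client discovers and invokes tools exposed by a server. The Python sketch below shows the shape of two such messages; the `tools/list` and `tools/call` method names come from the published specification, while the `weather_lookup` tool and its arguments are purely hypothetical illustrations.

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope, the wire format MCP uses."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# A client first asks an MCP server which tools it exposes...
list_tools = jsonrpc_request(1, "tools/list")

# ...then calls one. "weather_lookup" and its arguments are hypothetical;
# a real server advertises its own tool names and JSON schemas.
call_tool = jsonrpc_request(2, "tools/call", {
    "name": "weather_lookup",
    "arguments": {"district": "Pune", "crop": "soybean"},
})

print(json.dumps(list_tools))
print(json.dumps(call_tool))
```

Because every MCP server speaks this same envelope, a tool wired up once can be reused by any MCP-capable client — the “build once, reuse everywhere” property the panel likens to UPI.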
Trevor Mundeli
4 arguments · 167 words per minute · 1117 words · 399 seconds
Argument 1
Scaling hubs to aggregate fragmented pilots and provide funding
EXPLANATION
Trevor described the creation of scaling hubs that act as aggregation points for numerous AI pilots, offering funding and coordination to overcome fragmentation and accelerate national rollout.
EVIDENCE
He outlined the hubs in India and Africa, their role in consolidating pilots, providing government-level funding, and reducing fragmentation that hinders scaling [84-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trevor’s proposal for centralized scaling hubs that aggregate pilots and channel funding is described in the panel overview of scaling mechanisms [S3] and contrasted with distributed approaches in the discussion on hub versus shared-rail models [S24].
MAJOR DISCUSSION POINT
Institutional mechanisms for scaling
AGREED WITH
Nandan Nilekani, Shankar Maruwada, Irina Ghose
DISAGREED WITH
Shankar Maruwada
Argument 2
Centralised “scaling hubs” reduce fragmentation and accelerate national rollout
EXPLANATION
He reiterated that centralised hubs can channel diffusion into centres of excellence, allowing governments to scale AI solutions more rapidly and with less risk than a scattered pilot approach.
EVIDENCE
He emphasized that “we don’t want to inhibit diffusion” but that channeling it into hubs “is a way that we are really going to get to scale more rapidly” [96-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The benefit of centralized hubs for reducing fragmentation and speeding national rollout is highlighted in the same panel summary on scaling hubs [S3] and in the commentary on hub-based scaling versus scattered pilots [S24].
MAJOR DISCUSSION POINT
Reducing fragmentation
Argument 3
AI systems must be auditable and transparent, especially in high‑stakes health applications
EXPLANATION
Trevor argued that AI used in health must be auditable and provide clear reasoning, as black‑box recommendations are insufficient for clinical decision‑making. Transparency is needed for trust and accountability.
EVIDENCE
He stated that “a black box system that gives a health recommendation is almost never adequate” and that systems need to be auditable, allowing clinicians to trace why a recommendation was made [274-281].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for auditability and transparency in health AI is emphasized in the practical pathways discussion on responsible AI at scale [S32] and in the risk-control perspective on AI adoption [S28].
MAJOR DISCUSSION POINT
Auditability and transparency
Argument 4
Modular, interoperable infrastructure (e.g., OpenAgriNet) demonstrates how components can be combined for scale
EXPLANATION
Trevor highlighted OpenAgriNet as a modular, adaptable platform that brings together various components to provide personalized agricultural assistance, illustrating a model that could be replicated in health.
EVIDENCE
He described OpenAgriNet as “modular, the way that you can adapt it to the local circumstances” and praised its ability to deliver personalized information to smallholder farmers [185-187].
MAJOR DISCUSSION POINT
Modular infrastructure for scaling
Esther Dweck
5 arguments · 180 words per minute · 1938 words · 643 seconds
Argument 1
Procurement reform, digital infrastructure and data governance enable scaling
EXPLANATION
Esther argued that transforming procurement practices, strengthening digital infrastructure, and establishing robust data governance are essential for scaling AI within the public sector.
EVIDENCE
She detailed the need to change procurement to focus on outcomes rather than lowest price, highlighted Brazil’s digital ID platform (gov.br) and digital infrastructure, and stressed data governance reforms including a new decree and chief data officers [128-133][144-150].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Esther’s points on reforming procurement, strengthening digital infrastructure, and establishing data-governance frameworks are covered in the panel overview of governance and procurement challenges [S3].
MAJOR DISCUSSION POINT
Enabling environment for AI scaling
Argument 2
Outcome‑oriented, risk‑tolerant procurement and robust digital ID platforms are needed
EXPLANATION
She emphasized shifting procurement from a risk‑averse, price‑focused model to one that values outcomes and tolerates managed risk, while leveraging digital ID systems to personalize services and support AI deployment.
EVIDENCE
She explained that current procurement seeks lowest risk and price, which stifles innovation, and advocated for a policy-oriented approach; she also referenced Brazil’s digital ID platform (gov.br) as a foundation for AI-enabled personalized services [128-133][145-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift toward outcome-based, risk-tolerant procurement and the use of digital ID platforms for personalized services are discussed in the same governance panel featuring Esther [S3].
MAJOR DISCUSSION POINT
Procurement and digital identity for AI
Argument 3
Reform procurement to focus on outcomes, accept managed risk, and foster innovation culture
EXPLANATION
Esther described a shift in procurement mindset toward outcome‑based evaluation, acceptance of some risk, and collaboration with suppliers to build an innovation‑friendly culture within government.
EVIDENCE
She noted the move from process-oriented to policy-oriented procurement, the need to allow failure as part of innovation, and the importance of interacting with suppliers during procurement [128-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Esther’s call for outcome-focused procurement, managed risk acceptance, and an innovation-friendly culture appears in the panel summary on procurement reform and digital sovereignty [S3].
MAJOR DISCUSSION POINT
Innovation‑friendly procurement
Argument 4
Strengthen digital sovereignty through resident clouds, data localisation and chief data officers
EXPLANATION
Esther highlighted Brazil’s efforts to increase digital sovereignty by establishing resident cloud services, bringing data back to Brazil, and appointing chief data officers to oversee data use and security.
EVIDENCE
She discussed resident clouds owned by federal companies, the goal of bringing data back to Brazil, and the creation of chief data officer roles as part of a new data governance decree [290-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion of resident cloud services, data localisation, and the creation of chief data officer roles as part of Brazil’s digital sovereignty strategy is included in the governance and digital sovereignty segment of the panel [S3].
MAJOR DISCUSSION POINT
Digital sovereignty
AGREED WITH
Irina Ghose
DISAGREED WITH
Irina Ghose
Argument 5
Building capacity by training civil servants in digital and AI skills is essential
EXPLANATION
Esther stressed that a skilled civil service is crucial for state transformation, describing a training programme that targets managers, IT experts, data controllers, and regular staff to develop digital and AI competencies.
EVIDENCE
She outlined four training tracks for different civil-servant roles and emphasized the need to give them a “digital mind” to use AI in everyday work [221-227].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Esther’s emphasis on civil-service capacity building through multi-track training programmes is highlighted in the panel overview of capacity development for AI scaling [S3].
MAJOR DISCUSSION POINT
Capacity development for AI
Agreements
Agreement Points
Structured diffusion mechanisms (pathways, shared rails, hubs, protocols) accelerate AI scaling and reduce implementation time
Speakers: Nandan Nilekani, Shankar Maruwada, Irina Ghose, Trevor Mundeli
100 diffusion pathways goal · Pathways compress learning curves, cost and risk, making large‑scale adoption feasible · Pathways as shared rails for rapid replication · Introduce a Model Context Protocol (MCP) as a universal “adapter” for AI tools · Scaling hubs to aggregate fragmented pilots and provide funding
All four speakers stress that having pre-defined diffusion pathways, whether framed as shared rails, scaling hubs or a universal model context protocol, dramatically shortens deployment cycles and lowers cost and risk, enabling rapid, large-scale AI adoption. Nandan illustrates the time compression from nine months to three weeks [13-15]; Shankar describes pathways as shared rails that compress learning curves [44-47]; Irina proposes the MCP as a universal adapter to reuse AI components across applications [250-254]; Trevor outlines scaling hubs that aggregate pilots and channel funding to overcome fragmentation [84-99].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with calls for modular, interoperable standards that can be adapted across sectors to speed AI deployment, as advocated in global AI standards initiatives and demonstrated by Open AgriNet’s modular infrastructure model [S43][S44].
AI diffusion must be contextual, language‑localised and embedded in everyday workflows
Speakers: Irina Ghose, Shankar Maruwada
Contextual, workflow‑embedded diffusion is essential · AI must be contextual to local language, fit existing workflows, and be iteratively improved · Universal language and standards allow AI to plug into pathways across sectors
Irina emphasizes that AI must be delivered in the local language, fit users’ daily workflows and evolve iteratively [60-62]; Shankar adds that a universal language or protocol, akin to UPI for payments, is needed so AI can integrate seamlessly across sectors [246-250]. Both agree that localisation and workflow integration are prerequisites for successful diffusion.
Safety, auditability and transparency are critical when AI is applied in high‑stakes domains
Speakers: Trevor Mundeli, Shankar Maruwada
AI systems must be auditable and transparent, especially in health applications · Balancing rapid diffusion (100 pathways) with safety safeguards is critical where lives are at stake
Trevor argues that health AI must be auditable and provide clear reasoning to earn trust [274-281]; Shankar raises the tension between speed of diffusion and safety, asking where the line should be drawn for life-critical uses [265-267]. Both converge on the need for robust safety and audit mechanisms.
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on safety and transparency reflects the UN AI Security Council’s focus on algorithmic transparency and rigorous testing, as well as broader AI governance discussions stressing explainability and auditability [S54][S56][S55].
Robust data governance and digital sovereignty are foundational for AI scaling
Speakers: Esther Dweck, Irina Ghose
Strengthen digital sovereignty through resident clouds, data localisation and chief data officers · Introduce a Model Context Protocol (MCP) as a universal “adapter” for AI tools
Esther outlines Brazil’s push for digital sovereignty via resident clouds, data localisation and governance structures [290-304]; Irina’s MCP aims to make data AI-ready and interoperable across applications [250-254]. Both see strong data governance and sovereignty as essential enablers for AI diffusion.
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors policy debates on digital sovereignty and data governance, highlighted in the African Union Data Policy Framework and expert commentary on the political-economic challenges of maintaining control over national data assets [S46][S48][S50].
Similar Viewpoints
Both stress that AI must be delivered in local languages and integrated into existing workflows, and that a universal protocol (like MCP/UPI) is needed to enable seamless integration across sectors [60-62][246-250].
Speakers: Irina Ghose, Shankar Maruwada
Contextual, workflow‑embedded diffusion is essential · Universal language and standards allow AI to plug into pathways across sectors
Both highlight the necessity of robust, sovereign data infrastructures and interoperable standards to make data AI‑ready and support large‑scale diffusion [290-304][250-254].
Speakers: Esther Dweck, Irina Ghose
Strengthen digital sovereignty through resident clouds, data localisation and chief data officers · Introduce a Model Context Protocol (MCP) as a universal “adapter” for AI tools
Both agree that safety and auditability cannot be compromised in rapid AI diffusion, especially for high‑risk sectors like health [274-281][265-267].
Speakers: Trevor Mundeli, Shankar Maruwada
AI systems must be auditable and transparent, especially in health applications · Balancing rapid diffusion (100 pathways) with safety safeguards is critical where lives are at stake
Both describe diffusion pathways as shared infrastructure that dramatically reduces implementation time and risk, enabling faster scaling [13-15][44-47].
Speakers: Nandan Nilekani, Shankar Maruwada
Pathways compress learning curves, cost and risk, making large‑scale adoption feasible · Pathways as shared rails for rapid replication
Unexpected Consensus
Modular, interoperable infrastructure as a key scaling strategy across sectors
Speakers: Trevor Mundeli, Esther Dweck
Modular, interoperable infrastructure (e.g., OpenAgriNet) demonstrates how components can be combined for scale · Strengthen digital sovereignty through resident clouds, data localisation and chief data officers
While Trevor focuses on a modular, adaptable platform for agriculture (OpenAgriNet) and Esther on sovereign, resident cloud infrastructure for government services, both converge on the principle that modular, interoperable technical foundations are essential for scaling AI across diverse domains, a consensus that bridges private-sector pilots and national digital sovereignty strategies [185-187][290-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Recommendations for modular, interoperable infrastructure echo the development of technology-neutral, adaptable standards systems for AI growth and the successful modular public-infrastructure example of Open AgriNet [S43][S44][S45].
Overall Assessment

The panel shows strong convergence on four pillars: (1) the need for structured diffusion pathways or hubs to accelerate AI rollout; (2) the necessity of localisation, language support and workflow integration; (3) the imperative of safety, auditability and transparency in high‑risk applications; and (4) the foundational role of robust data governance and digital sovereignty. These agreements cut across public‑private, sectoral and national boundaries, indicating a shared vision for coordinated, safe and inclusive AI diffusion.

High consensus – most speakers, from government, foundations and industry, articulate compatible strategies, suggesting that future policy and technical work is likely to be coordinated around these shared principles, enhancing prospects for effective, inclusive AI deployment by 2030.

Differences
Different Viewpoints
Centralised scaling hubs vs distributed shared‑rail diffusion pathways
Speakers: Shankar Maruwada, Trevor Mundeli
Pathways as shared rails for rapid replication · Scaling hubs to aggregate fragmented pilots and provide funding
Shankar describes diffusion pathways as “shared rails that compress learning curves, cost and risk” and stresses that the infrastructure is not a platform but a common rail for all sectors [44-47]. Trevor proposes creating “scaling hubs” that act as aggregation points for many pilots, providing government-level funding and reducing fragmentation to accelerate national rollout [84-99]. The two approaches differ: Shankar favours a distributed, standards-based rail model, while Trevor advocates a more centralised hub model.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between centralised hubs and distributed pathways reflects the broader integration versus fragmentation debate in digital public infrastructure, as raised in the DNS Trust Horizon discussion [S45].
Procurement risk‑aversion versus outcome‑oriented, risk‑tolerant procurement for AI innovation
Speakers: Irina Ghose, Esther Dweck
AI diffusion requires all‑in commitment and iterative rollout embedded in existing workflows · Procurement reform, shifting from lowest‑price, lowest‑risk to outcome‑oriented, risk‑tolerant approaches
Irina stresses that diffusion needs “all in” commitment and that innovators must embed AI into daily workflows, implying a willingness to experiment and accept early-stage errors [56-62]. Esther argues that civil servants avoid innovation because “the auditing body will come after me” and that procurement must move from “lowest price” to a policy-oriented, outcome-focused mindset that tolerates managed risk [128-133][138-140]. The tension lies in how much risk civil servants and innovators should accept during diffusion.
Speed of scaling (100 pathways by 2030) versus safety and auditability in high‑stakes domains
Speakers: Shankar Maruwada, Trevor Mundeli
Balancing rapid diffusion (100 pathways) with safety safeguards is critical where lives are at stake · AI systems must be auditable and transparent, especially in health applications
Shankar raises the trade-off, asking “When lives are at stake, where do you draw the line between speed (100 pathways to 2030) and safety?” [265-267]. Trevor responds that while urgency is high, AI recommendations must be auditable and transparent, noting that “a black box system … is almost never adequate” and that clinicians need to trace why a recommendation was made [274-281]. The disagreement is over the acceptable balance between rapid deployment and rigorous safety controls.
Universal model‑context protocol (MCP) versus national digital sovereignty and data localisation
Speakers: Irina Ghose, Esther Dweck
Introduce a Model Context Protocol (MCP) as a universal “adapter” for AI tools · Strengthen digital sovereignty through resident clouds, data localisation and chief data officers
Irina proposes the Model Context Protocol, likening it to UPI, to allow AI tools to be built once and reused across applications without rewriting code [250-254]. Esther emphasizes Brazil’s push for digital sovereignty, describing resident clouds, bringing data back to Brazil, and appointing chief data officers as part of a new data-governance decree [290-304]. The two positions differ on openness: Irina pushes for a cross-border universal standard, while Esther stresses national control over data and infrastructure.
POLICY CONTEXT (KNOWLEDGE BASE)
This clash parallels policy discussions on cross-border data free flow versus localisation, noted in IGF 2023’s trust-focused data-free-flow framework and the AU’s nuanced stance on data localisation as a sovereignty tool [S41][S46][S47][S48].
Unexpected Differences
Irina’s emphasis on language localisation and universal protocol versus Esther’s focus on national data sovereignty
Speakers: Irina Ghose, Esther Dweck
AI must be contextual to local language, workflow‑embedded and iterative · Strengthen digital sovereignty through resident clouds, data localisation and chief data officers
While both discuss localisation, Irina sees multilingual support and a universal protocol (MCP) as a way to accelerate diffusion across borders, whereas Esther prioritises keeping data within national borders and building sovereign cloud capacity. The clash between cross‑border standardisation and national data sovereignty was not anticipated given the otherwise collaborative tone of the panel.
POLICY CONTEXT (KNOWLEDGE BASE)
The disagreement underscores the same sovereignty versus standardisation dilemma, with policy literature highlighting the need to balance national data control against interoperable protocols for AI diffusion [S46][S48][S50].
Overall Assessment

The panel shows strong consensus on the need for diffusion pathways and inclusive AI, but substantive disagreements emerge around the architecture for scaling (distributed rails vs central hubs), the degree of procurement risk tolerance, the balance between speed and safety, and the tension between universal standards and national digital sovereignty.

Moderate to high: While participants share common objectives, the divergent views on implementation mechanisms could impede coordinated action unless reconciled. The implications are that without alignment on scaling models, procurement reforms, safety standards, and data governance, the 100‑pathway target may face fragmentation, slower adoption, or regulatory friction.

Partial Agreements
All participants endorse the overarching aim of creating diffusion pathways to spread AI benefits at scale and agree that some form of enabling infrastructure (whether shared rails, hubs, or reforms) is needed. They differ on the precise mechanism, but share the goal of rapid, inclusive AI diffusion [15-16][44-47][60-62][84-99][128-133].
Speakers: Nandan Nilekani, Shankar Maruwada, Irina Ghose, Trevor Mundeli, Esther Dweck
Nandan Nilekani: 100 diffusion pathways goal. Shankar Maruwada: pathways compress learning curves, cost and risk, making large‑scale adoption feasible. Irina Ghose: contextual, workflow‑embedded diffusion is essential. Trevor Mundeli: scaling hubs to aggregate fragmented pilots and provide funding. Esther Dweck: procurement reform, digital infrastructure and data governance enable scaling.
Takeaways
Key takeaways
The concept of “diffusion pathways” is central: shared, reusable rails that compress learning curves, cost and risk, enabling rapid replication of AI solutions for public good.
A global coalition aims to create 100 diffusion pathways by 2030, involving governments, technology firms, foundations, and multilateral agencies (e.g., Anthropic, Google, the Gates Foundation, UNDP).
Successful scaling requires AI to be contextual (local language, domain‑specific data), embedded in existing workflows, and continuously iterated.
Fragmented pilots hinder scale; “scaling hubs” in India and Africa are proposed to aggregate pilots, provide funding, and act as centers of excellence.
Public‑sector scaling depends on reforms to procurement (outcome‑oriented, risk‑tolerant), robust digital infrastructure (digital IDs, service platforms), and strong data‑governance frameworks.
Safety and auditability are non‑negotiable for high‑stakes applications (health, agriculture); models must be transparent and auditable.
Interoperability standards such as the Model Context Protocol (MCP) are needed so AI components can plug into pathways across sectors and countries.
Building digital sovereignty (resident clouds, data localisation, chief data officers) and capacity‑building for civil servants are essential for sustainable adoption.
Political and economic challenges include managing wealth distribution from AI‑driven productivity and ensuring inclusive, equitable outcomes.
Resolutions and action items
Launch of a global coalition to develop 100 diffusion pathways by 2030.
Establishment of scaling hubs in India and several African nations (Rwanda, Nigeria, Senegal, Kenya) to fund and coordinate large‑scale roll‑outs.
Brazil’s INSPIRE (AI for Public Service with Innovation, Responsibility, and Ethics) program to create institutional arrangements, data‑sovereignty mechanisms, and civil‑servant training.
Announcement of a forthcoming Brazilian decree on data governance, mandating chief data officers in ministries.
Development and promotion of Anthropic’s Model Context Protocol (MCP) as a universal adapter for AI tools.
Commitment to train civil servants at multiple levels (managers, IT experts, data stewards, general staff) on digital and AI competencies.
Agreement to continue sharing best‑practice pathways (e.g., Maharashtra, Ethiopia, Amul) to accelerate future implementations.
Unresolved issues
How to precisely balance rapid diffusion (the 100‑pathway target) with rigorous safety and auditability standards, especially in health applications.
Specific details of outcome‑oriented procurement policies and how to institutionalise managed‑risk approaches across diverse government agencies.
Concrete steps to achieve full digital sovereignty for countries that currently rely on foreign cloud providers.
Mechanisms for ongoing monitoring and evaluation of diffusion pathways to ensure they remain inclusive and do not create new inequities.
Long‑term governance model for the global coalition: decision‑making processes, funding responsibilities, and accountability.
Suggested compromises
Adopt an outcome‑oriented, policy‑focused procurement model that tolerates managed risk rather than insisting on lowest‑price, lowest‑risk contracts.
Use scaling hubs as focal points for diffusion while still allowing decentralized, “random” diffusion to preserve innovation diversity.
Pursue incremental digital sovereignty (resident clouds, data localisation) rather than an all‑or‑nothing approach, acknowledging current dependencies.
Implement modular, interoperable standards (e.g., MCP) to allow different AI solutions to plug into existing pathways without forcing a single vendor or architecture.
Thought Provoking Comments
We went from nine months to three months to three weeks by learning from lived experience; we call these ‘pathways’ that let others reach the same point faster, aiming for 100 diffusion pathways by 2030 to spread positive AI use.
Introduces the concrete concept of ‘diffusion pathways’ and demonstrates how iterative learning dramatically accelerates AI deployment, framing the whole panel around a measurable global ambition.
Sets the agenda for the discussion, prompting other speakers to define what pathways mean in practice, leading to Shankar’s historical analogy, Irina’s criteria for diffusion, and Trevor’s scaling‑hub proposal.
Speaker: Nandan Nilekani
The crucial ingredient in past industrial revolutions was not better inventions but diffusion – the spread of know‑how, trust and institutional capability that lets societies adopt technology at scale.
Reframes the conversation from technology creation to systematic diffusion, linking historical lessons to AI and emphasizing the need for structured pathways.
Creates a turning point that shifts the panel from describing projects to discussing mechanisms of spread; it directly elicits Irina’s focus on contextualisation and Trevor’s scaling‑hub concept.
Speaker: Shankar Maruwada
AI deployment rarely fails because of model performance; it fails because of perceived complexity. For diffusion we need (1) local language context, (2) integration into existing workflows, and (3) an iterative, user‑centric approach.
Distills the practical barriers to scaling AI into three clear, actionable dimensions, moving the debate from high‑level ambition to on‑the‑ground implementation details.
Guides the subsequent dialogue toward concrete requirements—language support, workflow embedding, and iterative design—prompting Shankar’s UPI analogy and Esther’s procurement reforms.
Speaker: Irina Ghose
We are creating ‘scaling hubs’ in partnership with governments to aggregate fragmented pilots, provide funding, and act as centers of excellence that channel diffusion rather than letting it remain random and scattered.
Identifies fragmentation as a major barrier and proposes a concrete institutional solution, bridging the gap between pilot projects and national scale.
Leads the conversation to discuss how to organise diffusion pathways, influencing Esther’s remarks on institutional change and reinforcing Shankar’s point about the stress inherent in fixed pathways.
Speaker: Trevor Mundeli
In government procurement we must shift from lowest‑price, lowest‑risk buying to outcome‑oriented, policy‑focused procurement that accepts failure as part of innovation, while also strengthening digital infrastructure and data governance.
Highlights systemic bureaucratic obstacles and offers a transformative approach to public‑sector innovation, linking procurement, infrastructure, and governance.
Triggers a deeper examination of institutional barriers, prompting Shankar to ask about the hardest political/economic challenges and leading others to discuss safety, data sovereignty, and the need for new procurement mindsets.
Speaker: Esther Dweck
For technology to work at population scale it must become ‘boring’—invisible and taken for granted, like UPI for payments; only when AI is no longer seen as magic does true diffusion occur.
Uses a vivid metaphor to capture the end goal of diffusion, emphasizing user experience over technical novelty and setting a benchmark for AI adoption.
Shifts the tone from aspirational to pragmatic, inspiring Irina’s proposal of a universal ‘model context protocol’ and reinforcing the need for seamless integration discussed earlier.
Speaker: Shankar Maruwada
We’ve created a ‘model context protocol’ (MCP) – a universal language for AI models, analogous to UPI for payments, so developers can build once and plug into any downstream application without rewriting code.
Proposes a technical standard that could operationalise the diffusion pathways, turning the abstract idea of “rails” into a concrete interoperable protocol.
Extends the earlier UPI analogy, prompting discussion on standardisation across sectors and countries, and aligning with Trevor’s call for auditable, modular systems.
Speaker: Irina Ghose
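For readers unfamiliar with MCP, a minimal illustration of the "build once, plug in anywhere" idea (not from the session): MCP is an open protocol whose messages follow JSON-RPC 2.0, with a client discovering a server's tools via `tools/list` and invoking one via `tools/call`. The tool name `get_crop_advisory` and its arguments below are hypothetical, chosen to echo the panel's agriculture example.

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request, the wire format MCP messages use."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# A client first asks an MCP server which tools it exposes...
list_tools = jsonrpc_request(1, "tools/list")

# ...then invokes one. "get_crop_advisory" is a hypothetical tool name;
# any domain-specific tool (agriculture, health, payments) is called
# through the same envelope, which is the reuse property the panel
# compares to UPI for payments.
call_tool = jsonrpc_request(2, "tools/call", {
    "name": "get_crop_advisory",
    "arguments": {"district": "Pune", "crop": "soybean", "lang": "mr"},
})

print(json.dumps(call_tool, indent=2))
```

Because every tool call shares this envelope, a client built against one MCP server can talk to any other without rewriting its integration code, only the tool names and arguments change.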
AI systems, especially in health, must be auditable and transparent; a black‑box recommendation is never sufficient—users need to trace why a decision was made, similar to questioning a human clinician.
Emphasises safety and accountability, introducing a critical dimension to scaling AI in high‑stakes domains and linking technical design to trust.
Deepens the conversation on safety, leading to Esther’s remarks on data governance and digital sovereignty, and reinforcing the need for robust diffusion pathways that embed auditability.
Speaker: Trevor Mundeli
Overall Assessment

The discussion was driven forward by a series of pivotal insights that moved the panel from high‑level ambition to concrete mechanisms for scaling AI responsibly. Nandan’s diffusion‑pathway vision provided the overarching goal, while Shankar’s historical analogy reframed the challenge as one of systematic spread rather than invention. Irina’s three‑pronged diffusion criteria and Trevor’s scaling‑hub proposal supplied actionable levers, prompting Esther to expose the bureaucratic bottlenecks in procurement and data governance. The recurring UPI metaphor and Irina’s model‑context protocol anchored the abstract idea of pathways in tangible, interoperable standards. Finally, Trevor’s emphasis on auditable AI introduced the essential safety dimension, ensuring that speed does not eclipse trust. Collectively, these comments redirected the conversation toward institutional design, technical standardisation, and governance, shaping a nuanced roadmap for achieving the 100 diffusion pathways by 2030.

Follow-up Questions
How can progress toward the goal of 100 diffusion pathways by 2030 be measured and tracked?
Nandan announced the 100 diffusion pathways target and a global coalition, but did not specify metrics or monitoring mechanisms, indicating a need for research on measurement frameworks.
Speaker: Nandan Nilekani
What are the most effective methods to assess the return on investment (ROI) of language localization for AI models in diverse Indian languages?
Irina emphasized contextual language and ROI when adding new Indian languages, suggesting further study on how to quantify benefits of language support.
Speaker: Irina Ghose
How can the modular infrastructure of Open AgriNet be adapted for personal health assistants in low‑ and middle‑income countries?
Trevor expressed interest in replicating the agricultural AI model for health, indicating a research gap in transferring the approach to the health sector.
Speaker: Trevor Mundeli
What privacy‑preserving, verifiable age‑verification mechanisms can be deployed at scale to protect children online while respecting digital sovereignty?
Esther described Brazil’s new age‑verification law and the challenge of balancing privacy with protection, highlighting a need for technical solutions and policy research.
Speaker: Esther Dweck
What standards and protocols are needed for a universal "model context protocol" to enable seamless AI integration across sectors and countries?
Irina introduced the Model Context Protocol (MCP) as a universal adapter, but its design, adoption, and governance require further investigation.
Speaker: Irina Ghose
What frameworks and tools are required to make AI recommendations auditable and transparent, especially in high‑stakes health applications?
Trevor stressed the necessity of auditability for AI health recommendations, pointing to a research need for robust auditing frameworks.
Speaker: Trevor Mundeli
How effective are scaling hubs in aggregating fragmented AI pilots and accelerating national‑scale deployment, and what best practices can be identified?
Trevor described scaling hubs as a solution to fragmentation but did not provide evidence of impact, suggesting a need to study their efficacy.
Speaker: Trevor Mundeli
How can governments balance rapid AI deployment (speed) with safety and ethical safeguards in life‑critical domains?
Trevor highlighted the tension between speed of diffusion pathways and safety in health, indicating a need for policy and risk‑management research.
Speaker: Trevor Mundeli
How can digital sovereignty be increased while maintaining interoperability with global AI services, and what governance models support this?
Esther discussed Brazil’s push for digital sovereignty and the challenges of data location and control, calling for research on sovereign yet interoperable architectures.
Speaker: Esther Dweck
What capacity‑building approaches are most effective for upskilling civil servants in AI and digital mindsets, especially those with long tenure?
Esther mentioned training programs for civil servants but did not detail optimal methods, indicating a need for research on effective public‑sector AI education.
Speaker: Esther Dweck
How can public‑sector procurement processes be reformed to encourage AI innovation while managing risk and accountability?
Esther highlighted the current risk‑averse procurement culture and the need for outcome‑oriented policies, suggesting further study on procurement reform.
Speaker: Esther Dweck
What are the key components of a "digital public intelligence" system that evolves from digital public infrastructure, and how can its impact be evaluated?
Shankar projected a future shift from DPI to digital public intelligence without defining its architecture or metrics, indicating a research agenda.
Speaker: Shankar Maruwada
How can AI‑driven platforms like Blue Dot be designed to create inclusive employment opportunities across diverse economies?
Nandan referenced the Blue Dot job platform as part of diffusion pathways but did not elaborate on design or impact, pointing to a need for study on AI‑enabled job creation.
Speaker: Nandan Nilekani

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.