Building Population-Scale Digital Public Infrastructure for AI

20 Feb 2026 11:00h - 12:00h

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel discussed how to scale AI for public good through “diffusion pathways,” a framework for rapidly spreading know-how, trust and institutional capability rather than just technology awareness [1-3][40-46]. Nandan Nilekani illustrated the speed gains achievable with this approach, noting that a farmer-support app took nine months to launch in Maharashtra, three months in Ethiopia, and three weeks for an Amul dairy solution, showing a dramatic reduction in rollout time [4-12]. He announced an ambition to create 100 diffusion pathways by 2030, backed by a global coalition that includes Anthropic, Google, the Gates Foundation and UNDP, and open to any participant [14-24][26-29]. Irina Ghose emphasized that diffusion succeeds when AI is delivered in the local language, fits seamlessly into users’ daily workflows, and is continuously iterated, citing tools like “co-work” that enable non-technical users to adopt AI [60-62][68-73]. She also introduced Anthropic’s Model Context Protocol (MCP) as a universal “language” for AI components, likening it to UPI’s role in payments and enabling one-time development for multi-sector deployment [250-254]. Trevor Mundeli warned that fragmented pilots hinder scaling and proposed foundation-funded “scaling hubs”, run in partnership with governments in India and Africa, to aggregate efforts, reduce duplication and accelerate country-level impact [84-99]. Esther Dweck described Brazil’s reforms through its Ministry of Management and Innovation, focusing on outcome-oriented procurement, strengthening digital infrastructure and a national digital ID platform (gov.br) to support AI services [128-136][137-140][144-148]. She detailed the INSPIRE program, which creates new institutional arrangements, promotes data sovereignty, builds AI platforms for education, health and agriculture, and runs a four-track training scheme for civil servants to build digital and AI capacity [199-207][208-227]. 
On safety, Trevor stressed the need for auditable AI, especially in health, noting Anthropic’s work on model transparency that lets clinicians trace recommendations [274-282]. Esther added that political-economic challenges such as digital sovereignty and age-verification for online safety are being addressed through legislation and local data control initiatives in Brazil [286-314]. The discussion concluded that robust digital public infrastructure (DPI) is essential for scaling AI, and by 2030 it may evolve into “digital public intelligence,” reflecting the collective confidence in achieving safe, inclusive AI impact at scale [315-317][30-32][15].


Keypoints


Major discussion points


Diffusion pathways as a fast-track to AI impact – Nandan Nilekani described how a farmer-app that took nine months to roll out in Maharashtra was launched in Ethiopia in three months and for dairy farmers in three weeks, illustrating the speed gains from reusable “pathways” and announcing an ambition to create 100 diffusion pathways by 2030 with a global coalition of partners such as Anthropic, Google, the Gates Foundation and UNDP [4-12][15-20][22-27].


Key ingredients for AI diffusion at scale – Irina Ghose emphasized that successful diffusion requires (1) local language/context, (2) embedding AI into existing daily workflows, and (3) an iterative, “AI-first” mindset that continuously engages users, citing examples from Indian language support and low-code tools [60-62][64-72].


Barriers to scaling pilots and the need for coordinated hubs – Trevor Mundeli highlighted the problem of fragmented pilots across ministries and sectors, proposing “scaling hubs” in partnership with governments (e.g., Rwanda, Nigeria, Senegal) to aggregate funding, expertise, and DPI infrastructure, thereby turning pilots into sustainable, population-scale services [88-99][100-103].


Public-sector reforms to enable AI adoption – Esther Dweck outlined three systemic changes needed within the state: (1) innovation-oriented procurement that tolerates risk and failure, (2) robust digital infrastructure (e.g., national digital ID and service platform), and (3) data-governance reforms (chief data officers, sovereign data policies) to break silos and support AI-driven public services [124-138][144-151][158-162].


Technical standards for plug-and-play AI – Irina later introduced the Model Context Protocol (MCP) as a universal “adapter” that lets AI models be built once and reused across domains (agriculture, health, etc.), analogous to how UPI standardized digital payments [250-254].


Overall purpose / goal of the discussion


The panel aimed to chart a collaborative roadmap for building, publishing, and scaling digital public infrastructure (DPI) for AI worldwide. By sharing concrete rollout examples, identifying systemic obstacles, and proposing both governance reforms and technical standards, the participants sought to mobilize governments, foundations, and technology firms around the “100 diffusion pathways by 2030” vision, ensuring AI is deployed safely, inclusively, and at population scale.


Overall tone and its evolution


The conversation began with an optimistic, celebratory tone, highlighting rapid successes and ambitious targets. As the dialogue progressed, it shifted to a pragmatic, problem-solving tone, acknowledging fragmentation, procurement hurdles, and safety concerns. By the end, the tone returned to hopeful and forward-looking, emphasizing concrete solutions (scaling hubs, MCP, policy reforms) and a collective call to action. Throughout, the atmosphere remained collaborative and constructive.


Speakers

Nandan Nilekani


Area of expertise: Digital public infrastructure, AI for agriculture and public good


Role/Title: Co-founder and Chairman of Infosys Technologies Limited (as noted in external sources) [S13][S14]


Speaker 1


Area of expertise: Event hosting/moderation (no specific domain)


Role/Title: Event host / moderator introducing the panel [S4][S6]


Shankar Maruwada


Area of expertise: AI diffusion pathways, public policy, panel moderation


Role/Title: Panel moderator


Irina Ghose


Area of expertise: AI model development, responsible AI, language localization


Role/Title: Managing Director, Anthropic India [S16][S17]


Trevor Mundeli


Area of expertise: Global health, AI scaling, philanthropic funding


Role/Title: President, Bill & Melinda Gates Foundation (global health focus) [S10]


Esther Dweck


Area of expertise: Public sector innovation, digital sovereignty, AI governance


Role/Title: Minister of Management and Innovation in Public Services, Brazil [S1][S2]


Additional speakers:


Om Birla – Speaker of Parliament of India (Chief Guest)


Martin Chungong – Secretary General, Inter-Parliamentary Union (IPU)


Laszlo Z – Deputy Speaker, Parliament of Hungary


Dr. Chinmay Pandya – Representative, All World Gayatri Parivar


Ms. Jimena – (Affiliation not specified in transcript)


Dario Amodei – CEO, Anthropic (referenced in discussion)


Full session report: comprehensive analysis and detailed insights

The session opened with Nandan Nilekani outlining a practical illustration of how “diffusion pathways” can accelerate AI-driven public services. He described a farmer-support app that required nine months to launch in Maharashtra, was replicated in Ethiopia in three months, and then adapted for dairy farmers by Amul in just three weeks [4-12]. From this experience he argued that lived implementation dramatically shortens rollout times and coined the term “pathways” for the repeatable routes that enable others to reach the same point more quickly [13-15]. He announced an ambition to develop 100 diffusion pathways worldwide by 2030, backed by a newly formed global coalition that includes Anthropic, Google, the Gates Foundation, UNDP and other partners, and invited any organisation to join [19-27][28-32]. Nandan also referenced “Blue Dot”, an initiative aimed at creating job opportunities through AI-enabled platforms [??-??], and recalled the earlier “50-in-5” DPI goal (50 countries in five years) as a benchmark for the new 100-pathway ambition, underscoring continuity in scaling objectives [??-??].


Shankar Maruwada expanded the metaphor, likening diffusion to the spread of know-how, trust and institutional capability rather than mere awareness [40-46]. He described pathways as shared “rails” that compress learning curves, lower costs and reduce risk, thereby allowing AI to be used safely across societies [44-47]. He later noted that, like the Unified Payments Interface (UPI), technology must become “boring” and invisible to users for true diffusion to occur [247-254].


Irina Ghose identified three non-technical prerequisites for AI to move from pilot to population scale: localisation to the user’s language, seamless embedding into existing daily workflows, and an iterative “AI-first” mindset that keeps the technology relevant [60-62]. She illustrated how low-code tools such as Anthropic’s Co-Work enable non-technical users (teachers, health workers, small-business owners) to adopt AI without writing code [64-73]. To further reduce friction, she introduced the Model Context Protocol (MCP), a universal “adapter” and “language” that lets AI models be built once and then plugged into multiple domains, accessing tools and data across sectors without bespoke re-engineering, much as UPI standardised digital payments [250-254].
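The build-once, plug-in-anywhere idea attributed to MCP can be sketched as a toy tool registry: one shared interface through which any sector exposes its tools. To be clear, this is only an illustrative sketch of the adapter pattern the panel describes, not the real Model Context Protocol SDK; every class, tool name and handler below is hypothetical.

```python
# Illustrative sketch only: a toy "universal adapter" registry in plain Python,
# showing the build-once, plug-in-anywhere idea the panel attributes to MCP.
# This is NOT the real Model Context Protocol; all names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Tool:
    name: str          # e.g. "crop_price" or "clinic_lookup"
    description: str   # what a model would see when deciding to call the tool
    handler: Callable[..., str]

class ToolRegistry:
    """One registry interface; any sector (agriculture, health, ...) plugs in."""
    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def call(self, name: str, **kwargs) -> str:
        # A real protocol would also handle discovery, auth and transport;
        # here we only dispatch by name to show the shared interface.
        return self._tools[name].handler(**kwargs)

# Two "domains" exposing tools through the same interface:
registry = ToolRegistry()
registry.register(Tool("crop_price", "Latest mandi price for a crop",
                       lambda crop: f"price({crop})"))
registry.register(Tool("clinic_lookup", "Nearest primary health clinic",
                       lambda district: f"clinic({district})"))

print(registry.call("crop_price", crop="wheat"))
print(registry.call("clinic_lookup", district="Coimbatore"))
```

The point of the sketch is the UPI analogy: an application built against the registry interface once can reach agriculture, health or any other domain that registers tools, without bespoke integration per sector.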


Trevor Mundeli highlighted a systemic obstacle: the proliferation of fragmented pilots across ministries and funders, which hampers scaling [88-99]. To counter this, he proposed foundation-funded “scaling hubs”, created in partnership with governments in countries such as Rwanda, Nigeria, Senegal and Kenya, that would aggregate funding, expertise and DPI (digital public infrastructure), acting as centres of excellence that channel diffusion toward national-level impact [84-99][100-103]. He argued that without such hubs, the “pilotitis” phenomenon would persist, preventing sustainable, population-scale outcomes.


Esther Dweck described Brazil’s parallel reforms aimed at creating the institutional backbone required for AI diffusion. She called for a shift in public-sector procurement from a focus on lowest price and risk to an outcome-oriented, failure-tolerant approach that encourages innovation and involves suppliers [128-138]. She also stressed the importance of robust digital infrastructure, specifically a national digital ID and the gov.br service platform, as the foundation for AI-enabled public services [144-148]. Complementary data-governance measures, such as appointing chief data officers, breaking data silos and enacting a new decree on data governance, were presented as essential for trustworthy AI deployment [150-162]. Through the INSPIRE programme, Brazil is creating new institutional arrangements, promoting data sovereignty, and running a four-track training scheme for civil servants to build AI and digital skills [199-227].


Safety and auditability emerged as a counterweight to the drive for speed. Trevor warned that high-stakes applications, especially in health, require transparent, auditable systems; black-box recommendations are insufficient for clinicians who need to trace the reasoning behind a suggestion [274-282]. He praised Anthropic’s research on model interpretability and suggested that India’s DPI stack could serve as a cautious testbed for safe AI introduction [267-273].


Esther added a political-economic dimension, noting Brazil’s pursuit of digital sovereignty through data localisation, resident clouds and supplier negotiations [286-304]. She cited recent legislation mandating age verification for online services, explaining how the government is seeking privacy-preserving verification methods that protect children without invasive surveillance [308-314].


In concluding remarks, Shankar Maruwada summed up the vision: by 2030 the world should move from “digital public infrastructure” to “digital public intelligence”, reflecting a mature ecosystem where AI is embedded, safe and universally accessible [315-317].


Collectively, the panel underscored that building open, safe, and locally adapted diffusion pathways, supported by institutional reforms, standardised protocols such as MCP, and responsible governance, is essential to realise the 100-pathway AI ambition by 2030 [315-317].


Session transcript: complete transcript of the session
Nandan Nilekani

bought which farmers use and millions of farmers today, 2.5 million farmers have downloaded this app. And this was built to make sure that farmers have access to the best information about access to prices, access to weather information and so on. And it’s very sophisticated. It took nine months to get this going in Maharashtra. But we learned a lot about how to do these things. And the next implementation was done in Ethiopia. So in Africa, and Ethiopia did the same thing in three months. So essentially what took us nine months the first time around took us three months. And recently, at the request of the Prime Minister, Amul implemented the whole thing. And Amul implemented it for cows and bought for dairy farmers to understand about the cows and whether they’re lactating or whether they’re, you know, milk and so on.

And that was done in three weeks. So I think you went from nine months to three months to three weeks. So what is the message in that is that if you get the lived experience of implementing these kind of systems for public good, you can actually dramatically reduce the time in which you can do that. And we call these ways of reaching the goal faster, we call them as pathways, because once you have a pathway, then you can get, somebody else can get to the same point quicker. And just like we had this notion that we’ll have 50 in five, 50 countries in five years, we are also now setting an ambitious goal for doing 100 diffusion pathways by 2030.

In other words, by 2030, all of us together across the world will develop these pathways to diffuse the use of AI in a positive way to help farmers, improve the life of young kids, allow people to get jobs through something called Blue Dot. There are so many things going on, but all of them are designed to be effective, to improve and make people’s lives better, to meet their aspirations in a very inclusive way so that everybody is in, nobody is left out. And so we announced a partnership. We announced a coalition of this, of 100 diffusion pathways by 2030. We announced that yesterday or day before yesterday. And we have a global coalition. Anthropic is there. Google is there.

Gates Foundation is there. UNDP is there. A whole host of people are there. And it’s a very open, it’s a big tent. Anybody can join the coalition. But our goal is all of us work together to very, in a focused manner, develop these pathways of diffusion of different kinds of positive AI use cases and then actually make it happen in countries around the world. So just like 50-in-5 was a DPI goal, 100 diffusion pathways by 2030 is the AI goal we have. And we are confident that all of us collectively can get there. So I think this is important. I think it’s strategic for the world that we show the good use of AI, and it’s strategic that all of us work together to do that.

Thank you very much.

Speaker 1

Thank you so much, Mr. Nandan. At this point, I would love to invite our panelists up to the stage. We’ll start by taking a quick group photograph together and then begin the discussion. So let me invite Minister Esther Dweck, Mr. Trevor Mundeli, Ms. Irina Ghose, and Mr. Shankar Maruwada, accompanied by Nandan, to be on the stage for a quick group photograph. Thank you. Let me now hand it over to Shankar Maruwada, who will moderate the next panel.

Shankar Maruwada

Good afternoon. We have an exciting panel discussion ahead. Let me start off with where Nandan stopped. Hundred pathways. What are these pathways? These are diffusion pathways to AI impact, safely and at scale. Let me provide a bit of background. France invented better than Britain in the first industrial revolution, yet Britain won. Britain in turn out-invented the US in steel, Germany out-invented the US in chemistry, yet it’s the US that won the second industrial revolution. What was the crucial thing? It was not better invention or even innovation. The missing ingredient was diffusion, which the United States of America did much better, diffusing the benefits and the impact of this technology throughout the economy and the society. When we say diffusion, we don’t mean awareness or access. Diffusion, as Nandan described, is the spread of know-how, trust and institutional capability that allows organizations to adopt AI safely and sustainably. As he explained, Maharashtra was the pioneer to do this in India. It’s like Sir Edmund Hillary climbing Mount Everest for the first time: he inspires, he creates a pathway for others to follow. And it would be rather stupid if, after he came back, he said, I am not sharing this with others; the pathway I created, I have removed it, so now you guys find your own pathway. The societies that create such pathways allow a whole lot of others to prosper, to make progress, to create impact inclusively and equitably. That is what Nandan meant when he talked about diffusion, hundred pathways. These are the hundred diffusion pathways across sectors, countries, continents. Some may be led by proprietary models, some may be led by sovereign efforts, some may not be; it may differ. It’s the choice of the AI adopter to decide which pathway works best for them.

So the diffusion infrastructure we are talking about creating isn’t a platform, app or model. It’s shared rails that compress learning curves, cost and risk, so that AI can be used by all of society, for all of humanity. With that, I would like to begin the panel discussion. Irina, from the model builder’s perspective, what needs to be true for AI to be deployable at population scale? Not just impressive pilots, especially in high-stakes public systems. What needs to happen?

Irina Ghose

Thank you so much, Shankar. And absolutely a pleasure and honor to be here with all of you. Thank you so much. The way I think about it is AI deployment would seldom, if ever, have any roadblocks because of a complexity in the model or the performance. The only reason it fails to gain scale is because the perception in our mind about the complexity. And one of the things that we really feel is that you have to be all in, first yourself, diffuse it to people around you to make it happen. Now, if you think about it, in a pilot, you’ve got experts doing it, you’ve got guardrails, you’ve got the intensity of people, and you’ve got a select group.

Now, when that kind of goes and spreads out, you’ve got a teacher in Bihar kind of implementing it, you’ve got a health worker in Coimbatore, you’ve got a small business leader in Indore doing it, who are not into ML, but for them, AI will start having significance when it stops being a scientific tool to something which is as intuitive for them. So three things which come into play. The first one is that for diffusion, it needs to be contextual to the local language that you speak. Second, it needs to be in the workflow of what you’re doing every day and you don’t need to do net new things. And the third is to be, you have to be iterative and be at it to make it happen.

And I’ll give you a small example as to how diffusion is happening. First of all, Shankar, really honored to have worked with EkStep to make it diffused across so many realms of life. And at Anthropic also we said that it’s not a technology for the sake of the technology, only in the hands of developers and builders. We found that India happens to be the second largest user base of Claude outside the US. So a big round of applause to all of us out here for making that happen. And what we also felt is that when we are building tools, one of the tools you might have heard of is Co-Work, which earlier used to be done a lot by developers.

But now, people who are information workers or who are just thinking as to how to solve things. The idea is that you do not have to develop code, read a lot of intense things. You can make the tool work for itself. So in my mind, diffusion really means, first, how do I think that everything that I do, I have to be AI first. Second, the ecosystem being in India around myself, I enthuse everybody. And third, how am I giving back to everybody in the last mile to make it happen?

Shankar Maruwada

Fantastic. One of the things I liked about what Anthropic CEO Dario Amodei said is: very soon, imagine a country with a whole bunch of geniuses living in data centers. What will that country do? Think about it. But till we reach there, and Dario says in two, three years, but till we reach there, Trevor, as president of the Gates Foundation looking at global health, you are dealing with a situation where you’ve seen a whole bunch of AI pilots, and not too many of them have scaled. From your experience, what separates pilots from systems that have scaled and become institutional? What separates an experiment from a scaled, institutional, sustainable impact?

Trevor Mundeli

Thank you, Shankar. And thank you for the invitation to be on this good panel. And also for the overview you gave me a few days ago of the very good work you’re doing at EkStep. I learned about Open AgriNet and where that has made progress. But on this issue of scaling of AI, I had an opportunity this morning to sit down with the heads of entities which we call scaling hubs. There are two of them here in India, and there are three, soon to be four, in Africa. And there’s also a pan-African venture called Smart Africa. And you might say, well, what are these scaling hubs? So the idea is that we would support a partnership with the governments, now in Rwanda, Nigeria, Senegal, and soon to be Kenya, wherein we place funding that the government can use to take the pilots that are out there and to really push them to large scale.

And why would we need a hub like this to do that? Well, one of the big barriers that we are currently seeing is the fragmentation that is occurring out there in terms of many, many ventures, some that we fund, other funders, everything with very good intent. Let’s do a small pilot. Let’s quickly do something over here. Thousands of them occurring out there. You take it at a government level. They have people approaching the Ministry of Agriculture, the Ministry of Education, the Ministry of Health, Ministry of Finance. all of them with different groups and on the DPI front, all of them trying to put in place the necessary DPI infrastructure to support their pilots. And now this fragmentation which is occurring over there, which I think is a big inhibitor of scaling to real population scale that we need.

So we are going to invest in these hubs that can be points of aggregation. We don’t want to inhibit diffusion. People have the idea of diffusion as a more random process which goes anywhere, and there’s something good about that. But if we can channel the diffusion into these centers of excellence, I think at the country level, the feedback that we’ve had from the governments is that that is a way that we are really going to get to scale more rapidly. Thank you.

Shankar Maruwada

Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathways. Diffusion by definition is everywhere, right? Pathways by definition are fixed. So it’s how do you spread a technology in certain fixed pathways towards certain impact. It is indeed a stress. I believe that stress needs to be there, because we are talking of the stress of safe AI impact at scale. But it is indeed a challenge, and together we have to solve it very quickly. I want to talk a bit about Minister Esther Dweck’s ministry, MGI, or the Ministry of Management and Innovation. Isn’t that a cool concept? The government of Brazil has a minister and a ministry looking after the idea of innovation and management.

They are collaborating very closely with India on a range of issues, and it’s my honor, Your Excellency, to have you here. Minister, I want to ask you a question. Scale efforts, diffusion, a lot of times fail inside government, not because of technology, but because of procurement, process change and accountability. What has to change inside the state for AI to move from pilots to durable public services?

Esther Dweck

Thank you, Shankar. Thank you for inviting me and also for the partnership that we have with India. And Brazil is looking for this partnership with India because of scale. If anything can be scaled up in India, it can be in Brazil because compared to India, we are not such a big country. But compared to many other countries, very large. So for us, very important, this partnership. But when you talk about the problem inside the state, our ministry was created. The whole name is Ministry of Management and Innovation in Public Service. So we are focusing on innovation inside the public services. And we created a special secretary for state transformation because we saw that the state had to be transformed in order to actually be able to have innovation.

Because if we stand with the same way of doing procurement, actually we won’t be able to do it. So we think that, in terms of AI, we need to transform the state in three main areas. The first one is procurement, for sure. Any kind of innovation procurement needs to be changed. Then also the infrastructure, especially the digital infrastructure, and of course the governance. And when I talk about the procurement process, usually people are looking for the lowest price, lowest risk, and usually civil servants are very afraid of doing procurement because the auditing bodies are trying to look if they’re doing something wrong. So they usually try to go for the lowest risk possible.

And this is what prevents innovation inside the government, especially because innovation comes with errors. We know that any innovation might come to error. And if the civil servant cannot make any mistakes, then we never innovate. So one of the things that we found out when we’re trying to ask for how to do innovation procurement in the government, the first thing people say, I’m afraid of doing any mistakes, then the auditing body will come after me and then I won’t be able to be a civil servant. So what have we done is to change the mindset of the procurement process. Instead of more process -oriented, we are looking for a more policy -oriented and looking at the outcomes and not only the lowest price thing.

And with many other ministries, we are discussing how to actually build that culture of innovation procurement with this idea that it must fail. And you can also interact with the one you’re buying off. Because, of course, you’re buying something that doesn’t exist. How do you explain to them what you need? So there are a lot of things that you have to change in terms of procurement in order to actually be able to do AI. And, of course, the second thing is the digital infrastructure. As, of course, as Nandan has said before, Brazil, since 2023, when we came here for the G20 in India, we brought this idea of DPI to Brazil very… as something very strong.

Thank you. And we already know that we had something that could be called DPI, but we didn’t know the concept before. And one of the things that was very important for us was our digital ID and our platform for services, a digital platform for services, both called gov.br. And based on this platform, we are discussing now how to optimize, but also how to have more personalized services, knowing the people. If you know the citizen, you will be able to provide them specialized service, and we’re using AI to do this, how to actually specialize service, what the people actually need. So I think using this, having a good DPI infrastructure, especially in terms of identification, and being able also, of course, to have better data governance.

That’s the third thing I would like to say: the governance inside the state. When we launched our plan for AI, and this morning we had a session on the Brazilian AI plan, the first thing the president said is that we need our database. He said we need the Brazilian database. We cannot have silos anymore. We cannot have this minister saying, no, this is my data, no one can access this data. So we have to do it, of course, preserving privacy, in a secure way. So we discussed all the data governance. We’re about to launch a new decree on data governance, having every ministry have a chief data officer, someone who actually knows the data, knows how to use the data.

So we are actually looking at these things in order for the state to be able to innovate with this AI. Thank you. That’s it. Thank you.

Shankar Maruwada

Wonderful. Thank you. Irina, you’ve been in the IT space for three decades. You’ve seen the Internet boom and bust, and now you’re seeing AI. From your vast experience, what is the most common failure mode when AI moves from pilots to everyday workflows? And what kind of safety infrastructure actually prevents it?

Irina Ghose

Yeah, I think one of the things that we have to remember is that the failure never happens with a big bang. It just slowly dies, because people gradually reduce the level of interaction they have, and you suddenly realize that it’s not relevant anymore. So what really needs to happen is that you need to keep it in a way that people use it daily, and use it in the way that is contextual for each of them. For example, one of the reasons why it might fail is because the data sets are speaking across to a country of a different nature, which is setting benchmarks in banking and financial systems, which is not the same where agriculture is the biggest thing that we require. Hence collecting data for Indian languages, nuancing it by, say, legal, by agriculture, by what people are speaking in that dialect, in that language, this is very critical. So if I want to look at three things that need to happen: first of all, keep it contextual to the domain, the micro-domain in which it is required. At Anthropic we have worked closely to ensure that we now have Indic language availability for 10 Indian languages, from Hindi to Malayalam to Gujarati to Urdu, and it’s available in the latest models and is incrementally improving day by day. And the last part I would say is ensuring that whatever you are doing, the ROI that we look at should be: if I invest in a language, say Bengali, how many net new use cases have been opened up because of that, and how many more people have got the benefit of that. And I think the work that, say, we are doing with EkStep, and thanks to the fields employed, education, healthcare, everything, that’s the litmus test that we should be measuring ourselves on.

Shankar Maruwada

I want to ask a question to the audience, by raising hands: how many of you use UPI? Keep your hands up if you know how UPI works, what’s the protocol behind it, what’s the technology behind it. Hands are steadily coming down. This is my point: we don’t care about technology as long as it works. For something to work at population scale, technology has to be boring, technology has to be invisible. Till the time it is not, it has not diffused; it is just some magic mystery thing that we all are stuck with figuring out what to do with. It’s a long journey from technology as magic to technology as normal, boring. In fact, this wise old man once told me: when you stop thinking of something as technology, that’s when it has diffused. 500 years ago, this was magical ocular technology.

It allowed someone to see. Now we don't think of it as technology. A day will come when we don't think of AI as technology. That is the day we can say AI has diffused through all of society. We have some way to go for that. Trevor, when you hear of things like Open AgriNet, some exciting work happening, what makes you feel it is infrastructure versus yet another project going down the path of pilotitis, death by pilots?

Trevor Mundeli

Well, I do look a little bit with envy at Open AgriNet. Having looked across the work the foundation does in agriculture and in health, traditionally the narrative has been how fortunate those health folks are, because there's such huge funding into the health areas, such huge investment in research, in genomics, in human health, and much less in plant genomics, which is admittedly potentially more complex, and in the clinical-trial infrastructures for developing new products, on the human health side versus the agriculture side. But now we come to AI, and I have to say, I look at Open AgriNet and I think the agriculture community is ahead of human health in terms of implementing a system that is personally useful to a farmer, a smallholder farmer, for instance: being able to get the information they need, being able to determine what crop disease they have to deal with, or a disease in their cattle, what the weather is going to be, and how they can maximize the finances of their small farm.

All of these types of things I would love to see in the health space: a personal health assistant. In low- and middle-income countries, so many people are not very close to a tertiary hospital, and they may be 10 or 20 miles even from a primary health care clinic. Can we not provide them with a system that can personally give them the information they need, in a safe way? I think Open AgriNet really puts those components of infrastructure together. The way that it's modular, the way you can adapt it to local circumstances, is in many ways exactly what we need on the personal health side of the picture. So I do have some envy, but I hope we can duplicate it on the health side.

Thank you.

Shankar Maruwada

Thank you, Trevor. Open AgriNet is just a group of organizations coming together, collaborating, as Trevor said, each bringing in one piece of the puzzle so that together we can create those diffusion pathways. And as Nandan said, that is what allows us to take something from Maharashtra, which took nine months, to Ethiopia in three months, and back to India in three weeks: from agriculture to livestock, from India to Ethiopia, from Asia to Africa and back. That is the exciting possibility of the journey India has been on for the last 15 years, what we call DPI. The thing about DPI is that when you start with a strong use case in mind, as Irina and others have said, you harness technology, so technology becomes a good slave to a very powerful cause.

Then you take advantage of rapidly evolving technology. Minister Dweck, if you designed a national diffusion pathway for one public service, what would you prioritize first: institutions, incentives, data readiness or governance?

Esther Dweck

Well, it's difficult to choose only one thing, I guess. From a management perspective, you're always looking for some kind of systemic approach, trying to look at all these things together. And actually, we recently launched an R&D program for AI in Brazil. It's called INSPIRE in English; in Portuguese the same acronym means both "breathe" and "inspire". It stands for AI for Public Service with Innovation, Responsibility, and Ethics, and it has this systemic approach inside it. The first thing is that we created this new institutional arrangement. It's not entirely new, but in this R&D project we have the government, of course, some state-owned companies, some private companies, and our innovation ecosystem in Brazil, all brought together in order to help the government build new AI platforms.

Because although we're already using AI in Brazil, we saw that we lack technological expertise, and we lack financial support as well. So we're trying to create this platform where we can offer many government bodies different solutions that can be used in many different areas, as you said, and as I was saying before. The first thing we are discussing is having more sovereignty over the data and how to use it better, but also making the data ready to be used. One thing I was explaining before is using AI to help improve our data sets, so it's going both ways.

Another thing is the governance perspective. Of course, as I mentioned, we're creating these shared tools and common practices and trying to share how we do things. Specifically in this project, we're creating this generative AI platform and trying to apply it to different solutions. Recently, at the end of last year, we had the university enrollment exam for people finishing high school. So we created a complete solution for them to know, when they're finishing school, what they're going to do. Are they going to the job market? Are they going to enroll in university? How do they apply? What's the best thing for them? Using AI to help them actually decide. And we're doing the same thing for health care and for the agriculture sector as well.

So we're looking at all these things. And, of course, capacity building. We are doing a lot of training of civil servants. We have four tracks, actually: for the top managers, for IT experts, for people controlling data, and for regular civil servants. Because when we're talking about state transformation, the one thing you have to train and to change, of course, is the civil servants. Nowadays they have to have a digital mind, and some of them have been there for many years and didn't have digital capabilities. So we're training all of them in digital capabilities, and specifically on AI as well, in order to think about how to use this new technology in their regular work to improve the civil service.

So I think it’s a more systemic approach there.

Shankar Maruwada

Pathways are like digital rails. What should model developers focus on so that AI can plug into these pathways safely across sectors and countries?

Irina Ghose

Very interesting. And I'll just try to paint the picture by giving some context. Think about it: we're talking a lot about agriculture, which has the last mile. If you were to solve for that farmer day in and day out, there are various kinds of work they have to do. Look at the weather conditions: one source of data. Look at how the crop yield is performing: another source of data. The market prices: another source of data. Whatever has to be done for reaping and sowing. If anybody wants to infuse AI on top of these kinds of data, and you have to build it anew every time, it is so cumbersome.

Now, think of the same thing that, Nandan, you've been talking about: at one point in time we all had different connectors, until the universal adapter came and took that problem away. We all use UPI for digital payments. Do we know anything about the technology behind it, how the small micropayment comes across? We have no idea. So one of the things to be done here is to have a universal language which accesses the tools as well as the data. We came out with this concept at Anthropic in 2024, called the Model Context Protocol. Very simplistically put, think of MCP to AI as what UPI was to payments.

And in effect, what it really does is let you develop things once and make them MCP-ready. Anything else you want to do further, you do not have to keep writing again and again. So all the cases of agriculture, healthcare, anything else put together, can happen seamlessly. Why does it matter for India? There's a lot of data which already exists in health, in education, in the various ways citizen services are delivered, and that is a rich level of data. So if we make this data AI-ready and use the tools that are out there, then diffusion, and that accountability of everybody coming together, will be that much quicker.
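The "develop once, consume everywhere" idea behind MCP can be illustrated with a minimal, purely conceptual sketch. This is not the real MCP SDK (the actual Model Context Protocol is an open JSON-RPC-based standard with official SDKs); the registry, tool names and data values below are illustrative assumptions.

```python
# Conceptual sketch only: a toy "uniform tool protocol" illustrating the
# develop-once idea behind MCP. Not the real MCP SDK; names and values
# here are illustrative.
from typing import Any, Callable, Dict


class ToolRegistry:
    """A single place where data/tool providers register once."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        """A provider implements its tool once and exposes it here."""
        self._tools[name] = fn

    def call(self, name: str, **kwargs: Any) -> Any:
        """Every consumer (agriculture app, health app, ...) uses this one
        uniform entry point instead of a bespoke integration per source."""
        return self._tools[name](**kwargs)


registry = ToolRegistry()

# Each provider registers once (hypothetical tools and data)...
registry.register("weather", lambda district: {"district": district, "rain_mm": 12})
registry.register("market_price", lambda crop: {"crop": crop, "price_per_quintal": 2150})

# ...and any AI application can then consume them through the same interface.
weather = registry.call("weather", district="Pune")
price = registry.call("market_price", crop="wheat")
print(weather, price)
```

The design point mirrors the UPI analogy: consumers depend on one stable interface, not on each provider's internals, so adding a new sector means registering one more tool rather than rebuilding every integration.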

Shankar Maruwada

Excellent. A lot of people who deploy AI have an old notion that it's like normal software: you buy great software, it is perfected, you deploy it, and you can close that and go away. In AI, that is just the start, because as you use it, data comes in. The data gets better, the models get better; with better models you provide better services, usage increases, and more usage means more data. While this cycle is happening, the models improve and the data improves. So for a lot of adopters, once they go beyond procurement, how do you continuously invest to upgrade and evolve? That's again a very important question. And when we talk of 100 diffusion pathways, these are 100 diffusion pathways to safe AI impact at scale, which creates a second stress, and I'll come to you on that, Trevor.

When lives are at stake, where do you draw the line between speed, 100 pathways by 2030, and safety? And coming from health, safety means literally lives, right?

Trevor Mundeli

Yes, Shankar, and there are a lot of lives at stake; I feel the urgency. Every year we don't have the next generation of malaria vaccines, we see hundreds of thousands of young children dying. Every year we don't have a personalized education coach for every child, no matter where they are, we see a tremendous amount of human potential wasted. So there is this urgency to get things done, and that might make one think very carefully on the safety front. And it is that safety issue that has people in the health area saying we need to take a step back, we need to look carefully at the frameworks before we just jump in with something like the application I talked about, the self-application: how would that be gated, how would that be guarded?

I do think that because of the excellence of the DPI stack here in India, and because of the thousands of application efforts I see, you are going to probe those frameworks for safe introduction, probably first in a context which is, as Nandan was mentioning, the frugal innovation that will be relevant across lower-middle-income countries and actually beyond. So we are very much looking at India as the foundry of safe AI application, and we want to see those frameworks whereby we can safely introduce the technology. In terms of the technology itself, just having a black-box system that gives a health recommendation is almost never adequate, almost never satisfactory.

These systems need to be auditable. And I have to say that Anthropic has made quite a lot of progress in their research on how these concepts, these recommendations, are actually represented in the model. People want to be able to audit that; they don't just want something that comes out of nowhere. If a human clinician makes an error, you can talk to that person. You can say: why did you think this was the case when you made a misdiagnosis here? Was it because you didn't elicit the right information from the patient, or you transcribed incorrectly? That is the kind of transparency we actually demand of the AI systems at the end of the day.

So I think that between the work going on here in India and some of that transparency research, we can get there.

Shankar Maruwada

Thank you, Trevor. Minister Dweck, as you’re thinking of implementing AI solutions at scale, what is the hardest political or economic challenge, and what are some tips on how one should deal with it?

Esther Dweck

Okay. I think it's kind of a political economy issue. In Brazil, one thing we are looking at, of course, is the workforce problem, because we may be heading to this utopia where no human needs to work anymore and the machines work for us. So how do we actually create, and how do we divide, the wealth that comes from these machines working? But more concerning in our current period in Brazil is digital sovereignty. Of course, very few countries, maybe only two in the world, are totally digitally sovereign right now. But I think we have to increase our digital sovereignty in terms of being able to have our services and operate them, being able to know where our data is, and knowing how we will be able to continue our services to our populations.

So we are discussing this a lot in Brazil: how to increase our level of digital sovereignty. We know we are probably not going to be totally digitally sovereign within a few years, but we at least want to increase it. And we are actually working with our suppliers in order for them to offer us more sovereignty, or at least some assurance that we will not have any discontinuity. So using state capacity and the state's procurement purchasing power is very important for this.

And we're actually using it when we talk to our suppliers. We discuss this sovereignty on three levels. At the data level, we're bringing the data back to Brazil. As I mentioned before, we have two federal state-owned companies that host resident clouds, so we know where the data is; but only knowing where the data is, is not enough. So we are increasing our operational access to the data. And the third level is the technology you're using, something we've been discussing a lot here. It's not directly related to AI, but it's related to digital services. One thing we're doing together here in India, using a technology that was developed here, verifiable credentials, which was very important for us: we are using it right now in two pilot projects, and we want to scale it up.

One is related to rural credit, and the second is related to something I think the whole world is discussing: how to protect children online. In Brazil we passed a law last year, a very important law. It passed very quickly, after one of the digital influencers showed what was happening to children on the internet, especially on social media, and the bill says that by 17 March you have to know the age of the person who is accessing the internet. So how do you do this in a way that protects privacy, so that we don't actually know what people are using? A lot of things are being discussed, and we're trying to use this verifiable approach in order to do age verification in a very simple way, easy for people, and such that people are not afraid the government is actually watching the internet.
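The core idea of such privacy-preserving age verification, disclosing a single predicate ("over 18") without revealing identity or birthdate, can be sketched minimally. This is a conceptual illustration only: real verifiable-credential systems use digital signatures or zero-knowledge proofs, whereas the shared HMAC key below is a deliberate simplification, and all names are assumptions.

```python
# Conceptual sketch only: a trusted issuer attests to the predicate
# "over 18" without putting any identity data in the credential.
# Real systems use signatures/ZK proofs; the shared HMAC key is a
# simplification for illustration.
import hashlib
import hmac
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the trusted issuer


def issue_age_credential(over_18: bool) -> dict:
    """Issuer attests only to the predicate, bound to a one-time nonce.
    Note: no name, no birthdate, no persistent identifier is included."""
    nonce = secrets.token_hex(16)
    payload = f"over18={over_18};nonce={nonce}".encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"over_18": over_18, "nonce": nonce, "tag": tag}


def verify_age_credential(cred: dict) -> bool:
    """Verifier checks the attestation is genuine and the predicate holds,
    without learning who the holder is."""
    payload = f"over18={cred['over_18']};nonce={cred['nonce']}".encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["tag"]) and cred["over_18"]


cred = issue_age_credential(over_18=True)
print(verify_age_credential(cred))  # the site learns only "over 18: yes"
```

The design choice matters for the policy goal she describes: the verifier learns that an anonymous holder satisfies the age predicate, while tampering with the credential (for example, flipping the flag) invalidates the tag.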

So I think this is the way to build things that are actually useful and important: protecting our citizens, but also providing them with very good services.

Shankar Maruwada

Thank you. Today's topic was building population-scale digital public infrastructure for AI. By 2030, when we will have made a lot of progress on that, we will stop calling DPI "digital public infrastructure" and start calling it "digital public intelligence". With that, a big thank you to all my panelists and to the audience. Thank you.

Irina Ghose

Thank you. Shankar, if I can just request you to present a token of appreciation to the panel. Thank you. Now the next session is about to start, on a very unique topic, AI for Democracy, so we request all the audience here to remain seated. A very wonderful topic, AI for Democracy, and we are very blessed that today we have with us the Honorable Chief Guest, Mr. Om Birlaji, Speaker of the Parliament of India; Mr. Martin Chungong, Secretary General, IPU; Mr. Laszlo Z, Deputy Speaker, Parliament of Hungary; Dr. Chinmay Pandya from All World Gayatri Parivar; Ms. Jimena.

Related Resources: Knowledge base sources related to the discussion topics (12)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“The farmer‑support app took nine months to launch in Maharashtra, was replicated in Ethiopia in three months, and then adapted for dairy farmers by Amul in just three weeks.”

The knowledge base states the app took nine months to build [S3] and later notes the rollout was compressed to three weeks after an intermediate three-month period [S24], confirming the reported timeline.

Additional Context (medium)

“Diffusion is likened to the spread of know‑how, trust and institutional capability rather than mere awareness.”

S30 describes diffusion as “walking the path” with domain knowledge and replaceability, emphasizing practical capability over simple deployment, which adds nuance to the metaphor presented.

Additional Context (medium)

“Like the Unified Payments Interface (UPI), technology must become “boring” and invisible to users for true diffusion to occur.”

S79 uses the UPI analogy to illustrate how a technology succeeds by becoming a ubiquitous, low‑friction infrastructure, providing supporting context for the claim about “boring” diffusion.

External Sources (80)
S1
A Digital Future for All (morning sessions) — – Esther Dweck (Minister, Brazil) discussed DPI for efficient government services, financial inclusion, and environmenta…
S2
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — – Esther Dweck (Minister of Management and Innovation in Public Services of Brazil)
S3
Building Population-Scale Digital Public Infrastructure for AI — – Esther Dweck- Irina Ghose – Irina Ghose- Esther Dweck – Nandan Nilekani- Trevor Mundeli- Esther Dweck
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S8
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — So we are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwad…
S9
AI Meets Agriculture Building Food Security and Climate Resilien — – Dr. Soumya Swaminathan- Shankar Maruwada Dr. Swaminathan advocates for a cautious, medical research-style evaluation …
S10
Transforming Health Systems with AI From Lab to Last Mile — -Trevor Mundel: Dr. Dr. Trevor Mundel (medical degree and Ph.D. in mathematics), Rhodes Scholar, extensive experience in…
S11
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — And welcome. And… And her background is also in this both biomedical field, science innovation field, but also has ext…
S12
Building Population-Scale Digital Public Infrastructure for AI — – Nandan Nilekani- Trevor Mundeli – Trevor Mundeli- Esther Dweck
S13
Keynote-Rishad Premji — -Mr. Nandan Nilekani: Role/Title: Not specified; Area of expertise: Artificial intelligence (described as pioneer and th…
S14
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Nandan Nilekani** – Co-founder and chairman of Infosys Technologies Limited (participated online) Karianne Tung, Ve…
S15
https://dig.watch/event/india-ai-impact-summit-2026/fireside-conversation-01 — Thank you so much, Mr. Sikka, for your profound and very interesting remarks. And of course, your work at VNI also exemp…
S16
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — Thank you so much, Vedashree. That was very concise and even compelling. Especially coming from a regulatory standpoint….
S17
Keynote-Dario Amodei — – Irina Ghos: Managing Director for Anthropic India, has three decades of experience building businesses in India (menti…
S18
Building Population-Scale Digital Public Infrastructure for AI — – Irina Ghose- Esther Dweck – Nandan Nilekani- Irina Ghose
S19
Fireside Conversation: 01 — This fireside conversation featured Nandan Nilekani, co-founder of Infosys and architect of India’s Aadhaar system, and …
S20
Building Scalable AI Through Global South Partnerships — Yeah, thank you so much. And you talked about DPI, you talked about the private sector, public coming together. It’s the…
S21
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: Thank you for convening this and bringing this very, very important subject at FORC, like how do we bala…
S22
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — The speaker addressed practical challenges in implementing AI solutions for farmers in low-income countries. She stresse…
S23
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S24
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — And I’ll give you a small example as to how diffusion is happening. First of all, Shankar, really honored to have worked…
S25
https://dig.watch/event/india-ai-impact-summit-2026/safe-and-responsible-ai-at-scale-practical-pathways — Now with education, when we are working recently, we realized that LLMs are becoming increasingly good, at least with th…
S26
Safe and Responsible AI at Scale Practical Pathways — “The moment they hit any domain‑specific vocabulary, that’s when they start failing.”[64]. “came up with a solution of u…
S27
Multilingual Internet: a Key Catalyst for Access & Inclusion | IGF 2023 Town Hall #75 — Audience:Hi, my name is Keisuke Kamimura, professor of linguistics and Japanese at Daito Bunka University in Tokyo. And …
S28
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Owen Lauder- Michael Brown- Wifredo Fernandez- Austin Marin- Sihao Huang Examples include enterprise knowledge bases,…
S29
All hands on deck to connect the next billions | IGF 2023 WS #198 — To address the digital divide, a whole-of-government and whole-of-society approach is advocated. Initiatives are being i…
S30
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S31
Operationalizing data free flow with trust | IGF 2023 WS #197 — Conversations about data governance should occur in various settings, including normative and legal frameworks. These di…
S32
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — “The general panel is about policy on the one side, adoption on the other”[52]. “…we have to work downstream and upstr…
S33
Networking Session #37 Mapping the DPI stakeholders? — Ekanayake highlighted that DPI implementation requires government departments to work together in new ways around shared…
S34
AI for agriculture Scaling Intelegence for food and climate resiliance — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S35
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Modernising government processes is also on Barbados’s agenda, to align with the pace of technological development. The …
S36
Bridging the AI innovation gap — This comment provides a profound reframing of technical standards from bureaucratic requirements to tools of global equi…
S37
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S38
Anthropic’s MCP aims to transform AI integration — Anthropic hasunveiledthe Model Context Protocol (MCP), an open-source standard designed to improve AI assistant performa…
S39
Building Population-Scale Digital Public Infrastructure for AI — Diffusion requires technology to become contextual, workflow-integrated, and iterative rather than remaining a scientifi…
S40
A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23 — The analysis also highlights the shrinking opportunities for participation in UN processes related to internet governanc…
S41
Research Publication No. 2014-6 March 17, 2014 — – Many of the positions the US government has taken across roles – and both domestically and internationally – are at le…
S42
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe’s address emphasised the critical function of public-private partnerships in fostering standardisation that und…
S43
WS #290 Sovereignty and Interoperable Digital Identity in Dldcs — Technical experts from CityHub and the OpenID Foundation discussed the complexity of transitioning from physical to digi…
S44
Tech attache briefing: Technical standards: Policy implications and international landscape — -Examine the policyimplications of standardsand discuss the interaction between standards and regulations. -Mapping the…
S45
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — It doesn’t mean that countries can’t have their own perspectives or sovereign outlooks, but there is sort of a… a move…
S46
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — And this is what prevents innovation inside the government, especially because innovation comes with errors. We know tha…
S47
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The discussion highlighted the importance of policy interoperability rather than uniform global governance, recognizing …
S48
Digital sovereignty in Brazil: for what and for whom? | IGF 2023 Launch / Award Event #187 — Flavio Wagner:Thank you, Raquel. So, hi everybody. Nice to have you with us here this morning in Japan. So Brazil is a v…
S49
Review of AI and digital developments in 2024 — Approaches to digital sovereignty will vary, depending on a country’s political and legal systems. Legal approaches incl…
S50
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — High level of consensus with constructive disagreements mainly on implementation details rather than fundamental princip…
S51
WS #257 Emerging Norms for Digital Public Infrastructure — These key comments shaped the discussion by highlighting the complex, multifaceted nature of DPI. They moved the convers…
S52
e-Accessibility Policy Handbook for Persons with Disabilities — – Evaluate: how well are needs being met? Evaluative activities provide evidence on how well the concepts …
S53
https://dig.watch/event/india-ai-impact-summit-2026/collaborative-ai-network-strengthening-skills-research-and-innovation — They want to be co -architects of the future, this fundamental shift that humanity is going through. And this is where w…
S54
Democratizing AI Building Trustworthy Systems for Everyone — The historical perspective on technology diffusion offers both hope and urgency: success requires deliberate action acro…
S55
Laying the foundations for AI governance — Artemis Seaford: So the greatest obstacle, in my opinion, to translating AI governance principles into practice may actu…
S56
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: One part is that, of course, the way the technology is evolving, there is IP-driven solutions and there …
S57
WS #119 AI for Multilingual Inclusion — To achieve multilingual inclusion in AI, there is a need for innovation and local solutions. Communities should create t…
S58
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Mark Gachara emphasized that climate impacts are most severe in the Global South and among indigenous communities, so fu…
S59
Can we test for trust? The verification challenge in AI — ## Rapid-Fire Policy Recommendations 6. **Asymmetry** between rapid capability advancement and slower safety progress
S60
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S61
Safe and Responsible AI at Scale Practical Pathways — Moderate disagreement level that reflects healthy debate about implementation strategies rather than fundamental opposit…
S62
AI governance struggles to match rapid adoption — Accelerating AI adoptionis exposingclear weaknesses in corporate AI governance. Research shows that while most organisat…
S63
Fireside Conversation: 01 — Gates Foundation is there. UNDP is there. The Kenyans. It’s a global coalition. Because what we learned from the agricul…
S64
Building Population-Scale Digital Public Infrastructure for AI — Launch 100 diffusion pathways by 2030 initiative with global coalition including Anthropic, Google, Gates Foundation, an…
S65
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Nandan Nilekani
2 arguments · 171 words per minute · 531 words · 185 seconds
Argument 1
Rapid reduction of implementation time shows scalability
EXPLANATION
Nandan illustrates how the time required to deploy AI‑driven solutions fell dramatically across successive projects, demonstrating that once a pathway is established, later roll‑outs can be executed much faster. This showcases the potential for rapid scaling of AI applications for public good.
EVIDENCE
He described the first implementation in Maharashtra taking nine months, the subsequent rollout in Ethiopia completing in three months, and the Amul dairy-farmer system being deployed in three weeks, highlighting the acceleration from nine months to three weeks across projects [4-12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Pathways reduced rollout time from nine months to three weeks through shared learning and institutional capability, as described in the AI diffusion discussion [S3] and reinforced in the fireside conversation with Nandan Nilekani [S19].
MAJOR DISCUSSION POINT
Implementation speed as evidence of scalable diffusion pathways
DISAGREED WITH
Trevor Mundeli
Argument 2
Goal of 100 diffusion pathways by 2030 as a strategic AI agenda
EXPLANATION
Nandan announces an ambitious target of creating one hundred AI diffusion pathways worldwide by 2030, positioning it as a collective strategic goal to spread positive AI use cases across sectors and countries. The aim is to coordinate global actors to achieve this scale.
EVIDENCE
He mentions the ambition of “100 diffusion pathways by 2030,” the formation of a global coalition that includes Anthropic, Google, the Gates Foundation, and UNDP, and calls the initiative the AI equivalent of the DPI goal of 50 in five years [15-29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The announcement of a target of 100 diffusion pathways by 2030 is recorded in the session summary on global AI partnerships [S20] and referenced in the high-level session featuring Nilekani [S14].
MAJOR DISCUSSION POINT
Setting a global target for AI diffusion
AGREED WITH
Shankar Maruwada, Trevor Mundeli, Irina Ghose
Shankar Maruwada
1 argument · 133 words per minute · 1438 words · 645 seconds
Argument 1
Diffusion defined as spread of know‑how, trust and institutional capability
EXPLANATION
Shankar explains that diffusion is not merely awareness but the transfer of practical know‑how, trust, and institutional capacity that enables organizations to adopt AI safely and sustainably. This definition underpins the concept of diffusion pathways.
EVIDENCE
He states that diffusion is “the spread of know-how, trust and institutional capability that allows organizations to adopt AI safely and sustainably” and links it to the Maharashtra example as a pioneering pathway [43-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Diffusion is defined as the spread of know-how, trust and institutional capability in the discussion of diffusion pathways [S3] and highlighted in the moderator’s remarks on historical context [S24].
MAJOR DISCUSSION POINT
Conceptual definition of diffusion
AGREED WITH
Esther Dweck, Trevor Mundeli, Nandan Nilekani
Irina Ghose
4 arguments · 163 words per minute · 1288 words · 473 seconds
Argument 1
AI must be contextual to local language, embedded in daily workflow, and iterative
EXPLANATION
Irina argues that for AI to diffuse at scale it must be delivered in the user’s native language, fit seamlessly into existing daily tasks, and be continuously refined through iteration. These conditions make AI feel intuitive rather than a specialized tool.
EVIDENCE
She lists three requirements: contextual to the local language, integrated into everyday workflow without new processes, and an iterative approach to implementation [60-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for AI to be in the user’s native language, fit existing workflows and be iteratively refined is emphasized in the agrotech lightning talk on mobile-first tools [S22], the diffusion discussion [S24], and the multilingual internet briefing [S27].
MAJOR DISCUSSION POINT
Prerequisites for population‑scale AI deployment
AGREED WITH
Shankar Maruwada, Nandan Nilekani
DISAGREED WITH
Trevor Mundeli
Argument 2
“AI‑first” mindset and ecosystem enthusiasm are essential for diffusion
EXPLANATION
Irina stresses that individuals and organisations need to adopt an “AI‑first” attitude, actively champion AI within their networks, and foster an enthusiastic ecosystem to drive widespread adoption. This cultural shift is as important as the technology itself.
EVIDENCE
She says “first, I have to think that everything I do, I have to be AI first” and describes energising the Indian ecosystem and encouraging everyone to be enthusiastic about AI [72-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Irina’s call for an “AI-first” attitude and the importance of an enthusiastic ecosystem are quoted directly in the AI diffusion session transcript [S3] and echoed in the panel’s discussion on energising the ecosystem [S24].
MAJOR DISCUSSION POINT
Cultural and ecosystem drivers of AI diffusion
Argument 3
Failure is gradual loss of relevance; requires domain‑specific data and language support
EXPLANATION
Irina notes that AI deployments rarely fail abruptly; instead they fade as users stop finding them relevant. Maintaining relevance requires domain‑specific datasets and robust language support tailored to local contexts.
EVIDENCE
She explains that “failure never happens with a big bang, it just slowly dies because people just stop reducing the level of interaction” and highlights the need for domain-specific data and language nuances, especially for Indian languages [169-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gradual decay of AI relevance and the need for domain-specific datasets and language nuances are discussed in the observation about vocabulary failure [S25] and the safe-AI guidelines recommending glossaries for domain terms [S26]; multilingual challenges are also noted in [S27].
MAJOR DISCUSSION POINT
Gradual decay as a common failure mode
Argument 4
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
EXPLANATION
Irina introduces the Model Context Protocol as a standard that lets developers build AI components once and reuse them across sectors, similar to how UPI standardized digital payments. MCP aims to simplify integration and improve safety by providing a common interface.
EVIDENCE
She describes MCP as “what UPI was to payments,” a universal language that makes tools and data AI-ready, allowing seamless deployment across agriculture, health, and other domains [250-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Model Context Protocol is introduced as a universal adapter for AI components in the diffusion session [S3] and described as analogous to UPI in the U.S. AI standards overview [S28].
MAJOR DISCUSSION POINT
Technical standard to streamline safe AI deployment
AGREED WITH
Trevor Mundeli, Esther Dweck
DISAGREED WITH
Esther Dweck
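Irina's UPI analogy can be grounded in what MCP actually standardises: a JSON-RPC 2.0 message format through which any host application can discover a server's tools and invoke them the same way, whatever the domain. The sketch below builds two such envelopes; the `crop_advisory` tool name and its arguments are hypothetical illustrations, not something discussed in the session.

```python
import json

# Minimal sketch of an MCP-style JSON-RPC 2.0 exchange: a host asks a
# server which tools it exposes, then invokes one. The crop-advisory
# tool and its payload are illustrative, not from the panel.

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope as used by MCP."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# 1. Discover available tools (the same call works against any MCP server).
discover = make_request(1, "tools/list")

# 2. Call a hypothetical domain tool with its arguments.
call = make_request(2, "tools/call", {
    "name": "crop_advisory",
    "arguments": {"district": "Pune", "language": "mr"},
})

parsed = json.loads(call)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # crop_advisory
```

Because discovery (`tools/list`) and invocation (`tools/call`) look identical for every conforming server, a component built once for, say, agriculture can be plugged into health or education deployments without bespoke integration work, which is the "build once, deploy across sectors" property the panel attributes to MCP.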
Esther Dweck
5 arguments · 180 words per minute · 1938 words · 643 seconds
Argument 1
Procurement should be outcome‑oriented, allow controlled failure, and involve suppliers
EXPLANATION
Esther argues that public‑sector procurement must shift from a focus on lowest price and risk to an outcome‑oriented approach that tolerates controlled failure and engages directly with innovators. This change is necessary to foster AI experimentation within government.
EVIDENCE
She explains that current procurement seeks lowest price and risk, civil servants fear audits, and therefore innovation stalls; she proposes a policy-oriented, outcome-focused mindset and collaboration with suppliers to enable controlled failure [128-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to shift public-sector procurement to an outcome-oriented, failure-tolerant approach that engages innovators is outlined in the AI diffusion discussion [S3].
MAJOR DISCUSSION POINT
Reforming procurement for AI innovation
AGREED WITH
Shankar Maruwada, Trevor Mundeli, Nandan Nilekani
DISAGREED WITH
Irina Ghose
Argument 2
Build digital ID and a unified service platform (gov.br) as backbone for AI services
EXPLANATION
Esther highlights Brazil’s digital ID system and the gov.br unified service platform as critical infrastructure that enables personalized, AI‑driven public services. These digital foundations support identification, data sharing, and service personalization.
EVIDENCE
She describes the digital ID and the gov.br platform as enabling personalized services, allowing the government to know citizens and tailor AI applications accordingly [147-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brazil’s digital ID system and the gov.br unified service platform are presented as foundations for AI-driven public services in the Brazil digital future session [S1] and in the whole-of-government digital ID overview [S29]; DPI parallels are drawn in [S30].
MAJOR DISCUSSION POINT
Digital infrastructure as AI enabler
DISAGREED WITH
Irina Ghose
Argument 3
Establish data governance, chief data officers, and sovereign data policies
EXPLANATION
Esther outlines a plan to create a national data governance framework, appoint chief data officers in each ministry, and launch a decree on data governance to break data silos and ensure responsible, privacy‑preserving use of data for AI.
EVIDENCE
She mentions the need for a Brazilian database, the upcoming decree on data governance, and the appointment of chief data officers to oversee data use and governance [150-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A national data governance framework, appointment of chief data officers and sovereign data policies are detailed in Brazil’s ministerial remarks [S1] and the data-governance workshop summary [S31]; similar recommendations appear in the AI diffusion discussion [S3].
MAJOR DISCUSSION POINT
National data governance for AI
Argument 4
Digital sovereignty requires data localization, resident clouds, and supplier negotiations
EXPLANATION
Esther stresses that Brazil must increase digital sovereignty by keeping data within national borders, developing resident cloud capabilities, and negotiating with suppliers to ensure continuity and security of services.
EVIDENCE
She discusses Brazil’s push for digital sovereignty, the creation of two state-owned companies with resident clouds, and efforts to bring data back to Brazil while negotiating with suppliers for greater control [290-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brazil’s push for digital sovereignty, including resident cloud initiatives and supplier negotiations, is described in the Brazil session notes [S1] and the digital sovereignty discussion [S29].
MAJOR DISCUSSION POINT
Achieving digital sovereignty
Argument 5
Address wealth distribution and workforce impacts of AI automation
EXPLANATION
Esther points out that AI‑driven automation raises political‑economic questions about how the resulting wealth will be shared and how the workforce will be re‑skilled, emphasizing the need for policies that manage these societal impacts.
EVIDENCE
She notes concerns about a future where machines do all work, the challenge of dividing wealth generated by AI, and the broader workforce problem associated with automation [287-289].
MAJOR DISCUSSION POINT
Socio‑economic implications of AI
Trevor Mundeli
5 arguments · 167 words per minute · 1117 words · 399 seconds
Argument 1
Fragmented pilots hinder scaling; scaling hubs aggregate efforts and funding
EXPLANATION
Trevor observes that the proliferation of isolated AI pilots creates fragmentation, which impedes national‑scale impact. He proposes dedicated scaling hubs that pool resources, coordinate with governments, and provide a single point of aggregation to accelerate diffusion.
EVIDENCE
He describes the existence of scaling hubs in India and Africa, their role in aggregating funding and coordinating with ministries, and how fragmentation of many small pilots is a major barrier to scaling [84-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The proliferation of isolated AI pilots and the role of scaling hubs in India and Africa to aggregate resources are discussed in the AI diffusion session [S3] (lines 84-99).
MAJOR DISCUSSION POINT
Need for coordinated scaling hubs
AGREED WITH
Nandan Nilekani, Shankar Maruwada, Irina Ghose
DISAGREED WITH
Irina Ghose
Argument 2
Hubs act as centers of excellence to channel diffusion and achieve rapid national scale
EXPLANATION
Trevor further explains that these scaling hubs function as centers of excellence, providing structured pathways for governments to adopt AI safely and quickly, thereby converting pilot projects into large‑scale public services.
EVIDENCE
He notes that channeling diffusion through hubs of excellence is viewed by governments as the fastest route to scale, and that this approach aligns with the DPI stack in India [96-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Scaling hubs are described as centers of excellence that provide structured pathways for rapid national scale in the same AI diffusion discussion [S3] (lines 96-99).
MAJOR DISCUSSION POINT
Hubs as diffusion accelerators
DISAGREED WITH
Irina Ghose
Argument 3
AI systems must be auditable and transparent, not opaque black boxes
EXPLANATION
Trevor stresses that for high‑stakes applications, AI outputs need to be auditable and explainable; stakeholders must be able to trace decisions rather than accept opaque recommendations, ensuring accountability and trust.
EVIDENCE
He cites Anthropic’s research on making model recommendations auditable and argues that clinicians need to understand why a model made a particular suggestion, emphasizing the need for transparency [274-279].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for auditable, explainable AI outputs, citing Anthropic’s research, is raised in the AI safety segment of the diffusion session [S3] and reinforced by U.S. AI standards on transparency [S28].
MAJOR DISCUSSION POINT
Auditability as a safety requirement
AGREED WITH
Irina Ghose, Esther Dweck
Argument 4
Urgency to save lives must be balanced with robust safety frameworks; India’s DPI serves as a testbed
EXPLANATION
Trevor highlights the tension between the urgent need for AI‑driven solutions in health and education and the necessity of strong safety safeguards. He sees India’s DPI ecosystem as an ideal environment to pilot and refine safety frameworks before broader deployment.
EVIDENCE
He mentions the pressing need for malaria vaccines and personalized education, the call for safety frameworks, and the view that India’s DPI stack can act as a safe introduction point for frugal innovation [267-273].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing urgent AI-driven health solutions with safety frameworks, using India’s DPI ecosystem as a testbed, is mentioned in the diffusion discussion [S3] and echoed in the broader conversation on AI adoption vs. energy constraints [S21].
MAJOR DISCUSSION POINT
Balancing speed with safety in high‑stakes domains
DISAGREED WITH
Nandan Nilekani
Argument 5
Emphasize frugal innovation for low‑ and middle‑income countries while maintaining safety
EXPLANATION
Trevor argues that AI solutions for LMICs should be built on frugal, cost‑effective innovations that still meet rigorous safety standards, ensuring that scalability does not compromise protection of users.
EVIDENCE
He refers to “frugal innovation” that is relevant across lower-middle-income countries and stresses the need to keep safety intact while scaling such solutions [271-273].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on frugal, cost-effective AI innovation for LMICs that retains safety standards is articulated in the AI diffusion panel [S3] (lines 271-273).
MAJOR DISCUSSION POINT
Frugal innovation as a safety‑aware scaling strategy
Speaker 1
1 argument · 74 words per minute · 83 words · 67 seconds
Argument 1
A group photograph should be taken before the panel discussion to foster unity and visibility among participants
EXPLANATION
Speaker 1 proposes that the panelists gather for a quick group photograph before starting the discussion, suggesting that this visual record and shared moment helps create a sense of cohesion and signals the collaborative nature of the event.
EVIDENCE
He thanks Nandan, states that they will start by taking a quick group photograph together and then begin the discussion, and proceeds to invite the panelists onto the stage for the photo [33-36].
MAJOR DISCUSSION POINT
Procedural step to promote cohesion and visibility
Agreements
Agreement Points
Creation of structured diffusion pathways/infrastructure to accelerate AI scaling
Speakers: Nandan Nilekani, Shankar Maruwada, Trevor Mundeli, Irina Ghose
Goal of 100 diffusion pathways by 2030 as a strategic AI agenda
Diffusion defined as spread of know‑how, trust and institutional capability
Fragmented pilots hinder scaling; scaling hubs aggregate efforts and funding
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
All speakers stress the need for repeatable, shared pathways-whether described as a global target of 100 pathways, the know-how/trust infrastructure, scaling hubs, or a universal technical protocol-to dramatically reduce implementation time and enable rapid, safe diffusion of AI solutions across sectors and countries [15-29][43-46][84-99][250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for structured diffusion pathways is highlighted in discussions on building population-scale digital public infrastructure, where scaling hubs are proposed as aggregation points to coordinate AI rollout [S39] and the concept of co-architecting 100 AI diffusion pathways emphasizes coordinated, language-aware scaling [S53]. Open-source diffusion models further support structured pathways for broader access [S56].
Institutional and governance reforms are essential for AI diffusion
Speakers: Shankar Maruwada, Esther Dweck, Trevor Mundeli, Nandan Nilekani
Diffusion defined as spread of know‑how, trust and institutional capability
Procurement should be outcome‑oriented, allow controlled failure, and involve suppliers
Scaling hubs act as centers of excellence to channel diffusion and achieve rapid national scale
Global coalition of diverse actors to work together on diffusion pathways
Speakers agree that changes in public-sector procurement, data governance, and the creation of dedicated hubs or coalitions are required to provide the institutional capacity and policy environment needed for scaling AI safely and effectively [43-46][128-140][84-99][22-27].
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder, bottom-up governance models are advocated to counter centralisation in internet governance and to enable diverse participation in AI diffusion [S40]; international efforts to align AI governance across the tech stack point to the necessity of institutional reforms and common standards [S45]; policy interoperability rather than uniform global governance underscores the need for adaptable institutional frameworks [S47]; and broader calls for democratising AI stress reforms in decision-making structures [S54].
AI solutions must be contextual, language‑specific and fit into existing workflows
Speakers: Irina Ghose, Shankar Maruwada, Nandan Nilekani
AI must be contextual to local language, embedded in daily workflow, and iterative
Diffusion is the spread of know‑how, trust and institutional capability that allows organizations to adopt AI safely and sustainably
Implementation timelines shortened through local adaptations (Maharashtra, Ethiopia, Amul) demonstrate the importance of contextual rollout
There is consensus that AI must be delivered in users’ native languages, integrated seamlessly into daily tasks, and iteratively refined, as this contextualization underpins successful diffusion pathways and rapid rollout [60-62][43-46][4-12].
POLICY CONTEXT (KNOWLEDGE BASE)
Diffusion requires AI to be contextual and workflow-integrated rather than a pure scientific tool, as argued in the DPI building report [S39]; multilingual inclusion initiatives stress local language solutions and community-driven systems [S57]; and the co-architecting pathways discussion highlights the importance of language-specific data and voice adoption [S53].
Safety, auditability and transparency are non‑negotiable for high‑stakes AI deployments
Speakers: Trevor Mundeli, Irina Ghose, Esther Dweck
AI systems must be auditable and transparent, not opaque black boxes
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
Digital sovereignty and privacy safeguards are required for trustworthy AI services
All agree that AI must be built with mechanisms for auditability, standardised interfaces, and strong data/privacy safeguards to ensure trustworthy, safe deployment, especially in health and public services [274-279][250-254][290-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs note the asymmetry between rapid AI capability growth and slower safety progress, framing safety as a non-negotiable requirement [S59]; guidance for international AI safety coordination stresses the structural mismatch that must be addressed through robust safeguards [S60]; practical pathways for safe AI at scale call for balanced innovation and safeguards [S61]; and analyses of governance gaps show that rapid roll-outs have outpaced safety mechanisms [S62].
Similar Viewpoints
Both emphasize that a supportive ecosystem—through an AI‑first cultural mindset and robust digital public service platforms—drives adoption and effective use of AI at scale [72-74][147-149].
Speakers: Irina Ghose, Esther Dweck
“AI‑first” mindset and ecosystem enthusiasm are essential for diffusion
Build digital ID and a unified service platform (gov.br) as backbone for AI services
Both view the creation of dedicated structures (hubs or pathways) that aggregate expertise and resources as critical to moving AI from pilots to institutionalised services [84-99][43-46].
Speakers: Trevor Mundeli, Shankar Maruwada
Scaling hubs act as centers of excellence to channel diffusion and achieve rapid national scale
Diffusion defined as spread of know‑how, trust and institutional capability
Unexpected Consensus
Alignment of technical standardisation with data sovereignty goals
Speakers: Irina Ghose, Esther Dweck
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
Digital sovereignty requires data localisation, resident clouds and supplier negotiations
While Irina focuses on a technical universal protocol (MCP) to streamline AI integration, Esther stresses policy-driven data localisation and sovereignty; both converge on the need for standardized, controllable data interfaces that respect national control, an unexpected overlap between technical and policy domains [250-254][290-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Public-private partnerships are identified as key to harmonising AI standards that support sovereign data policies [S42]; discussions on interoperable digital identity underline the need for standards that respect sovereignty while enabling cross-border interoperability [S43]; technical standards policy analyses map how organisations like ISO and ITU can bridge standards and regulation for sovereign objectives [S44]; the push for an ISO AI governance standard illustrates movement toward universal standards compatible with national sovereignty concerns [S45]; and regional debates on digital sovereignty in Brazil and Europe illustrate the tension and attempts to align standards with sovereign goals [S48][S50].
Overall Assessment

The panel shows strong convergence on four core themes: (1) establishing repeatable diffusion pathways, (2) reforming institutional and governance frameworks, (3) ensuring AI is locally contextualised and workflow‑integrated, and (4) embedding safety, auditability and privacy safeguards. These shared positions indicate a high level of consensus that coordinated technical standards, policy reforms, and ecosystem building are all required to achieve the ambitious 100‑pathway target by 2030.

Consensus across speakers is high, suggesting that future initiatives can build on this common ground to design integrated strategies that combine technical protocols (e.g., MCP), scaling hubs, outcome‑oriented procurement, and robust data governance, thereby increasing the likelihood of successful, safe, and inclusive AI diffusion.

Differences
Different Viewpoints
Rapid scaling versus safety safeguards
Speakers: Nandan Nilekani, Trevor Mundeli
Rapid reduction of implementation time shows scalability
Urgency to save lives must be balanced with robust safety frameworks; India’s DPI serves as a testbed
Nandan emphasizes that AI-driven solutions can be rolled out extremely fast – from nine months to three weeks – and pushes the 100 diffusion pathways target as a strategic agenda [12-13][15-16]. Trevor counters that while speed is desirable, high-stakes applications (e.g., health) require strong safety and auditability frameworks, and he sees India’s DPI stack as a cautious testbed rather than a race for speed [267-273][274-279].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources highlight the tension between fast AI capability development and the slower evolution of safety frameworks, describing it as a core governance challenge [S59][S60]; recommendations call for balanced pathways that integrate safety without stifling innovation [S61]; and evidence shows that corporate AI governance has struggled to keep pace with rapid adoption [S62].
Centralised scaling hubs versus bottom‑up contextual diffusion
Speakers: Trevor Mundeli, Irina Ghose
Fragmented pilots hinder scaling; scaling hubs aggregate efforts and funding
Hubs act as centers of excellence to channel diffusion and achieve rapid national scale
AI must be contextual to local language, embedded in daily workflow, and iterative
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
Trevor argues that the proliferation of isolated pilots creates fragmentation and proposes dedicated scaling hubs to aggregate resources and provide a coordinated pathway to national scale [84-99]. Irina stresses that diffusion works best when AI is tailored to local languages, fits existing workflows, and is iteratively refined; she also proposes a universal Model Context Protocol to enable reuse across sectors, suggesting a more decentralized, technology-standard approach [60-62][250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
The DPI report proposes scaling hubs as coordination points but also warns that over-centralisation can limit diversity, advocating bottom-up multistakeholder processes [S39][S40]; emerging norms for digital public infrastructure stress the need for governance models that balance central coordination with local contextualisation [S51].
Procurement reform versus cultural AI‑first mindset
Speakers: Esther Dweck, Irina Ghose
Procurement should be outcome‑oriented, allow controlled failure, and involve suppliers
Build digital ID and a unified service platform (gov.br) as backbone for AI services
AI‑first mindset and ecosystem enthusiasm are essential for diffusion
Esther calls for a shift in public-sector procurement from lowest-price, low-risk focus to an outcome-oriented, failure-tolerant approach that engages suppliers and reforms processes [128-140]. Irina, by contrast, highlights the need for an “AI-first” cultural attitude and ecosystem enthusiasm, focusing on user-level adoption rather than institutional procurement changes [72-74].
Digital sovereignty versus universal technical standards
Speakers: Esther Dweck, Irina Ghose
Digital sovereignty requires data localisation, resident clouds, and supplier negotiations
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
Esther stresses that Brazil must increase digital sovereignty by keeping data within national borders, developing resident clouds, and negotiating with suppliers to ensure continuity and security [290-304]. Irina promotes a universal Model Context Protocol that standardises AI integration across domains, potentially reducing the need for strict localisation and favouring interoperability [250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
Ongoing debates on digital sovereignty in Brazil and Europe illustrate the clash between national data control and the push for interoperable technical standards [S48][S50]; policy analyses of standards organisations highlight how standards can both support and challenge sovereign objectives [S44][S45]; and broader discussions on policy interoperability stress the need to reconcile sovereign perspectives with universal standards [S47].
Unexpected Differences
Fixed diffusion pathways versus flexible universal adapter
Speakers: Shankar Maruwada, Irina Ghose
Diffusion pathways are fixed
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
Shankar describes diffusion pathways as “fixed” routes that compress learning curves and risk [104-106]. Irina’s proposal of a universal Model Context Protocol suggests a flexible, reusable technical layer that can adapt across sectors, implying that pathways need not be rigidly predefined. This conceptual clash between a fixed-route view and a modular standard was not anticipated given the overall consensus on diffusion.
Overall Assessment

The panel broadly agrees on the importance of establishing AI diffusion pathways to achieve public‑good outcomes. However, substantive disagreements emerge around the preferred mechanism: rapid, technology‑driven scaling versus institutionally coordinated hubs; speed of rollout versus rigorous safety and auditability; national procurement and sovereignty reforms versus cultural, AI‑first adoption; and whether diffusion pathways should be fixed routes or supported by flexible universal standards.

The level of disagreement is moderate to high: while the end goal is shared, the divergent strategies indicate potential friction in policy design and implementation. These tensions could affect the pace and coherence of AI diffusion, requiring careful negotiation to align rapid deployment ambitions with safety, sovereignty, and inclusive governance.

Partial Agreements
All speakers concur that creating diffusion pathways for AI is essential for public‑good impact, but they diverge on the mechanisms: Nandan proposes a global target; Shankar defines diffusion conceptually; Irina stresses language‑level contextualisation; Trevor advocates scaling hubs to overcome fragmentation; Esther calls for procurement and institutional reforms. The shared goal is evident, yet the routes differ [15-16][43-46][60-62][84-99][128-140].
Speakers: Nandan Nilekani, Shankar Maruwada, Irina Ghose, Trevor Mundeli, Esther Dweck
Goal of 100 diffusion pathways by 2030 as a strategic AI agenda
Diffusion defined as spread of know‑how, trust and institutional capability
AI must be contextual to local language, embedded in daily workflow, and iterative
Fragmented pilots hinder scaling; scaling hubs aggregate efforts and funding
Procurement should be outcome‑oriented, allow controlled failure, and involve suppliers
All three agree that safety, accountability and good governance are prerequisites for scaling AI. Trevor focuses on auditability of model outputs; Irina highlights iterative, domain‑specific refinement; Esther points to national data‑governance structures and chief data officers. Each stresses a different layer of the safety stack but shares the overarching aim of trustworthy AI deployment [274-279][60-62][150-162].
Speakers: Trevor Mundeli, Irina Ghose, Esther Dweck
AI systems must be auditable and transparent, not opaque black boxes
AI must be contextual to local language, embedded in daily workflow, and iterative
Establish data governance, chief data officers, and sovereign data policies
Takeaways
Key takeaways
Diffusion pathways are the primary mechanism for scaling AI impact; the goal is 100 pathways by 2030.
Implementation time can be dramatically reduced through learned pathways (9 months → 3 months → 3 weeks).
Successful diffusion requires AI to be contextual (local language), embedded in daily workflows, and iteratively improved.
Procurement in the public sector must shift from lowest‑price, low‑risk focus to outcome‑oriented, risk‑tolerant, innovation‑friendly processes.
Fragmented pilots hinder scale; dedicated “scaling hubs” act as national centers of excellence to aggregate pilots, funding, and expertise.
Robust digital public infrastructure (digital ID, unified service platforms, data governance, chief data officers) is essential for AI deployment.
Safety and auditability are non‑negotiable, especially in high‑stakes domains; models need transparent, auditable interfaces (e.g., Model Context Protocol).
Balancing rapid diffusion with safety is critical; frugal, low‑cost innovation can serve LMICs while maintaining safeguards.
Digital sovereignty—control over data location and usage—is a major political/economic concern for countries like Brazil.
Resolutions and action items
Launch of a global coalition to develop 100 diffusion pathways by 2030, with partners including Anthropic, Google, Gates Foundation, UNDP, etc.
Commitment to create and fund scaling hubs in Rwanda, Nigeria, Senegal, Kenya, and additional hubs in Africa and India.
Brazil’s Ministry of Management and Innovation to roll out the INSPIRE (AI for Public Service) program, integrating government, state‑owned firms, and private sector.
Adoption of outcome‑oriented procurement reforms in Brazil (and advocated for elsewhere) to enable controlled‑failure innovation.
Development of a universal Model Context Protocol (MCP) by Anthropic to standardize AI integration across sectors.
Continued partnership between India and Brazil on digital public infrastructure (digital ID, gov.br platform) and data governance frameworks.
Unresolved issues
Specific metrics and timelines for measuring the success of each diffusion pathway remain undefined.
How to uniformly enforce auditability and transparency standards across diverse AI models and vendors.
Mechanisms for ensuring long‑term sustainability and financing of scaling hubs after initial funding.
Detailed strategies for achieving full digital sovereignty, especially data localization, without compromising interoperability.
Approaches to address workforce displacement and equitable wealth distribution resulting from AI automation.
Suggested compromises
Adopt a policy‑oriented procurement approach that tolerates limited, managed failures rather than insisting on zero‑risk contracts.
Use scaling hubs as a middle ground between completely open diffusion and isolated pilots, channeling resources while preserving innovation diversity.
Implement outcome‑based incentives for suppliers, encouraging collaboration with innovators while maintaining accountability.
Balance rapid rollout (speed) with safety frameworks by piloting in frugal‑innovation contexts before wider deployment.
Thought Provoking Comments
We call these ways of reaching the goal faster, we call them as pathways… we are now setting an ambitious goal for doing 100 diffusion pathways by 2030. A global coalition including Anthropic, Google, Gates Foundation, UNDP has been announced.
Introduces the central framing device of the discussion – ‘diffusion pathways’ – and sets a concrete, time‑bound ambition together with a multi‑stakeholder coalition, moving the conversation from anecdotal pilots to a coordinated global strategy.
Sets the agenda for the entire panel; every subsequent speaker references ‘pathways’, ‘diffusion’, and the need for scalable infrastructure. It prompts the panel to think about how to operationalise such pathways across sectors and countries.
Speaker: Nandan Nilekani
Diffusion is the spread of know‑how, trust and institutional capability that allows organisations to adopt AI safely and sustainably… like Sir Edmund Hillary climbing Everest – he creates a pathway for others; it would be stupid not to share it.
Uses a vivid historical analogy to clarify that diffusion is not just awareness but the transfer of practical capability, emphasizing the moral imperative to share knowledge.
Deepens the audience’s understanding of ‘pathways’, steering the discussion toward concrete mechanisms (shared rails, institutional capability) rather than abstract technology talk.
Speaker: Shankar Maruwada
AI deployment would seldom fail because of model complexity or performance. The only reason it fails to gain scale is perception of complexity. Three things are needed: contextual language, integration into daily workflow, and an iterative approach.
Challenges the common assumption that technical performance is the main barrier, shifting focus to usability, localisation, and continuous improvement.
Redirects the conversation toward language localisation and workflow embedding, which later leads to detailed mentions of Indic language support and the need for a universal protocol.
Speaker: Irina Ghose
We are investing in scaling hubs in India and Africa to aggregate fragmented pilots. Fragmentation across ministries and funders is a big inhibitor; hubs can channel diffusion into centres of excellence.
Identifies a systemic bottleneck (fragmentation) and proposes a concrete organisational solution (scaling hubs), moving the dialogue from isolated pilots to coordinated national‑level scaling.
Introduces the concept of ‘hubs’ that becomes a reference point for later discussion on institutional pathways and the need for coordinated governance structures.
Speaker: Trevor Mundeli
Procurement in government seeks lowest price and lowest risk, making civil servants fear mistakes. We need to shift to outcome‑oriented, policy‑oriented procurement and a culture that accepts failure as part of innovation.
Highlights a deep bureaucratic barrier—risk‑averse procurement—and offers a cultural‑change solution, linking procurement reform directly to AI scaling.
Triggers a deeper examination of internal state reforms, prompting further comments on digital ID, data governance, and the necessity of institutional change for AI diffusion.
Speaker: Esther Dweck
Failure never happens with a big bang; it slowly dies because people stop using it. Keep AI contextual to the domain, support local languages, and measure ROI by new use‑cases opened.
Provides a nuanced view of why pilots fade, emphasizing sustained relevance, localisation, and measurable impact rather than one‑off deployments.
Reinforces earlier points about language and workflow, leading to concrete examples of Anthropic’s support for ten Indian languages and the discussion of the Model Context Protocol.
Speaker: Irina Ghose
Technology has to be boring, invisible. When you stop thinking of something as technology, that’s when it has truly diffused – just like UPI is now invisible to users.
Frames diffusion as a cultural shift where technology becomes part of everyday life, not a novelty, providing a clear target for AI adoption.
Sets an aspirational benchmark that guides later suggestions (e.g., MCP, universal adapters) and underscores the need for seamless integration.
Speaker: Shankar Maruwada
OpenAgriNet shows how modular, locally adaptable infrastructure can serve smallholder farmers. We should replicate that model for personal health assistants in low‑ and middle‑income countries.
Extends the diffusion concept from agriculture to health, illustrating cross‑sector applicability and raising the stakes of scaling AI for human wellbeing.
Broadens the scope of the discussion to health, prompting safety‑focused comments and highlighting the need for auditable, trustworthy AI in high‑risk domains.
Speaker: Trevor Mundeli
We have introduced the Model Context Protocol (MCP) – think of it as what UPI was to payments. It provides a universal language for AI models to access tools and data, making integration repeatable and cheap.
Proposes a concrete technical standard that could act as the ‘shared rail’ for diffusion pathways, moving the conversation from abstract ideas to actionable infrastructure.
Creates a tangible focal point for model developers, linking back to earlier calls for universal adapters and reinforcing the ‘boring technology’ narrative.
Speaker: Irina Ghose
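The UPI analogy can be made concrete with a schematic message exchange. MCP is built on JSON-RPC 2.0, and the `tools/call` method name comes from the published MCP specification; the `crop_advisory` tool, its arguments, and the response text below are hypothetical, chosen only to echo the panel’s agriculture examples. A minimal sketch in Python:

```python
import json

# MCP exchanges JSON-RPC 2.0 messages between a client (the AI application)
# and a server exposing tools and data. "tools/call" is from the public MCP
# specification; the "crop_advisory" tool and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crop_advisory",  # hypothetical tool name
        "arguments": {"district": "Pune", "language": "mr"},
    },
}

# A conforming server answers with a result carrying typed content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "Advisory text in Marathi..."}]
    },
}

# Serialize and parse as any MCP transport (stdio, HTTP) would on the wire.
wire = json.dumps(request)
assert json.loads(wire)["method"] == "tools/call"
```

Because every tool speaks this same message shape, a client built once can call tools exposed by any sector’s server, which is the “build once, deploy across sectors” property the panel attributed to MCP.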
AI systems, especially in health, must be auditable. Black‑box recommendations are never adequate; we need transparency so clinicians can understand why a suggestion was made.
Emphasises safety and accountability, introducing the ethical dimension that balances speed of diffusion with risk management.
Shifts the tone toward caution, prompting discussion of governance, DPI stacks, and the need for robust auditing frameworks before large‑scale roll‑outs.
Speaker: Trevor Mundeli
Digital sovereignty is a major political‑economic challenge. We need resident clouds, data localisation, and mechanisms like age‑verification that protect privacy while delivering services.
Raises a macro‑level policy issue—national control over data and infrastructure—that underpins all technical diffusion efforts.
Adds a strategic layer to the conversation, linking technical pathways to sovereignty concerns and influencing the concluding remarks about future digital public intelligence.
Speaker: Esther Dweck
Overall Assessment

The discussion coalesced around the central metaphor of ‘diffusion pathways’, introduced by Nandan Nilekani and fleshed out by Shankar Maruwada. Key interventions—Irina Ghose’s focus on language, workflow and perception; Trevor Mundeli’s scaling‑hub model; and Esther Dweck’s procurement and sovereignty reforms—served as turning points that moved the dialogue from anecdotal successes to systemic challenges and solutions. These comments redirected attention toward institutional design, localisation, safety, and political economy, shaping a multi‑dimensional roadmap for scaling AI responsibly by 2030.

Follow-up Questions
How can we effectively measure the return on investment (ROI) of adding language support (e.g., Bengali, other Indian languages) to AI models to ensure it opens new use cases and benefits more people?
Irina highlighted the need to assess ROI when expanding language coverage, indicating a gap in metrics for evaluating impact of language localization.
Speaker: Irina Ghose
What is the optimal design and governance model for “scaling hubs” that aggregate fragmented AI pilots and accelerate their transition to national‑scale deployments?
Trevor described scaling hubs as a solution to fragmentation but did not detail how they should operate, suggesting further study on their structure and effectiveness.
Speaker: Trevor Mundeli
How can a universal “model context protocol” (MCP) be standardized, adopted, and integrated across diverse sectors and countries to serve as the AI equivalent of UPI for payments?
Irina introduced MCP as a potential universal adapter for AI tools and data, but its specification, governance, and adoption pathways remain unexplored.
Speaker: Irina Ghose
What concrete policies and technical architectures are needed to enhance digital sovereignty—especially data residency, operational access, and security—for countries like Brazil?
Esther emphasized Brazil’s push for digital sovereignty, noting the need for more research on data localization, sovereign cloud strategies, and related procurement practices.
Speaker: Esther Dweck
What methods and standards can make AI health recommendation systems auditable and transparent enough for clinicians and regulators to trust them?
Trevor stressed the importance of auditability in AI for health, indicating a research gap in developing practical, verifiable frameworks for clinical AI.
Speaker: Trevor Mundeli
How should policymakers balance the urgency of rapid diffusion pathways (e.g., 100 pathways by 2030) with rigorous safety safeguards in high‑stakes domains such as health and child protection?
Trevor raised the tension between speed and safety, pointing to the need for systematic safety‑by‑design guidelines that do not impede scaling.
Speaker: Trevor Mundeli
What privacy‑preserving, user‑friendly technologies can be deployed for age verification online to protect children while respecting data protection norms?
Esther described Brazil’s new law requiring age verification and the challenge of doing so without invasive surveillance, highlighting a research area in privacy‑enhancing verification.
Speaker: Esther Dweck
What metrics and evaluation frameworks should be used to assess the success and impact of AI diffusion pathways across sectors and geographies?
Nandan announced the 100 diffusion pathways goal but did not specify how progress will be measured, indicating a need for robust evaluation criteria.
Speaker: Nandan Nilekani
What specific reforms to public procurement processes can encourage innovative AI projects while managing risk and accountability within government agencies?
Esther identified procurement as a barrier to AI adoption and described mindset shifts, but concrete procedural reforms remain an open question for further study.
Speaker: Esther Dweck

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.