Building Population-Scale Digital Public Infrastructure for AI
20 Feb 2026 11:00h - 12:00h
Session at a glance
Summary
This discussion focused on scaling artificial intelligence solutions from pilot projects to population-level implementation through “diffusion pathways” – systematic approaches to spreading AI technology safely and sustainably across societies. Nandan Nilekani opened by describing how AI agricultural solutions that initially took nine months to implement in Maharashtra were later deployed in Ethiopia in three months and by Amul in just three weeks, demonstrating how established pathways can dramatically accelerate implementation. The panel announced an ambitious goal of creating 100 diffusion pathways by 2030, supported by a global coalition including Anthropic, Google, Gates Foundation, and UNDP.
The discussion emphasized that successful AI diffusion requires technology to become “boring” and invisible rather than magical, similar to how UPI payments work seamlessly without users understanding the underlying technology. Key challenges identified include moving beyond fragmented pilot projects, transforming government procurement processes to embrace innovation and acceptable failure, and building proper data governance structures. Minister Esther Dweck from Brazil highlighted the need for systemic changes in procurement, digital infrastructure, and governance, noting that civil servants must develop digital mindsets and that data silos between ministries must be eliminated.
Safety considerations were paramount, particularly in healthcare applications where lives are at stake. The panelists stressed that AI systems must be auditable and transparent rather than black boxes, especially in high-stakes public services. The discussion concluded with the vision that by 2030, successful implementation of digital public infrastructure for AI would transform the concept from “digital public infrastructure” to “digital public intelligence,” representing truly integrated and scaled AI solutions serving all of society.
Keypoints
Major Discussion Points:
– AI Diffusion Pathways and the “100 by 2030” Initiative: The discussion centers around creating 100 diffusion pathways by 2030 to spread AI benefits globally. Nandan Nilekani explains how implementation times decreased dramatically from 9 months (Maharashtra) to 3 months (Ethiopia) to 3 weeks (Amul), demonstrating how shared pathways can accelerate AI adoption across different contexts and countries.
– Scaling AI from Pilots to Population-Scale Implementation: A key challenge discussed is moving beyond impressive pilots to sustainable, institutional AI systems. The panelists identify fragmentation as a major barrier, with Trevor Mundel describing how thousands of small pilots across different government ministries create inefficiencies rather than scalable solutions.
– Government Transformation for AI Adoption: Minister Esther Dweck outlines three critical areas for state transformation: procurement processes (moving from risk-averse, lowest-price models to innovation-friendly approaches), digital infrastructure development, and data governance (breaking down silos and establishing chief data officers across ministries).
– Safety and Accountability in High-Stakes AI Deployment: The discussion addresses the tension between speed (100 pathways by 2030) and safety, particularly in healthcare where lives are at stake. Trevor Mundell emphasizes the need for auditable, transparent AI systems rather than “black box” solutions, especially when replacing human clinical judgment.
– Technical Infrastructure for AI Diffusion: Irina Ghose introduces the Model Context Protocol (MCP) as a universal language for AI systems, comparing it to how UPI simplified digital payments. The focus is on making AI contextual to local languages and workflows while creating reusable, modular infrastructure components.
Overall Purpose:
The discussion aims to establish a framework for scaling AI implementation globally through shared “diffusion pathways” that compress learning curves, costs, and risks. The goal is to move AI from experimental pilots to sustainable, population-scale public services that benefit all of society, with particular emphasis on inclusive deployment across developing countries.
Overall Tone:
The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementations and expressing confidence in achieving ambitious goals. There’s a sense of urgency balanced with responsibility, particularly when discussing safety considerations. The conversation maintains a practical, solution-oriented approach, with panelists building on each other’s points and sharing real-world experiences. The tone becomes slightly more technical when discussing infrastructure components but remains accessible and focused on societal impact rather than pure technology.
Speakers
Speakers from the provided list:
– Nandan Nilekani – Technology leader and public policy expert, discussed DPI (Digital Public Infrastructure) implementation and AI diffusion pathways
– Speaker 1 – Event moderator/host, facilitated transitions between speakers and sessions
– Shankar Maruwada – Panel moderator, works with the EkStep organization, moderated discussion on AI diffusion pathways
– Irina Ghose – Technology executive with three decades of IT experience, represents Anthropic, expertise in AI model development and deployment
– Esther Dweck – Minister of Management and Innovation in Public Service, Brazil, government official focused on public sector innovation and AI implementation
– Trevor Mundel – President of Gates Foundation, expertise in global health and development, focuses on scaling AI solutions in health and agriculture
Additional speakers:
– Jimena – Mentioned at the very end of the transcript but did not speak during this discussion
Full session report
This comprehensive discussion on scaling artificial intelligence solutions from experimental pilots to population-level implementation centred around the concept of “diffusion pathways” – systematic approaches to spreading AI technology safely and sustainably across societies. The conversation brought together perspectives from technology companies, international development organisations, and government leaders to address one of the most pressing challenges in AI deployment: moving beyond impressive demonstrations to institutional, scalable impact.
The Diffusion Pathway Framework
Nandan Nilekani, continuing from a previous presentation, illustrated the transformative potential of established diffusion pathways through a compelling case study in agricultural AI. He described how an AI solution for farmers that provides “access to prices, access to weather information” initially required nine months to implement in Maharashtra, was subsequently deployed in Ethiopia in just three months, and later implemented by Amul for dairy farmers in merely three weeks. This dramatic acceleration demonstrates how shared learning and institutional capability can compress implementation timelines whilst expanding applications from crop agriculture to livestock management and from Asia to Africa. The agricultural AI app has been downloaded by 2.5 million farmers, demonstrating significant scale.
This success story underpinned a global initiative to create 100 diffusion pathways by 2030. Supported by a coalition including Anthropic, Google, Gates Foundation, and UNDP, this initiative aims to develop systematic approaches for deploying AI solutions across sectors and countries. The pathways are designed as “shared rails” that compress learning curves, costs, and risks, enabling AI adoption by all segments of society rather than remaining confined to technical experts.
Shankar Maruwada, serving as moderator, provided crucial historical context by drawing parallels to industrial revolutions, noting that whilst France invented better than Britain in the first industrial revolution, and Germany out-invented the United States in chemistry, it was diffusion capability rather than invention that determined which societies ultimately benefited most from technological advances. This perspective emphasises that the challenge is not merely developing better AI models, but creating the infrastructure and institutional capabilities necessary for widespread, equitable adoption.
From Technology as Magic to Technology as Boring
A central theme throughout the discussion was the evolution of technology from mysterious and magical to mundane and invisible. Maruwada illustrated this concept with his eyeglasses analogy and a demonstration with the audience, asking them to raise hands if they used UPI payments (most did) and then if they understood the underlying protocols (very few did). This exemplifies successful diffusion – when technology becomes so integrated into daily life that users stop thinking of it as technology at all. As Maruwada noted, we stop thinking of something as technology when it becomes truly integrated into our lives.
Irina Ghose from Anthropic reinforced this perspective, arguing that AI deployment failures rarely stem from technical complexity or model performance limitations. Instead, they result from the perception of complexity and the failure to make AI contextual to users’ daily workflows. She emphasised three critical requirements for successful diffusion: contextualisation to local languages and domains (noting that Anthropic supports “10 Indian languages from Hindi to Malayalam to Gujarati to Urdu”), integration into existing workflows rather than requiring new processes, and iterative implementation that allows for continuous improvement.
Ghose also highlighted that “India happens to be the second largest user base of Claude outside the US,” underscoring the country’s significant role in global AI adoption. This insight challenges conventional approaches to AI deployment that focus primarily on technical capabilities, suggesting instead that success depends on making AI intuitive and accessible to end users who are not machine learning experts but need AI to solve practical problems in their daily work.
Addressing the Fragmentation Challenge
Trevor Mundel from the Gates Foundation identified fragmentation as a critical barrier to scaling AI solutions. He described observing thousands of well-intentioned pilot projects across different government ministries – agriculture, education, health, and finance – each attempting to build separate AI capabilities and digital public infrastructure. This fragmentation, whilst demonstrating innovation energy, actually impedes scaling by creating inefficiencies and preventing the consolidation of learning and resources.
To address this challenge, the Gates Foundation is investing in “scaling hubs” in Rwanda, Nigeria, Senegal, and soon Kenya, working with Smart Africa as a pan-African venture. These hubs serve as aggregation points where governments can channel funding and coordination efforts to move promising pilots to large-scale implementation. The approach recognises that whilst diffusion benefits from organic spread, some coordination mechanisms are necessary to prevent the waste of resources and to accelerate the transition from pilots to institutional systems.
Government Transformation for AI Adoption
Minister Esther Dweck from Brazil’s Ministry of Management and Innovation in Public Service provided detailed insights into the institutional changes required within government to enable AI adoption at scale. She outlined the INSPIRE program (“AI for Public Service with Innovation, Responsibility, and Ethics”) and described three critical areas requiring transformation: procurement processes, digital infrastructure, and governance structures.
The procurement challenge is particularly acute because traditional government purchasing prioritises lowest price and lowest risk, with civil servants facing potential audit consequences for any perceived mistakes. This creates a culture fundamentally opposed to innovation, which inherently involves experimentation and acceptable failure rates. Dweck described efforts to shift from process-oriented to outcome-oriented procurement, allowing interaction with vendors during development and accepting that innovation requires iterative improvement.
Brazil’s approach includes establishing chief data officers across all ministries and implementing the gov.br digital platform to break down data silos that prevent effective AI implementation. The government has also developed practical applications, such as an AI system for university enrollment exams and age verification systems with a compliance deadline of 17 March. Brazil is pursuing digital sovereignty through three levels: data sovereignty (knowing where data is located), operational sovereignty (having access to and control over data), and technological sovereignty (understanding and controlling the technologies being used).
The Brazilian experience illustrates the systemic nature of the challenge, requiring coordinated changes in legal frameworks, organisational structures, incentive systems, and professional capabilities across government institutions. Brazil is also collaborating with India on “verifiable convention” technology, demonstrating international cooperation in AI governance.
Safety and Accountability in High-Stakes Applications
The discussion addressed the inherent tension between the urgency of AI deployment and the critical importance of safety, particularly in applications where lives are at stake. Mundel articulated this tension powerfully, noting that delays in deploying AI for healthcare and education have real human costs, whilst emphasising that AI systems for health recommendations cannot be “black boxes” that provide unexplainable outputs.
Instead, healthcare AI systems must be auditable and transparent, allowing healthcare professionals to understand the reasoning behind recommendations just as they would with human clinicians. This requirement for transparency and auditability represents a significant technical and design challenge, demanding AI systems that can not only provide accurate recommendations but also explain their reasoning in ways that healthcare professionals can evaluate and patients can understand.
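The auditability requirement described above can be made concrete with a minimal sketch. The pattern is simply to record, for every recommendation, the inputs, the model version, the output, and a human-readable rationale, so that a clinician can later reconstruct why a recommendation was made. Everything here is illustrative: `recommend` is a hypothetical stand-in for a real clinical model, and the rule names and fields are invented for the sketch, not drawn from any system mentioned in the session.

```python
import json
import datetime

# Sketch of an audit trail for AI recommendations: every call records
# inputs, model version, output, and a rationale string so the decision
# can be reviewed later. `recommend` is a hypothetical toy stand-in for
# a real model; nothing here comes from an actual health system.

AUDIT_LOG = []

def recommend(symptoms):
    """Toy stand-in for a clinical model: returns (advice, rationale)."""
    if "fever" in symptoms and "cough" in symptoms:
        return "refer for malaria test", "fever plus cough matches referral rule R3"
    return "home care, re-check in 48h", "no referral rule matched"

def audited_recommend(patient_id, symptoms, model_version="demo-0.1"):
    """Wrap the model call so every output leaves a reviewable record."""
    advice, rationale = recommend(symptoms)
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "patient_id": patient_id,
        "model_version": model_version,
        "inputs": symptoms,
        "output": advice,
        "rationale": rationale,   # the explanation a reviewer audits
    })
    return advice

advice = audited_recommend("p-001", ["fever", "cough"])
print(advice)                                 # the recommendation itself
print(json.dumps(AUDIT_LOG[-1], indent=2))    # the reviewable record
```

The design choice worth noting is that the rationale travels with the record rather than being reconstructed after the fact, which is what distinguishes an auditable system from a black box whose outputs must be explained retrospectively.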
Technical Infrastructure for Interoperability
Ghose introduced the Model Context Protocol (MCP) as a potential solution for creating interoperable AI systems. She positioned MCP as potentially playing a similar role for AI that UPI plays for digital payments – providing a universal language that enables different AI tools and data sources to work together seamlessly.
This technical standardisation is crucial for preventing the recreation of silos at the AI level. Rather than building separate AI capabilities for agriculture, healthcare, education, and other sectors, standardised protocols could enable reusable components that reduce development costs and accelerate deployment across different domains.
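The interoperability idea behind MCP can be illustrated with a small sketch. This is a schematic of the general pattern (every compliant server answers the same discovery and invocation requests, so one generic client works against all of them), not the actual MCP wire format; the method names, message fields, and example tools below are invented for illustration.

```python
import json

# Schematic sketch of a tool-interoperability pattern like MCP:
# every server answers the same two requests ("tools/list" and
# "tools/call"), so a client can discover and invoke tools without
# server-specific integration code. Field names and tools here are
# illustrative assumptions, not the real MCP specification.

class ToolServer:
    def __init__(self, name):
        self.name = name
        self._tools = {}          # tool name -> (description, callable)

    def register(self, name, description, fn):
        self._tools[name] = (description, fn)

    def handle(self, request: str) -> str:
        """Handle one JSON request string and return a JSON response."""
        msg = json.loads(request)
        if msg["method"] == "tools/list":
            result = [{"name": n, "description": d}
                      for n, (d, _) in self._tools.items()]
        elif msg["method"] == "tools/call":
            _, fn = self._tools[msg["params"]["name"]]
            result = fn(**msg["params"]["arguments"])
        else:
            return json.dumps({"error": "unknown method"})
        return json.dumps({"result": result})

# Two unrelated "sector" servers expose their tools the same way.
agri = ToolServer("agri")
agri.register("crop_price", "Latest market price for a crop",
              lambda crop: {"crop": crop, "price_inr_per_qtl": 2150})

health = ToolServer("health")
health.register("clinic_lookup", "Nearest clinic for a district",
                lambda district: {"district": district, "clinic": "PHC-1"})

# One generic client loop works against both servers unchanged.
for server in (agri, health):
    listing = json.loads(server.handle(json.dumps({"method": "tools/list"})))
    print(server.name, "->", [t["name"] for t in listing["result"]])
```

The point of the sketch is the reuse it enables: the agriculture and health servers share no code paths with the client beyond the common message shape, which is the sense in which a standard protocol prevents each sector from rebuilding its own silo.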
Cross-Sector Learning and Adaptation
An unexpected area of consensus emerged around the potential for cross-sector learning and adaptation of AI solutions. Mundel expressed particular interest in agricultural AI applications, noting that agriculture appears ahead of healthcare in developing personally useful AI systems for end users. He described the vision of personal health assistants that could provide safe, contextual health information to people in low- and middle-income countries who may be far from healthcare facilities.
This cross-sector potential illustrates the value of the diffusion pathway approach, where successful implementations in one domain can be adapted and applied to others, accelerating overall progress whilst reducing development costs and risks.
Implementation Challenges and Continuous Evolution
The discussion highlighted that AI deployment differs fundamentally from traditional software procurement. Rather than a one-time purchase of a finished product, AI systems require continuous investment in data collection, model improvement, and service enhancement. This creates ongoing financial and organisational commitments that many adopting organisations are not prepared for.
Successful AI diffusion therefore requires not just initial implementation but sustainable models for continuous evolution, including technical infrastructure for data collection and model updates, organisational capabilities for managing ongoing improvement cycles, and financial models that support long-term investment rather than one-time purchases.
Conclusion and Future Vision
The discussion concluded with Shankar’s vision of transformation to “digital public intelligence” by 2030, representing the ultimate goal of making AI so thoroughly integrated into public services and social systems that it becomes invisible infrastructure rather than a distinct technology requiring special attention.
The 100 diffusion pathways by 2030 initiative represents an ambitious attempt to systematise this transformation. Success will depend on maintaining the balance between structured coordination and organic innovation, between speed of deployment and safety requirements, and between global collaboration and local autonomy. The conversation ultimately reframed AI scaling from a primarily technical challenge to a comprehensive transformation challenge requiring coordinated changes in technology, institutions, governance, and culture.
The discussion provided a foundation for understanding both the opportunities and complexities involved in achieving AI diffusion that benefits all of society, emphasising that whilst significant challenges remain, there is substantial alignment among key stakeholders on the fundamental principles and approaches needed for success.
Session transcript
bot which farmers use, and millions of farmers today, 2.5 million farmers, have downloaded this app. And this was built to make sure that farmers have access to the best information about access to prices, access to weather information and so on. And it’s very sophisticated. It took nine months to get this going in Maharashtra. But we learned a lot about how to do these things. And the next implementation was done in Ethiopia. So in Africa, and Ethiopia did the same thing in three months. So essentially what took us nine months the first time around took us three months. And recently, at the request of the Prime Minister, Amul implemented the whole thing. And Amul implemented it for cows, as a bot for dairy farmers to understand about the cows, whether they’re lactating or, you know, their milk and so on.
And that was done in three weeks. So I think you went from nine months to three months to three weeks. So what is the message in that is that if you get the lived experience of implementing these kind of systems for public good, you can actually dramatically reduce the time in which you can do that. And we call these ways of reaching the goal faster, we call them as pathways, because once you have a pathway, then you can get, somebody else can get to the same point quicker. And just like we had this notion that we’ll have 50 in five, 50 countries in five years, we are also now setting an ambitious goal for doing 100 diffusion pathways by 2030.
In other words, by 2030, all of us together across the world will develop these pathways to diffuse the use of AI in a positive way to help farmers, improve the life of young kids, allow people to get jobs through something called Blue Dot. There are so many things going on, but all of them are designed to be effective, to improve and make better people’s lives, to meet people’s aspirations in a very inclusive way so that everybody is in, nobody is left out. And so we announced a partnership. We announced a coalition of this, of 100 diffusion pathways by 2030. We announced that yesterday or day before yesterday. And we have a global coalition. Anthropic is there.
Google is there. Gates Foundation is there. UNDP is there. A whole host of people are there. And it’s a very open, it’s a big tent. Anybody can join the coalition. But our goal is all of us work together, in a focused manner, to develop these pathways of diffusion of different kinds of positive AI use cases and then actually make it happen in countries around the world. So just like 50-in-5 was a DPI goal, 100 diffusion pathways by 2030 is the AI goal we have. And we are confident that all of us collectively can get there. So I think this is important. I think it’s strategic for the world that we show the good use of AI, and it’s strategic that all of us work together to do that.
Thank you very much.
Thank you so much, Mr. Nandan. At this point, I would love to invite our panelists up to the stage. We’ll start by taking a quick group photograph together and then begin the discussion. So let me invite Minister Esther Dweck, Mr. Trevor Mundel, Ms. Irina Ghose, and Mr. Shankar Maruwada, accompanied by Nandan, to be on the stage for a quick group photograph. Thank you. Let me now hand it over to Shankar Maruwada, who will moderate the next panel.
Good afternoon. We have an exciting panel discussion ahead. Let me start off with where Nandan stopped. Hundred pathways. What are these pathways? These are diffusion pathways to AI impact, safely and at scale. Let me provide a bit of background. France invented better than Britain in the first industrial revolution, yet Britain won it. Britain in turn out-invented the US in steel, Germany out-invented the US in chemistry, yet it’s the US that won the second industrial revolution. What was the crucial thing? It was not better invention or even innovation. The missing ingredient was diffusion, which the United States of America did much better: diffusing the benefits and the impact of this technology throughout the economy and the society. When we say diffusion, we don’t mean awareness or access. Diffusion, as Nandan described, is the spread of know-how, trust and institutional capability that allows organizations to adopt AI safely and sustainably. As he explained, Maharashtra was the pioneer to do this in India. It’s like Sir Edmund Hillary climbing Mount Everest for the first time: he inspires, he creates a pathway for others to follow. And it would be rather stupid if, after he came back, he said, I am not sharing this with others; the pathway I created, I have removed it, so now you guys find your own pathway. The societies that create such pathways allow a whole lot of others to prosper, to make progress, to create impact inclusively and equitably. That is what Nandan meant when he talked about diffusion, hundred pathways. These are the hundred diffusion pathways across sectors, countries, continents. Some may be led by proprietary models, some may be led by sovereign efforts, some may not be; it may differ. It’s the choice of the AI adopter to decide which pathway works best for them.
So the diffusion infrastructure we are talking about creating isn’t a platform, an app, or a model. It’s shared rails that compress learning curves, cost and risk, so that AI can be used by all of society, for all of humanity. With that, I would like to begin the panel discussion. Irina, from the model builder’s perspective, what needs to be true for AI to be deployable at population scale? Not just impressive pilots, especially in high-stakes public systems. What needs to happen?
Thank you so much, Shankar. And absolutely a pleasure and honor to be here with all of you. Thank you so much. The way I think about it is AI deployment would seldom, if ever, have any roadblocks because of a complexity in the model or the performance. The only reason it fails to gain scale is the perception in our minds about its complexity. And one of the things that we really feel is that you have to be all in, first yourself, then diffuse it to people around you to make it happen. Now, if you think about it, in a pilot, you’ve got experts doing it, you’ve got guardrails, you’ve got the intensity of people, and you’ve got a select group.
Now, when that kind of goes and spreads out, you’ve got a teacher in Bihar implementing it, you’ve got a health worker in Coimbatore, you’ve got a small business leader in Indore doing it, who are not into ML; for them, AI will start having significance when it stops being a scientific tool and becomes something intuitive for them. So three things come into play. The first one is that for diffusion, it needs to be contextual to the local language that you speak. Second, it needs to be in the workflow of what you’re doing every day, so you don’t need to do net new things. And the third is you have to be iterative and be at it to make it happen.
And I’ll give you a small example as to how diffusion is happening. First of all, Shankar, really honored to have worked with EkStep to make it diffused across so many realms of life. And at Anthropic also we said that it’s not a technology for the sake of the technology, only in the hands of developers and builders. We found that India happens to be the second largest user base of Claude outside the US. So a big round of applause to all of us out here for making that happen. And what we also felt is that when we are building tools, one of the tools you might have heard of is Cowork, which earlier used to be done a lot by developers.
But now, people who are information workers or who are just thinking as to how to solve things. The idea is that you do not have to develop code, read a lot of intense things. You can make the tool work for itself. So in my mind, diffusion really means, first, how do I think that everything that I do, I have to be AI first. Second, the ecosystem being in India around myself, I enthuse everybody. And third, how am I giving back to everybody in the last mile to make it happen?
Fantastic. One of the things I liked about what Anthropic CEO Dario Amodei said is very soon, imagine a country with a whole bunch of geniuses living in data centers. What will that country do? Think about it. But till we reach there, and Dario says in two, three years, but till we reach there, Trevor, as president of Gates Foundation looking at global health, you are dealing with a situation where you’ve seen a whole bunch of AI pilots. Not too many of them have scaled. From your experience, what separates pilots from systems that have scaled and become institutional? What separates an experiment from a scaled, institutional, sustainable impact?
Thank you, Shankar. And thank you for the invitation to be on this good panel. And also for the overview you gave me a few days ago of the very good work you’re doing at EkStep. I learned about Open AgriNet and where that has made progress. But on this issue of scaling of AI, I had an opportunity to, this morning, sit down with the heads of entities which we call scaling hubs. There are two of them here in India, and there are three, soon to be four, in Africa. And there’s also a pan-African venture called Smart Africa. And you might say, well, what are these scaling hubs? So the idea is that we would support a partnership with the governments, now in Rwanda, Nigeria, Senegal, and soon to be Kenya, wherein we place funding that the government can use to take the pilots that are out there and to really push them to large scale.
And why would we need a hub like this to do that? Well, one of the big barriers that we are currently seeing is the fragmentation that is occurring out there in terms of many, many ventures, some that we fund, others by other funders, everything with very good intent. Let’s do a small pilot. Let’s quickly do something over here. Thousands of them occurring out there. You take it at a government level: they have people approaching the Ministry of Agriculture, the Ministry of Education, the Ministry of Health, the Ministry of Finance, all of them with different groups, and on the DPI front, all of them trying to put in place the necessary DPI infrastructure to support their pilots. And this fragmentation which is occurring out there is, I think, a big inhibitor of scaling to the real population scale that we need.
So we are going to invest in these hubs that can be points of aggregation. We don’t want to inhibit diffusion. People have the idea of diffusion as a more random process which goes anywhere, and there’s something good about that. But if we can channel the diffusion into these centers of excellence, I think at the country level, the feedback that we’ve had from the governments is that that is a way that we are really going to get to scale more rapidly. Thank you.
Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathways. Diffusion by definition is everywhere, right? Pathways by definition are fixed. So it’s how do you spread a technology along certain fixed pathways towards certain impact. It is indeed a stress. I believe that stress needs to be there because we are talking of the stress of safe AI impact at scale. But it is indeed a challenge, and together we have to solve it very quickly. I want to talk a bit about Minister Esther Dweck’s ministry, MGI, or the Ministry of Management and Innovation. Isn’t that a cool concept? The government of Brazil has a minister and a ministry looking after the idea of innovation and management.
They are collaborating very closely with India on a range of issues, and it’s my honor, Your Excellency, to have you here. Minister, I want to ask you a question. Scale efforts and diffusion a lot of times fail inside government, not because of technology, but because of procurement, process change and accountability. What has to change inside the state for AI to move from pilots to durable public services?
Thank you, Shankar. Thank you for inviting me and also for the partnership that we have with India. And Brazil is looking for this partnership with India because of scale. If anything can be scaled up in India, it can be in Brazil because compared to India, we are not such a big country. But compared to many other countries, very large. So for us, very important, this partnership. But when you talk about the problem inside the state, our ministry was created. The whole name is Ministry of Management and Innovation in Public Service. So we are focusing on innovation inside the public services. And we created a special secretary for state transformation because we saw that the state had to be transformed in order to actually be able to have innovation.
Because if we stay with the same way of doing procurement, actually we won’t be able to do it. So we think that, in terms of AI, we need to transform the state in three main areas. The first one is procurement, for sure. Any kind of innovation procurement needs to be changed. Then also the infrastructure, especially the digital infrastructure, and of course the governance. And when I talk about the procurement process, usually people are looking for the lowest price, lowest risk, and usually civil servants are very afraid of doing procurement because the auditing bodies are trying to look if they’re doing something wrong. So they usually try to go for the lowest risk possible.
And this is what prevents innovation inside the government, especially because innovation comes with errors. We know that any innovation might come with errors. And if the civil servant cannot make any mistakes, then we never innovate. So one of the things that we found out when we were asking how to do innovation procurement in the government, the first thing people say is, I’m afraid of making any mistakes, then the auditing body will come after me and then I won’t be able to be a civil servant. So what we have done is to change the mindset of the procurement process. Instead of being more process-oriented, we are looking for a more policy-oriented approach, looking at the outcomes and not only the lowest price.
And with many other ministries, we are discussing how to build that culture of innovation procurement, with the idea that it may fail, and that you can interact with the supplier you're buying from. Because, of course, you're buying something that doesn't exist yet; how do you explain to them what you need? So there are a lot of things you have to change in procurement in order to actually be able to do AI. And, of course, the second thing is the digital infrastructure. As Nandan said before, since 2023, when we came here for the G20 in India, we brought this idea of DPI to Brazil as something very strong.
We already knew that we had something that could be called DPI, but we didn't know the concept before. And one of the things that was very important for us was our digital ID and our digital platform for services, both called gov.br. Based on this platform, we are now discussing not only optimization but also more personalized services: if you know the citizen, you can provide them a specialized service, and we are using AI to do this, to work out what people actually need. So I think having a good DPI infrastructure, especially in terms of identification, also enables better data governance.
That's the third thing I would like to mention: governance inside the state. When we launched our plan for AI, and today we had a session on the Brazilian AI plan, the first thing the president said is that we need our database, the Brazilian database. We cannot have silos anymore. We cannot have one ministry saying, no, this is my data, no one can access this data. We have to do it, of course, in a privacy-preserving and secure way. So we discussed all of the data governance, and we're about to launch a new decree on data governance, requiring every ministry to have a chief data officer, someone who actually knows the data and knows how to use it.
So we are looking at all these things in order for the state to be able to innovate with AI. Thank you. That's it. Thank you.
Wonderful. Thank you. Irina, you've been in the IT space for three decades. You've seen the Internet boom and bust, and now you're seeing AI. From your vast experience, what is the most common failure mode when AI moves from pilots to everyday workflows? And what kind of safety infrastructure actually prevents it?
Yeah, I think one of the things we have to remember is that the failure never happens with a big bang; it just slowly dies, because people gradually reduce the level of interaction they have with it, and you suddenly realize it's not relevant anymore. So what really needs to happen is that you keep it in a way that people use daily, and in a way that is contextual for each of them. For example, one of the reasons it might fail is that the datasets come from a country of a different nature, setting benchmarks in banking and financial systems, when agriculture is the biggest thing that we require here. Hence collecting data for Indian languages, nuancing it by, say, legal, by agriculture, by what people are speaking in that dialect, in that language, is very critical. So if I look at three things that need to happen: first of all, keep it contextual to the domain, the micro-domain in which it is required. At Anthropic we have worked closely to ensure that we now have Indic language availability for 10 Indian languages, from Hindi to Malayalam to Gujarati to Urdu; it's available in the latest models and is incrementally improving day by day. And the last part I would say is ensuring that, whatever you are doing, the ROI we look at should be: if I invest in a language, say Bengali, how many net new use cases have been opened up because of that, and how many more people have got the benefit of it. And I think the work we are doing with EkStep, across the fields deployed, education, healthcare, everything, that's the litmus test we should be measuring ourselves on.
I want to ask a question to the audience by raising hands: how many of you use UPI? Keep your hands up if you know how UPI works, what's the protocol behind it, what's the technology behind it. Hands are steadily coming down. This is my point: we don't care about technology as long as it works. For something to work at population scale, technology has to be boring, technology has to be invisible. Till the time it is not, it has not diffused; it is just some magic mystery thing that we are all stuck with, figuring out what to do. It's a long journey from technology as magic to technology as normal, boring. In fact, a wise old man once told me: when you stop thinking of something as technology, that's when it has diffused. 500 years ago, this was magical ocular technology.
It allowed someone to see. Now we don't think of it as technology. A day will come when we don't think of AI as technology. That is the day we can say that AI has diffused through all of society. We have some way to go for that. Trevor, when you hear of things like Open AgriNet, some exciting work happening, what makes you think that it feels like infrastructure versus yet another project going down the path of pilotitis, death by pilots?
Well, I do look a little bit with envy at Open AgriNet. Having looked across the work that the foundation does in agriculture and in health, traditionally the narrative has been how fortunate those health folks are, because there's such huge funding in the health areas, such huge investment in research, in genomics, in human health, and much less in plant genomics, which admittedly is potentially more complex, and in the clinical trial infrastructures for developing new products on the human health side versus the agriculture side. But now we come to AI, and I have to say I look at Open AgriNet and I think that the agriculture community is ahead of human health in terms of the implementation of a system which is personally useful to a smallholder farmer, for instance: being able to get the information they need, being able to determine what crop disease they have to deal with, or a disease in their cattle, and what the weather is going to be, and how they can maximize the finances on their small farm.
All of these types of things I would love to see in the health space: a personal health assistant. In low- and middle-income countries, so many people are not very close to a tertiary hospital, and they may be 10, 20 miles even from a primary health care clinic. Can we not provide them with a system that can personally give them the information they need, in a safe way? And I think Open AgriNet really puts those components of infrastructure together. The way that it's modular, the way that you can adapt it to the local circumstances, is in many ways exactly what we need on that personal health side of the picture. So I have some envy, but I hope we can duplicate that on the health side.
Thank you.
Thank you, Trevor. Open AgriNet is just a group of organizations coming together, collaborating, as Trevor said, each bringing in one piece of the puzzle so that together we can create those diffusion pathways. And as Nandan said, that is what allows us to take something from Maharashtra, which took nine months, to Ethiopia in three months, and back to India in three weeks; from agriculture to livestock, from India to Ethiopia, from Asia to Africa and back. That is the exciting possibility that India has been on the journey of for the last 15 years, what we call DPI. The thing about DPI is that when you start with a strong use case in mind, as Irina and others have said, you harness technology, so technology becomes a good slave to a very powerful cause.
Then you take advantage of rapidly evolving technology. Minister Dweck, if you designed a national diffusion pathway for one public service, what would you prioritize first: institutions, incentives, data readiness, or governance?
Well, it's difficult to choose only one thing, I guess. Maybe from this management perspective, you're always looking for some kind of systemic approach, trying to look at all these things together. And actually, we recently launched an R&D program for AI in Brazil. It's called INSPIRE in English; in Portuguese the same acronym means BREATHE. It stands for AI for Public Service with Innovation, Responsibility, and Ethics, and it has this systemic approach inside of it. Because the first thing is that we created this new institutional arrangement. It's not entirely new, but in this R&D project we have the government, of course, some state-owned companies, some private companies, and our innovation ecosystem in Brazil, all of them brought together in order to help the government build new AI platforms.
Because although we're already using AI in Brazil, we saw that we have a real lack of technological expertise, and a lack of financial support as well. So we're trying to create this platform where we can offer many bodies of the government different solutions that can be used in many different areas, as you said. So, first, we are discussing how to have more sovereignty over the data, how to use it better, and also how to get the data ready to be used. One thing I was explaining before: we are using AI to help improve our datasets. So it's going both ways.
Another thing, from the governance perspective, is that we're creating, as I mentioned, shared tools and common practices. Specifically in this project, we're creating a generative AI platform and trying to apply it to different solutions. Recently, at the end of last year, we had the university enrollment exam for people finishing high school. So we created a complete solution for them to know, when they're finishing school, what they're going to do. Are they going to the job market? Are they going to enroll in university? How to apply? What's the best thing for them? So we are using AI to help them actually decide this. And we're doing the same thing for health care and for the
agriculture sector as well. So we're looking at all these things. And, of course, at capacity building. We are doing a lot of training of civil servants. We have four tracks, actually: for the top managers, for IT experts, for people handling data, and for regular civil servants. Because when we talk about state transformation, the one thing you have to train and to change, of course, is the civil servants. Nowadays they have to have a digital mindset, and some of them have been there for many years and didn't have the digital capabilities. So we're training all of them in digital capabilities, and specifically on AI as well, to think about how to use this new technology in their regular work in order to improve the civil service.
So I think it’s a more systemic approach there.
Pathways are like digital rails. What should model developers focus on so that AI can plug into these pathways safely across sectors and countries?
Very interesting. I'll try to paint the picture by giving some context. Now, think about it: we're talking a lot about agriculture, which has the last mile. If you were to solve for that farmer day in and day out, there are various kinds of work they have to do. Look at the weather conditions, one source of data. Look at how the crop yield is performing, another source of data. The market prices, another source of data. Whatever has to be done for reaping and sowing. If anybody wants to infuse AI on top of these kinds of data, and you have to build it from scratch every time, it is so cumbersome.
Now, do the same thing that, Nandan, you've been talking about. At one point in time, all of our connectors were different, until the universal adapter came and took that away. We all use UPI for digital payments; do we know anything about the technology behind it? Whatever is happening behind that small micropayment, we have no idea. So one of the things to be done here is to have a universal language which accesses the tools as well as the data. So we came out with this concept at Anthropic in 2024 called the Model Context Protocol. And very simplistically put, I think of MCP as being to AI what UPI was to payments.
And in effect, what it really does is that you develop things once and make them MCP-ready, and for anything else you want to do further, you do not have to keep writing it again and again. So all the cases of agriculture, healthcare, anything else put together, that can happen seamlessly. Why does it matter for India? There's a lot of data which already exists in health, in education, in the various ways that citizen services are delivered, and that is a rich level of data. So if we make this data AI-ready, using the tools which are out there, then diffusion, and that accountability of everybody coming together, will be that much quicker.
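The "develop once, make it MCP-ready" idea can be sketched in a few lines. This is an illustrative toy, not the actual MCP SDK: the `ToolRegistry` class, its methods, and the stub tools below are all invented here to show the pattern of registering a tool once behind a uniform discover-and-call interface that any AI client could then reuse.

```python
# Illustrative sketch of the "define a tool once, let any client use it" pattern
# behind a tool protocol like MCP. Names are invented; this is not the MCP SDK.
from typing import Any, Callable, Dict


class ToolRegistry:
    """A uniform catalogue of tools: register once, discover and call anywhere."""

    def __init__(self) -> None:
        self._tools: Dict[str, Dict[str, Any]] = {}

    def register(self, name: str, description: str) -> Callable:
        def decorator(fn: Callable) -> Callable:
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return decorator

    def list_tools(self) -> Dict[str, str]:
        # What a client model sees: names and descriptions, not implementations.
        return {n: t["description"] for n, t in self._tools.items()}

    def call(self, name: str, **kwargs: Any) -> Any:
        return self._tools[name]["fn"](**kwargs)


registry = ToolRegistry()


@registry.register("weather", "Daily forecast for a district")
def weather(district: str) -> str:
    return f"Forecast for {district}: light rain"  # stub data source


@registry.register("market_price", "Latest mandi price for a crop")
def market_price(crop: str) -> str:
    return f"{crop}: 2400 INR/quintal"  # stub data source


# Any AI client can now discover and invoke tools through one interface,
# instead of integrating each data source from scratch every time:
print(registry.list_tools())
print(registry.call("weather", district="Pune"))  # → Forecast for Pune: light rain
```

The point of the design is that the weather and market-price integrations are written once; every new AI application only speaks the common discover-and-call interface.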
Excellent. A lot of people who deploy AI have an old notion that it's like normal software: you buy great software, it is perfected, you deploy it, and you can close the project and go away. In AI, that is just the start, because as you use it, data comes in, the data gets better, the models get better. With better models, you provide better services; usage increases; more usage, more data. And while this cycle is happening, the models improve and the data improves. So for a lot of adopters, once they go beyond procurement, how do you continuously invest to upgrade and evolve? That's again a very important question. So when we talk of 100 diffusion pathways, these are 100 diffusion pathways to safe AI impact at scale, which creates a second stress, and I'll come to you on that, Trevor.
When lives are at stake, where do you draw the line between speed, 100 pathways by 2030, and safety? Coming from health, safety means literally lives, right?
Yes, Shankar, there are a lot of lives at stake, and I feel the urgency. Every year we don't have the next generation of malaria vaccines, we see hundreds of thousands of young children dying. Every year we don't have a personalized education coach for every child, no matter where they are, we see a tremendous amount of human potential wasted. So there is this urgency to get things done, and that might make one think very carefully on the safety front. It is that safety issue where people in the health area are saying we need to take a step back, we need to look carefully at the frameworks before we just jump in with, say, the personal health application I talked about: how would that be gated, how would that be guarded?
I do think that because of the excellence of the DPI stack here in India, and because of the thousands of application efforts I see, you are going to probe those frameworks for the safe introduction, probably first in a context which is, as Nandan was mentioning, the frugal innovation that will be relevant across lower- and middle-income countries and actually beyond. So I do think that we are very much looking at India as the foundry of AI application, and we want to see those frameworks whereby we can safely introduce the technology. In terms of the technology itself, just having a black-box system that gives a health recommendation is almost never adequate, almost never satisfactory.
These systems need to be auditable. And I have to say that Anthropic has made quite a lot of progress in their research on how these concepts, how these recommendations, are actually represented in the model. People want to be able to audit that. They don't just want something that comes out of nowhere. If you have a human clinician who makes an error, you can talk to that person. You can say, why did you think this was the case when you made a misdiagnosis here? Was it because you didn't elicit the right question from the patient, or you transcribed incorrectly? That is the kind of transparency that we actually demand of AI systems at the end of the day.
So I think that between the work going on here in India and some of that transparency research, we can get there.
Thank you, Trevor. Minister Dweck, as you’re thinking of implementing AI solutions at scale, what is the hardest political or economic challenge, and what are some tips on how one should deal with it?
Okay. I think it's kind of a political economy issue. One thing is the workforce problem, because we may be heading toward this utopia where no human needs to work anymore and the machines work for us. So how do we divide the wealth that comes from these machines working? That's one point. But more concerning for us in Brazil right now is digital sovereignty. Of course, very few countries, maybe only two countries in the world, are totally digitally sovereign right now. But I think we have to increase our digital sovereignty in terms of being able to
have our services and be able to operate them, to know where our data is, and to know how we'll be able to keep providing our services to our populations. So we are discussing this a lot in Brazil, how to increase our level of digital sovereignty. Of course, we know we're probably not going to be totally digitally sovereign within a few years, but at least we want to increase it. And we're actually working with our suppliers in order for them to offer us more sovereignty, or at least some assurance that we will not have any discontinuity. So using the state's capacity and the state's procurement purchasing power is very important for this.
And we're actually using it in order to talk to our suppliers. We discuss this sovereignty at three levels. At the data level, we're bringing the data back to Brazil. As I mentioned before, we have two federal state-owned companies that are hosting resident clouds, so we know where the data is; but only knowing where the data is is not enough, so we are increasing our operational access to the data. And the third level is the technology you're using, something we've been discussing a lot here. It's not directly related to AI, but it's related to digital services. One thing we're doing together here in India, using a technology that was developed here, verifiable credentials, which was very important for us: we are using it right now in two pilot projects, but we want to scale it up.
One is related to rural credit, and the second one is related to something that I think the whole world is discussing: how to protect children online. In Brazil, we passed a law last year, a very important law. It was passed very quickly, after one of the digital influencers showed what was happening to children on the Internet, especially on social media, and the bill says that by 17 March you have to know the age of the person who is accessing the Internet. So how do you do this in a way that protects privacy? We don't actually want to know what people are using. So we're trying to use verifiable credentials to do this age verification in a very simple way, very easy for people, and so that people are not afraid that the government is actually watching the Internet.
So I think this is the way to build things that are actually useful and important: protecting our citizens, but also providing them with very good services.
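The privacy-preserving age check described above can be sketched as follows. This is a simplified illustration, not Brazil's actual system or a real verifiable-credentials library: an issuer who has seen the full identity record signs only the minimal "over 18" claim, and a website verifies that signature without learning the person's name, birth date, or document number. HMAC with a shared key stands in for a real public-key signature scheme; in practice the verifier would hold only the issuer's public key and could not forge credentials.

```python
# Sketch of privacy-preserving age verification with a verifiable credential.
# All names and keys are illustrative; HMAC stands in for a digital signature.
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret-demo-key"  # in reality: the issuer's private signing key


def issue_credential(birth_year: int, current_year: int = 2026) -> dict:
    """Issuer checks the full identity record, then signs only the minimal claim."""
    claim = {"over_18": current_year - birth_year >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}


def verify_credential(credential: dict) -> bool:
    """Verifier checks authenticity; it sees nothing beyond the yes/no claim."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected, credential["signature"])
    return signature_ok and credential["claim"]["over_18"]


print(verify_credential(issue_credential(birth_year=2010)))  # False: underage
print(verify_credential(issue_credential(birth_year=1990)))  # True: verified adult
```

The design choice is data minimization: the service never receives the birth date, only a signed boolean, so age can be enforced without the government or the platform observing who is browsing what.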
Thank you. Today's topic was building population-scale digital public infrastructure for AI. By 2030, when we will have made a lot of progress on that, we will stop calling DPI digital public infrastructure and start calling it digital public intelligence. With that, a big thank you to all my panelists and to the audience. Thank you.
Thank you. Shankar, if I can just request you to present a token of appreciation to the panel. Thank you. Now the next session is about to start on a very unique topic, AI for Democracy. So we request all the audience here to remain seated. A very wonderful topic, AI for Democracy, and we are very blessed that today we have with us Honorable Chief Guest, Mr. Om Birlaji, Speaker of Parliament of India, Mr. Martin Chungong, Secretary General, IPU, Mr. Laszlo Z, Deputy Speaker, Parliament of Hungary, Dr. Chinmay Pandya from All World Gayatri Parivar, Ms. Jimena.
Nandan Nilekani
Speech speed
171 words per minute
Speech length
531 words
Speech time
185 seconds
Rapid, repeatable diffusion pathways accelerate scaling
Explanation
Nandan emphasizes that establishing clear diffusion pathways lets AI move from pilot projects to nationwide deployment in a matter of weeks, dramatically shortening the usual rollout timeline.
Evidence
“And that was done in three weeks.” [15]. “our goal is all of us work together to very, in a focused manner, develop these pathways of diffusion of different kinds of positive AI use cases and then actually make it happen in countries around the world.” [5].
Major discussion point
Diffusion pathways as a strategy for scaling AI impact
Topics
Artificial intelligence | Information and communication technologies for development
Shankar Maruwada
Speech speed
133 words per minute
Speech length
1438 words
Speech time
645 seconds
Diffusion spreads know‑how, trust and institutional capability
Explanation
Shankar clarifies that diffusion is not merely about awareness or access; it is the propagation of the know‑how, trust and institutional capacity that enables safe and sustainable AI adoption.
Evidence
“when we say diffusion we don’t mean awareness or access diffusion as Nandan described is the spread of know‑how, trust and institutional capability that allows organizations to adopt AI safely and sustainably as he explained…” [19].
Major discussion point
Diffusion pathways as a strategy for scaling AI impact
Topics
Artificial intelligence | The enabling environment for digital development
Irina Ghose
Speech speed
163 words per minute
Speech length
1288 words
Speech time
473 seconds
AI must be contextual, language‑specific and fit daily workflow
Explanation
Irina stresses that for AI to scale to whole populations it must be built in the local language, be embedded in users’ everyday workflows, and avoid requiring entirely new processes.
Evidence
“The first one is that for diffusion, it needs to be contextual to the local language that you speak.” [25]. “Second, it needs to be in the workflow of what you’re doing every day and you don’t need to do net new things.” [30].
Major discussion point
Preconditions for AI to be deployable at population scale
Topics
Artificial intelligence | Closing all digital divides
AI projects die slowly when relevance is lost
Explanation
Irina notes that failures rarely occur abruptly; instead, AI solutions gradually become irrelevant as users stop interacting with them, underscoring the need for continual contextual relevance and ROI tracking.
Evidence
“the failure never happens with a big bang it just slowly dies because people just stop reducing the level of interaction they have gradually” [72].
Major discussion point
Common failure modes and safety infrastructure for AI diffusion
Topics
Artificial intelligence | Capacity development
Create a universal Model Context Protocol (MCP)
Explanation
Irina proposes a Model Context Protocol that standardises how models access tools and data, enabling plug‑and‑play integration across sectors and reducing duplication of effort.
Evidence
“we came out with this concept in Anthropic in 2024 called the model context protocol.” [79]. “what it really does is you develop things once and you make it MCP ready.” [81].
Major discussion point
Guidance for model developers to plug into diffusion pathways safely
Topics
Artificial intelligence | The enabling environment for digital development
Trevor Mundel
Speech speed
167 words per minute
Speech length
1117 words
Speech time
399 seconds
Scaling hubs aggregate funding and expertise to overcome fragmentation
Explanation
Trevor describes the creation of “scaling hubs” that act as aggregation points for resources and expertise, addressing the fragmentation that hampers moving pilots to population‑scale impact.
Evidence
“I had an opportunity to, this morning, sit down with the heads of entities which we call scaling hubs.” [37]. “we are going to invest in these hubs that can be points of aggregation.” [40].
Major discussion point
From pilots to institutionalised, sustainable impact
Topics
Artificial intelligence | The enabling environment for digital development
Safety and auditability are essential, especially in health
Explanation
Trevor warns that rapid AI deployment must be balanced with rigorous safety frameworks and auditability, particularly for health applications where lives are at stake.
Evidence
“These systems need to be auditable.” [58]. “there is this urgency to get things done and that might make one think very carefully on the safety front … we need to look carefully at the frameworks before we just jump in …” [87].
Major discussion point
Balancing speed of diffusion with safety, especially in health applications
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Esther Dweck
Speech speed
180 words per minute
Speech length
1938 words
Speech time
643 seconds
Procurement must shift to outcome‑oriented, policy‑driven models
Explanation
Esther argues that procurement should move away from lowest‑price, risk‑averse approaches toward policy‑driven, outcome‑focused processes that accept controlled failure as part of innovation.
Evidence
“we are looking for a more policy-oriented and looking at the outcomes and not only the lowest price thing.” [48]. “we have changed the mindset of the procurement process.” [49].
Major discussion point
Government reforms needed to enable AI diffusion
Topics
The enabling environment for digital development | Artificial intelligence
Robust digital ID and unified service platforms are foundational
Explanation
Esther highlights that a strong digital infrastructure—particularly a national digital ID and a unified service platform—provides the backbone needed for large‑scale AI adoption and effective data governance.
Evidence
“And one of the things that was very important for us was our digital ID and our platform for services, a digital platform for service, which both called gov.br.” [63]. “having a good DPI infrastructure, especially in terms of identification, and be able to also, of course, to have a better data governance.” [61].
Major discussion point
Government reforms needed to enable AI diffusion
Topics
Data governance | The enabling environment for digital development
Speaker 1
Speech speed
74 words per minute
Speech length
83 words
Speech time
67 seconds
Strategic convening of diverse stakeholders creates diffusion pathways
Explanation
By explicitly inviting ministers, AI pioneers and sector experts to the stage, the moderator builds a shared platform where policy, technology and implementation perspectives intersect. This coordinated gathering jump‑starts the diffusion of know‑how, trust and institutional capability needed for population‑scale AI impact.
Evidence
“At this point, I would love to invite our panelists up to the stage.” [1]. “Let me now hand it over to Shankar Maruwala, who will moderate us to the next panel.” [3]. “So let me invite Minister Esther Dweck, Mr. Trevor Mundell, Ms. Reena Ghosh, and Mr. Shankar Maruwada, accompanied by Nandan, to be on the stage for a quick group photograph.” [6].
Major discussion point
Diffusion pathways as a strategy for scaling AI impact
Topics
The enabling environment for digital development | Artificial intelligence
Public acknowledgment and gratitude build trust for sustainable AI adoption
Explanation
Thanking participants openly signals respect and appreciation, reinforcing the relational capital that underpins the spread of AI know‑how. Such gestures help sustain the institutional capability and confidence required for long‑term diffusion.
Evidence
“Thank you so much, Mr. Nandan.” [4]. “We’ll start by taking a quick group photograph together and then begin the discussion.” [5].
Major discussion point
Diffusion spreads know‑how, trust and institutional capability
Topics
Capacity development | The enabling environment for digital development
Ritualized group photography signals collective commitment and visual unity
Explanation
Opening the session with a group photograph creates a symbolic moment of unity, aligning participants around a common purpose and making the collaborative effort visible. This visual ritual helps translate individual enthusiasm into coordinated, institutionalised action.
Evidence
“We’ll start by taking a quick group photograph together and then begin the discussion.” [5]. “So let me invite Minister Esther Dweck, Mr. Trevor Mundell, Ms. Reena Ghosh, and Mr. Shankar Maruwada, accompanied by Nandan, to be on the stage for a quick group photograph.” [6].
Major discussion point
From pilots to institutionalised, sustainable impact
Topics
Social and economic development | The enabling environment for digital development
Agreements
Agreement points
Need for systematic approach to AI implementation rather than fragmented pilots
Speakers
– Nandan Nilekani
– Trevor Mundel
– Esther Dweck
Arguments
Pathways reduce implementation time from nine months to three weeks through shared learning and institutional capability
Fragmentation of pilots is a major barrier; scaling hubs can serve as aggregation points for government coordination
Systemic approach needed combining institutions, data readiness, and governance rather than prioritizing single elements
Summary
All speakers agree that moving from fragmented pilot projects to systematic, coordinated approaches is essential for achieving AI impact at scale. They emphasize the importance of shared learning, institutional coordination, and comprehensive frameworks.
Topics
Artificial intelligence | Information and communication technologies for development | The enabling environment for digital development
Importance of contextual localization and workflow integration for AI adoption
Speakers
– Irina Ghose
– Esther Dweck
Arguments
Diffusion requires technology to become contextual, workflow-integrated, and iterative rather than remaining a scientific tool
Capacity building programs essential for civil servants across different skill levels and roles
Summary
Both speakers emphasize that AI must be adapted to local contexts, languages, and existing workflows to achieve successful adoption. They stress the need for making AI intuitive and integrated into daily work processes.
Topics
Artificial intelligence | Closing all digital divides | Capacity development
Need for standardized infrastructure and protocols to enable AI interoperability
Speakers
– Irina Ghose
– Shankar Maruwada
Arguments
Model Context Protocol (MCP) can serve as universal language for AI tools and data, similar to UPI for payments
Universal adapters and shared rails compress learning curves, costs, and risks for AI adoption
Summary
Both speakers advocate for creating standardized protocols and shared infrastructure that can reduce complexity and enable seamless integration across different AI applications and sectors.
Topics
Artificial intelligence | Information and communication technologies for development | Data governance
Critical importance of data governance and breaking down silos
Speakers
– Esther Dweck
– Irina Ghose
Arguments
Chief data officers and systematic data governance essential for breaking down ministerial data silos
Contextual localization in languages and domains critical to prevent gradual abandonment of AI systems
Summary
Both speakers emphasize the need for proper data governance structures and the importance of making data accessible and usable across different domains and languages for effective AI implementation.
Topics
Data governance | Artificial intelligence | The enabling environment for digital development
Similar viewpoints
Both speakers acknowledge the tension between the urgent need for AI deployment and the critical importance of safety, security, and sovereignty considerations. They emphasize the need for careful frameworks that protect citizens while enabling innovation.
Speakers
– Trevor Mundeli
– Esther Dweck
Arguments
Balance needed between urgency of deployment and safety requirements, especially in healthcare applications
Digital sovereignty concerns require control over data location and operational access
Topics
Building confidence and security in the use of ICTs | Artificial intelligence | Human rights and the ethical dimensions of the information society
Both speakers support collaborative, multi-stakeholder approaches to AI development and deployment, emphasizing the value of shared learning and cross-sector application of successful models.
Speakers
– Nandan Nilekani
– Trevor Mundeli
Arguments
100 diffusion pathways by 2030 goal with global coalition including major tech companies and foundations
Open AgriNet demonstrates successful modular, adaptable infrastructure model for other sectors
Topics
Artificial intelligence | Information and communication technologies for development | Financial mechanisms
Both speakers recognize that traditional procurement approaches are inadequate for AI implementation and that organizations need to adopt new models that support continuous improvement and innovation.
Speakers
– Esther Dweck
– Shankar Maruwada
Arguments
Procurement processes must shift from lowest-price, risk-averse approach to outcome-oriented innovation procurement
Continuous investment needed for data-model improvement cycles beyond initial procurement
Topics
The enabling environment for digital development | Artificial intelligence | Financial mechanisms
Unexpected consensus
Technology should become invisible and boring for true diffusion
Speakers
– Shankar Maruwada
– Irina Ghose
Arguments
Universal adapters and shared rails compress learning curves, costs, and risks for AI adoption
AI deployment fails due to perception of complexity rather than actual technical limitations
Explanation
There was unexpected consensus that successful technology diffusion occurs when technology becomes so integrated and intuitive that users stop thinking of it as ‘technology’ at all. This philosophical view of diffusion as making technology invisible was shared across speakers from different backgrounds.
Topics
Artificial intelligence | Closing all digital divides | Social and economic development
Cross-sector learning and replication potential
Speakers
– Trevor Mundeli
– Nandan Nilekani
Arguments
Open AgriNet demonstrates successful modular, adaptable infrastructure model for other sectors
Pathways reduce implementation time from nine months to three weeks through shared learning and institutional capability
Explanation
Unexpectedly, there was strong consensus that successful AI implementations in one sector (like agriculture) can and should be adapted for other sectors (like healthcare), with speakers from different domains expressing enthusiasm for cross-sector learning and replication.
Topics
Artificial intelligence | Information and communication technologies for development | Social and economic development
Overall assessment
Summary
The speakers demonstrated remarkable consensus on key principles for AI scaling: the need for systematic rather than fragmented approaches, the importance of contextual localization, the requirement for standardized infrastructure, and the critical role of proper governance structures. They also agreed on the balance needed between speed and safety, and the value of cross-sector learning.
Consensus level
High level of consensus with complementary perspectives rather than conflicting views. The agreement spans technical, policy, and implementation aspects, suggesting a mature understanding of AI scaling challenges. This consensus has positive implications for the feasibility of achieving the ambitious 100 diffusion pathways by 2030 goal, as it indicates alignment among key stakeholders from government, private sector, and international organizations on fundamental principles and approaches.
Differences
Different viewpoints
Approach to managing AI deployment speed versus safety
Speakers
– Trevor Mundeli
– Shankar Maruwada
Arguments
Balance needed between urgency of deployment and safety requirements, especially in healthcare applications
Continuous investment needed for data-model improvement cycles beyond initial procurement
Summary
Mundeli emphasizes the need for careful safety frameworks and auditable systems before deployment, while Maruwada focuses more on the urgency of rapid deployment through shared pathways and continuous improvement cycles
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Centralized versus distributed approach to AI scaling
Speakers
– Trevor Mundeli
– Shankar Maruwada
Arguments
Fragmentation of pilots is a major barrier; scaling hubs can serve as aggregation points for government coordination
Universal adapters and shared rails compress learning curves, costs, and risks for AI adoption
Summary
Mundeli advocates for centralized scaling hubs to aggregate fragmented efforts, while Maruwada promotes distributed shared infrastructure that allows multiple pathways to coexist
Topics
Artificial intelligence | The enabling environment for digital development
Unexpected differences
Priority between technical standardization versus organizational pathways
Speakers
– Irina Ghose
– Nandan Nilekani
Arguments
Model Context Protocol (MCP) can serve as universal language for AI tools and data, similar to UPI for payments
Pathways reduce implementation time from nine months to three weeks through shared learning and institutional capability
Explanation
While both speakers work toward similar goals of AI scaling, they emphasize different approaches – technical protocols versus organizational learning pathways – which could represent competing priorities for resource allocation
Topics
Artificial intelligence | Information and communication technologies for development
Government transformation approach
Speakers
– Esther Dweck
– Trevor Mundeli
Arguments
Procurement processes must shift from lowest-price, risk-averse approach to outcome-oriented innovation procurement
Fragmentation of pilots is a major barrier; scaling hubs can serve as aggregation points for government coordination
Explanation
Dweck focuses on internal government reform and cultural change, while Mundeli emphasizes external coordination mechanisms, suggesting different theories of change for government AI adoption
Topics
The enabling environment for digital development | Artificial intelligence | Capacity development
Overall assessment
Summary
The discussion reveals subtle but important disagreements about implementation approaches rather than fundamental goals. Key tensions exist between centralized and distributed coordination, between speed and safety in deployment, and between technical and organizational solutions.
Disagreement level
Low to moderate disagreement level with high strategic implications. While speakers share common goals of AI scaling and diffusion, their different approaches could lead to competing resource allocation and policy priorities. The disagreements are constructive and reflect different but potentially complementary perspectives on achieving the same objectives.
Partial agreements
Both agree on the need for coordinated, systematic approaches to AI implementation, but Dweck focuses on internal government transformation while Mundeli emphasizes external coordination through scaling hubs
Speakers
– Esther Dweck
– Trevor Mundeli
Arguments
Systemic approach needed combining institutions, data readiness, and governance rather than prioritizing single elements
Fragmentation of pilots is a major barrier; scaling hubs can serve as aggregation points for government coordination
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Both recognize the importance of making AI accessible and usable by end users, but Ghose focuses on technical localization while Dweck emphasizes human capacity building
Speakers
– Irina Ghose
– Esther Dweck
Arguments
Contextual localization in languages and domains critical to prevent gradual abandonment of AI systems
Capacity building programs essential for civil servants across different skill levels and roles
Topics
Artificial intelligence | Capacity development | Closing all digital divides
Both advocate for standardized approaches to AI deployment, but Nilekani focuses on organizational pathways while Ghose emphasizes technical protocols
Speakers
– Nandan Nilekani
– Irina Ghose
Arguments
100 diffusion pathways by 2030 goal with global coalition including major tech companies and foundations
Model Context Protocol (MCP) can serve as universal language for AI tools and data, similar to UPI for payments
Topics
Artificial intelligence | Information and communication technologies for development
Takeaways
Key takeaways
AI diffusion pathways can dramatically reduce implementation time through shared learning – from 9 months (Maharashtra) to 3 months (Ethiopia) to 3 weeks (Amul implementation)
Technology must become ‘boring’ and invisible to achieve true diffusion at population scale – when people stop thinking of it as technology, it has successfully diffused
Government transformation requires fundamental changes in three areas: procurement processes (shifting from risk-averse to outcome-oriented), digital infrastructure, and governance structures
AI systems for critical applications like healthcare must be auditable and transparent, not black boxes, to ensure safety and accountability
Fragmentation of AI pilots is a major scaling barrier – coordination through scaling hubs and aggregation points is essential
Contextual localization (local languages, workflows, domains) is critical to prevent gradual abandonment of AI systems
Digital sovereignty concerns require countries to maintain control over data location, operational access, and service continuity
Universal standards like Model Context Protocol (MCP) can serve as shared rails for AI adoption, similar to how UPI enabled digital payments
Resolutions and action items
Launch 100 diffusion pathways by 2030 initiative with global coalition including Anthropic, Google, Gates Foundation, and UNDP
Establish scaling hubs in Rwanda, Nigeria, Senegal, and Kenya (with more planned) to aggregate and coordinate AI pilots for government implementation
Brazil to launch new decree on data governance requiring chief data officers in every ministry
Brazil implementing INSPIRE program (AI for Public Service with Innovation, Responsibility, and Ethics) with systemic approach involving government, state-owned companies, private sector, and innovation ecosystem
Brazil deploying verifiable credential technology (developed in India) for rural credit and child online protection use cases
Anthropic expanding Indic language support to 10 Indian languages in latest models
Continue collaboration between Brazil and India on DPI and AI implementation
Unresolved issues
How to balance speed of deployment (100 pathways by 2030) with safety requirements, especially in life-critical applications
Managing the inherent tension between ‘diffusion’ (spreading everywhere) and ‘pathways’ (fixed routes) in AI deployment
Addressing workforce displacement concerns as AI automation increases
Achieving meaningful digital sovereignty for countries that cannot be fully technologically independent
Scaling successful pilots like Open AgriNet to other sectors while maintaining effectiveness
Ensuring continuous investment and evolution of AI systems beyond initial procurement phase
Protecting civil servants from audit fears while enabling innovation and acceptable failure rates
Suggested compromises
Use India as a ‘safe introduction foundry’ for AI applications that can then be adapted for other lower-middle-income countries
Implement ‘big tent’ approach for the 100 diffusion pathways coalition – open to all participants while maintaining focus
Allow choice of AI adoption pathways (proprietary models, sovereign efforts, or hybrid approaches) based on adopter preferences
Balance digital sovereignty goals with practical collaboration – increase sovereignty levels rather than demanding complete independence
Combine urgency of deployment with systematic safety frameworks, using DPI infrastructure as foundation for safe scaling
Channel diffusion through centers of excellence while not inhibiting organic spread of innovation
Thought-provoking comments
The societies that create such pathways allow a whole lot of others to prosper to make progress to create impact inclusively and equitably… the diffusion infrastructure we are talking about creating isn’t a platform app or model. It’s shared rails that compress learning curves, cost and risk.
Speaker
Shankar Maruwada
Reason
This comment reframes AI diffusion from a technology problem to an infrastructure problem, introducing the powerful metaphor of ‘shared rails’ that fundamentally shifts how we think about scaling AI solutions. It moves beyond individual implementations to systemic capability building.
Impact
This framing became the conceptual foundation for the entire discussion, with subsequent speakers building on this infrastructure metaphor and addressing different aspects of creating these ‘shared rails’ – from procurement to governance to safety.
The only reason [AI deployment] fails to gain scale is because the perception in our mind about the complexity… AI will start having significance when it stops being a scientific tool to something which is as intuitive for them.
Speaker
Irina Ghose
Reason
This insight challenges the common assumption that technical complexity is the barrier to AI adoption, instead identifying psychological and usability barriers as the real obstacles. It shifts focus from improving models to improving user experience and accessibility.
Impact
This comment redirected the conversation from technical capabilities to human-centered design, influencing subsequent discussions about contextual deployment, local languages, and making AI ‘boring’ rather than magical.
One of the big barriers that we are currently seeing is the fragmentation that is occurring out there… thousands of [pilots] occurring… all of them trying to put in place the necessary DPI infrastructure to support their pilots.
Speaker
Trevor Mundeli
Reason
This observation identifies a critical systemic problem – that well-intentioned pilot projects can actually hinder scaling by creating fragmentation. It’s counterintuitive because it suggests that more innovation attempts can sometimes impede progress.
Impact
This comment introduced the tension between diffusion (spreading everywhere) and pathways (structured approaches), leading to deeper discussion about the need for coordination mechanisms and ‘scaling hubs’ to channel innovation efforts.
If the civil servant cannot make any mistakes, then we never innovate… Instead of more process-oriented, we are looking for a more policy-oriented and looking at the outcomes and not only the lowest price thing.
Speaker
Esther Dweck
Reason
This comment identifies a fundamental institutional barrier to AI adoption – risk-averse procurement cultures that prioritize avoiding mistakes over achieving outcomes. It highlights how organizational incentives can completely undermine technological capabilities.
Impact
This shifted the discussion from technical and design challenges to institutional reform, prompting deeper exploration of how government structures, accountability mechanisms, and civil service cultures need to evolve for AI adoption.
We don’t care about technology as long as it works… A day will come when we don’t think of AI as technology. That is the day we can say that AI has diffused through all of society.
Speaker
Shankar Maruwada
Reason
This profound observation about the nature of technological diffusion – that true adoption means invisibility – provides a clear metric for success that goes beyond usage statistics to cultural integration. The eyeglasses analogy makes this abstract concept tangible.
Impact
This comment provided a philosophical anchor for the entire discussion, giving participants a shared understanding of what ‘diffusion’ actually means and helping frame subsequent conversations about safety, governance, and implementation around this end goal of technological invisibility.
Every year we don’t have the next generation of malaria vaccines we’re seeing hundreds of thousands of young children dying… So there is this urgency to get things done and that might make one think very carefully on the safety front.
Speaker
Trevor Mundeli
Reason
This comment powerfully articulates the moral urgency behind AI deployment while acknowledging the safety tensions. It personalizes the abstract discussion of ‘speed vs. safety’ with concrete human costs, making the stakes viscerally clear.
Impact
This comment elevated the entire discussion by introducing the moral dimension of AI deployment decisions, forcing participants to grapple with the real-world consequences of both action and inaction, and setting up a more nuanced exploration of how to balance speed with safety.
Overall assessment
These key comments fundamentally shaped the discussion by progressively deepening the analysis from technical implementation to systemic infrastructure challenges, then to institutional barriers, and finally to moral imperatives. The conversation evolved from ‘how do we deploy AI?’ to ‘how do we transform entire systems – technological, institutional, and cultural – to enable safe, equitable AI diffusion?’ The most impactful comments introduced conceptual frameworks (shared rails, technological invisibility) and identified counterintuitive barriers (fragmentation from too many pilots, risk-averse procurement cultures) that reframed the entire challenge. The discussion became increasingly sophisticated as speakers built on these insights, moving from individual solutions to systemic transformation approaches.
Follow-up questions
How can we develop frameworks for safe introduction of AI in healthcare applications, particularly for self-diagnosis tools?
Speaker
Trevor Mundeli
Explanation
This is critical because lives are at stake in healthcare applications, and there’s a need to balance speed of deployment with safety requirements when implementing AI systems that could affect patient outcomes.
How can AI systems be made auditable and transparent, especially in healthcare contexts where black box recommendations are inadequate?
Speaker
Trevor Mundeli
Explanation
Healthcare professionals and patients need to understand how AI systems arrive at their recommendations, similar to how human clinicians can explain their diagnostic reasoning, which is essential for trust and accountability.
How can we create a personal health assistant system for low- and middle-income countries where people are far from healthcare facilities?
Speaker
Trevor Mundeli
Explanation
This addresses a critical gap in healthcare access for underserved populations who may be 10-20 miles from primary healthcare clinics and need safe, personalized health information.
How can countries increase their digital sovereignty while still benefiting from AI technologies developed elsewhere?
Speaker
Esther Dweck
Explanation
This is a strategic concern for national security and autonomy, as very few countries can be completely digitally sovereign, but they need to ensure continuity of services and control over their data.
How can we implement age verification systems that protect children online while preserving privacy?
Speaker
Esther Dweck
Explanation
This addresses the challenge of complying with child protection laws in digital spaces without creating surveillance systems that compromise user privacy.
How should wealth generated by AI and automation be distributed as machines potentially replace human work?
Speaker
Esther Dweck
Explanation
This is a fundamental economic and social question about the future of work and wealth distribution in an AI-driven economy.
How can we ensure continuous investment and evolution of AI systems beyond initial procurement, given that AI requires ongoing data and model improvements?
Speaker
Shankar Maruwada
Explanation
This addresses a critical gap in understanding that AI deployment is not a one-time purchase but requires ongoing investment in a cycle of data collection, model improvement, and service enhancement.
How can we measure ROI for AI investments in terms of net new use cases opened up and people benefited, particularly for language-specific implementations?
Speaker
Irina Ghose
Explanation
This is important for justifying continued investment in AI localization and ensuring that resources are allocated effectively to maximize social impact.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.