Collaborative AI Network – Strengthening Skills Research and Innovation
20 Feb 2026 12:00h - 13:00h
Summary
The panel opened by framing AI diffusion as a potential digital public infrastructure (DPI) that must first demonstrate concrete use cases before delivering value [12-13]. Saurabh Garg argued that for AI to become a trusted, interoperable DPI similar to Aadhaar or UPI, four foundational resources (compute, data sets, talent and models) need to be democratized and governed by appropriate frameworks [14-15]. He outlined four criteria for “AI-ready” data (discoverability, trustworthiness, interoperability and usability) while emphasizing privacy-preserving access and the importance of locally relevant datasets [15-18]. To coordinate this effort, the AI summit’s working group proposed a voluntary, modular platform called METRI (Multi-Stakeholder AI for Resilient and Trustworthy Infrastructure) to foster shared development of these resources [23-28].
Building on that, Speaker 2 highlighted that AI, like earlier general-purpose technologies, was invented in the West but must be diffused through the Global South via 100 targeted pathways by 2030, with Kizom acting as a key partner [46-54]. She questioned how such pathways could be operationalized across sectors, noting the gap between invention and impact [39-41].
Representing the UN Development Programme, Speaker 3 described the G7 AI hub as a mechanism to unlock compute, data and talent for low-income regions, stressing the need to build business cases for local data-centers and to retain diaspora talent for co-architecting solutions for farmers and women entrepreneurs [61-68]. Janet Zhou warned that “pilotitis” hampers scale and argued that lasting impact requires governments to be involved from design through implementation, creating trustworthy, inclusive institutions that lower market entry costs for innovators [75-86]. Brazil’s Beatriz Vasconcellos illustrated a national approach that creates shared data ecosystems, standardised early-childhood and environmental datasets, and a centralized chatbot platform built on digital ID to move from pilot to transactional services [95-131].
Kizom later explained that existing digital public rails such as UPI, DigiLocker and language stacks like Bhashini enable AI services to become “invisible” and seamlessly integrated into everyday workflows, while new rails are emerging to support multilingual voice interactions [140-158]. She cited the MOSIP open-source ID platform as an example of how technical standards, operational support and financing combine to lubricate adoption of public-good infrastructure across countries [178-193]. Participants agreed that avoiding vendor lock-in and fostering modular, multi-model solutions are essential, and highlighted a jointly developed “use-case adoption framework” that maps vertical sector needs to horizontal data and compute enablers to guide the 100 diffusion pathways [231-236][240-247].
The discussion concluded that scaling AI in the Global South will depend on building trustworthy digital public infrastructure, democratizing core resources, and institutionalising collaborative, standards-based pathways that move pilots to production at population scale [14-15][75-86][240-247].
Keypoints
Major discussion points
– AI must be treated as a Digital Public Infrastructure (DPI) and democratized through shared foundational resources.
Saurabh Garg emphasized that AI will only deliver value once concrete use-cases are identified and that, like Aadhaar or UPI, AI needs to become a trusted, interoperable, and shareable public good built on four core resources – data, compute, talent, and models [12-15]. He outlined the need for “AI-ready” data that is discoverable, trustworthy, interoperable and usable, and introduced the METRI platform as a modular, voluntary framework to develop these resources [16-18][23-29].
– Making data “AI-ready” is a prerequisite for diffusion.
Garg detailed four criteria for data readiness: discoverability via common metadata, quality-based trustworthiness, technical interoperability through unique identifiers, and usability enabled by international standards [15-18]. He also stressed privacy safeguards while ensuring data remains locally relevant and can drive context-specific AI solutions [17-22].
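The four criteria described above can be sketched as a simple readiness check over a dataset's metadata record. This is a hypothetical illustration only: the field names, the quality threshold, and the helper function are invented for this example and are not part of any standard or platform mentioned in the session.

```python
# Minimal sketch of an "AI-ready" data check mirroring the four criteria
# discussed (discoverability, trustworthiness, interoperability, usability).
# All field names and thresholds are illustrative assumptions.

dataset = {
    "title": "District crop-yield survey",           # discoverability: common metadata
    "keywords": ["agriculture", "yield", "India"],
    "quality_score": 0.92,                           # trustworthiness: quality assessment
    "record_id_field": "farm_uid",                   # interoperability: unique identifier
    "classification": "ISIC Rev.4 / 0111",           # usability: international classification
}

def is_ai_ready(meta: dict, min_quality: float = 0.8) -> bool:
    """Return True only if the metadata record satisfies all four criteria."""
    discoverable = bool(meta.get("title")) and bool(meta.get("keywords"))
    trustworthy = meta.get("quality_score", 0.0) >= min_quality
    interoperable = bool(meta.get("record_id_field"))
    usable = bool(meta.get("classification"))
    return all([discoverable, trustworthy, interoperable, usable])

print(is_ai_ready(dataset))  # prints True
```

In practice each criterion would be a far richer assessment (metadata vocabularies, quality audits, identifier registries, classification mappings); the sketch only shows how the four dimensions compose into a single readiness gate.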
– Existing digital public rails (e.g., UPI, digital IDs, DigiLocker) are the backbone for scaling AI across sectors and borders.
Participants highlighted how public infrastructure that is invisible to users (such as payment systems, identity platforms, and emerging language stacks like Bhashini) provides the “rails” on which AI services (e.g., chatbots for farmers) can be layered [140-152][155-162]. The discussion linked this to the broader vision of 100 AI diffusion pathways by 2030, stressing the convergence of multiple national rails into a global ecosystem.
– Transitioning from pilots to production requires institutional trust, inclusive governance, and coordinated standards.
Janet Zhou pointed out that “pilotitis” is solved when governments are at the design table, creating trustworthy, inclusive institutions that lower market entry costs for innovators [75-82][84-86]. Brazil’s experience illustrated concrete steps: building shared data ecosystems, standardizing early-childhood and environmental data, and centralising chatbot services under a national digital ID framework [95-104][115-124].
– Removing friction through open-source platforms, centralized services, and capacity-building avoids vendor lock-in and builds domestic capability.
The MOSIP open-source ID platform was cited as a model for establishing technical standards, operational support, and financing that enable cross-country adoption [178-190]. Brazil’s Secretariat for Shared Services demonstrates how a single procurement channel and internal innovation units can streamline AI deployment while resisting reliance on external vendors [204-216][222-229].
Overall purpose / goal
The panel was convened to map out concrete pathways for “AI diffusion” – i.e., moving AI from isolated pilots to scalable, inclusive public services worldwide. Participants shared experiences, frameworks (METRI, use-case adoption framework), and policy ideas aimed at establishing AI as a trusted digital public infrastructure that can be leveraged across sectors and geographies by 2030.
Tone of the discussion
– The conversation began with a formal, forward-looking tone, focusing on high-level concepts such as DPI and resource democratization.
– As speakers introduced regional case studies (India, Brazil, Africa), the tone shifted to pragmatic and collaborative, acknowledging real-world constraints and the need for institutional trust.
– Towards the end, the dialogue became more candid and slightly informal, with participants noting operational hurdles, vendor lock-in concerns, and even a light-hearted “we’ve been kicked out of the room” remark, while still maintaining an overall constructive and solution-oriented spirit.
Speakers
– Speaker 1 – Role/Title: Moderator / Host; Area of Expertise:
– Saurabh Garg – Role/Title: Secretary, Ministry of Statistics and Programme Implementation (MOSPI), Government of India; Area of Expertise: AI policy, Digital Public Infrastructure, AI democratization [S12]
– Speaker 2 – Role/Title: Moderator / Chair of the panel; Area of Expertise: AI diffusion, inclusive AI development [S6][S7]
– Speaker 3 – Role/Title: United Nations Development Programme (UNDP) representative; Area of Expertise: AI implementation in developing regions, AI diffusion pathways [S1]
– Beatriz Vasconcellos – Role/Title: Brazilian government official (AI lead); Area of Expertise: AI adoption in the public sector, digital public infrastructure in Brazil [S4]
– Janet Zhou – Role/Title: AI adoption specialist; Area of Expertise: Scaling AI pilots, institutional capacity for AI [S5]
Additional speakers:
– (none)
The session opened with a brief logistical exchange – the moderator asked panelists Mr Shankar and Mr Saurabh to pose for a quick photograph, thanked everyone for joining, and then invited the Secretary of MOSPI, India, Mr Saurabh Garg, to deliver the keynote address [1-8].
In his opening remarks, Garg characterised artificial intelligence as “a solution in search of a problem” and argued that AI will generate value only when concrete use-cases are identified [12-13]. He positioned AI as a potential digital public infrastructure (DPI), comparable to Aadhaar or UPI, and asserted that for AI to become a trusted, interoperable and shareable public good it must rest on four foundational resources – compute, data sets, talent and models [14-15]. Garg then outlined a four-point rubric for “AI-ready” data: (i) discoverability through common metadata, (ii) trustworthiness via quality assessments, (iii) interoperability enabled by unique identifiers, and (iv) usability ensured by internationally aligned standards [15-18]. He noted that access must be balanced with privacy safeguards and that locally relevant data are essential for context-specific AI solutions [17-22]. To coordinate the democratisation of these resources, the AI summit’s working group proposed a voluntary, modular platform called METRI (Multi-Stakeholder AI for Resilient and Trustworthy Infrastructure) that would allow stakeholders to contribute compute, data, models and talent on a non-committal basis [23-29].
The moderator (Speaker 2) then introduced the metaphor of “diffusion pathways”, observing that, like earlier general-purpose technologies, AI was invented in the West but its impact must be realised in the Global South. She announced the ambition of 100 AI diffusion pathways by 2030 and asked how such pathways could be operationalised across sectors [39-41][46-54].
Kizom (UNDP) responded by describing the newly created G7 AI Hub as a mechanism to unlock compute, data and talent for low-income regions [61-68]. She highlighted the need to build business cases for local data-centres, to break data silos despite the abundance of data in the Global South, and to retain diaspora talent so that AI solutions can be co-architected for smallholder farmers and women entrepreneurs [62-66]. In illustrating the scale of talent networks, Kizom named “Selena from Zindi” and noted that Zindi’s community of roughly 100,000 African data scientists functions as a public-interest infrastructure [150-158]. She also pointed to existing digital public “rails” – such as UPI, DigiLocker and emerging language stacks like Bhashini – that make AI services invisible to end-users, allowing AI-driven chatbots for agriculture or health to ride on already trusted infrastructure [140-152][155-162]. The MOSIP open-source digital-ID platform was cited as an illustration of how technical standards, operational support and financing together “lubricate” the adoption of public-good infrastructure across countries [178-190].
The moderator then asked Janet Zhou whether the barrier to moving AI from pilot to production was merely funding, and how AI could transition from prototype to scale [70-78]. Zhou answered that “pilotitis” – projects stuck at the prototype stage – predates AI and can be overcome only when governments sit at the design table from the outset. She argued that trustworthy, inclusive institutions are essential for lowering market-entry costs for innovators, and that shared infrastructure must be built on standards that create a positive feedback loop of trust and adoption [79-86].
Building on this, Zhou used the MOSIP open-source digital-ID platform in a road analogy: technical standards are the road surface, side-of-the-road rules govern traffic, and financing is the fuel, and all three remain necessary to sustain adoption even after the “road” has been built [178-190].
When the moderator turned to the Brazilian perspective, Beatriz Vasconcellos (Speaker 4) presented Brazil’s DPI vision of “one government for each person”. She described the creation of shared data ecosystems – thematic datasets for early-childhood and environmental domains that are standardised, interoperable and linked to a canonical citizen profile [95-110]. A centralised chatbot platform, built on the national digital-ID system (gov.br), has moved from informational pilots to transactional services, enabling citizens to complete service requests securely [115-124][125-131]. To avoid duplication and vendor lock-in, Brazil’s Secretariat for Shared Services now offers a single procurement channel for AI solutions, allowing ministries to acquire services with a simple digital transfer, thereby reducing implementation time and dependence on external vendors [211-216]. Vasconcellos also warned against over-reliance on external vendors, stressing the need to develop domestic AI capability through internal experimentation and capacity-building [222-230].
Later, the moderator asked about safe conversations in agriculture and health and whether reusable playbooks could be created. The panel highlighted the need for safety guardrails and voice-AI playbooks to ensure trustworthy interactions in high-risk domains [165-170].
Kizom (UNDP) then elaborated on the “use-case adoption framework”. He explained that the framework maps vertical sectoral impact (e.g., agriculture, health, education) to horizontal unlocks such as language localisation, compute access, and AI-ready data. Co-design between governments, private sector and civil society is required to fuse vertical needs with horizontal enablers, ensuring that each use-case can scale efficiently [210-225].
Across the discussion, several points of agreement emerged. All speakers concurred that AI must be treated as a DPI, requiring trust, interoperability and shared “rails” to enable cross-border diffusion (Garg, Kizom, Zhou) [14-15][140-152][155-158][179-190]. They also agreed on the four criteria for AI-ready data and on the necessity of standardised metadata, quality checks, unique identifiers and international classifications [15-18]. The need to overcome pilotitis through early government involvement and shared infrastructure was echoed by Zhou, Vasconcellos and the moderator [79-86][211-216][71-74]. Finally, multilingual and voice AI were identified as equalisers that broaden inclusion, a view shared by the moderator and Kizom [264-267][154-156][168-170].
Nevertheless, moderate disagreement was evident on the preferred architecture for AI DPI. Garg framed AI as a government-led DPI with strict standards, whereas the moderator advocated a multi-stakeholder, open-source, multi-model ecosystem to avoid concentration of Western large language models, and Zhou highlighted MOSIP’s open-source, vendor-free model as a practical alternative [14-15][231-236][179-190]. A second tension concerned the scaling mechanism: Vasconcellos promoted a top-down, centralised procurement and shared-service model, while Garg and the moderator emphasised modular, voluntary rails and horizontal enablers such as compute and talent [211-216][231-236][14-15]. A third divergence related to external assistance: Kizom’s G7 AI Hub seeks to import compute and talent to the Global South, whereas Vasconcellos warned that excessive reliance on external vendors could erode domestic capability [61-66][222-230].
From these convergences and divergences, the panel distilled a set of key take-aways:
1. AI should be institutionalised as DPI, with trust, interoperability and shareability as core attributes.
2. The four foundational resources (compute, data, talent and models) must be democratised, and data must satisfy the discoverability, trustworthiness, interoperability and usability criteria.
3. The “100 AI diffusion pathways by 2030” agenda stresses horizontal enablers (language, compute, talent) linked to sector-specific use-cases.
4. Early government participation and the provision of shared public rails (identity, payments, data exchange) are essential to move AI from pilot to production at population scale.
5. Open-source, multi-model platforms such as METRI and MOSIP are preferred to avoid vendor lock-in and to build domestic capability.
6. Multilingual and voice AI are critical equalisers for reaching underserved users.
7. Co-architecting public-private partnerships and multi-stakeholder collaborations (e.g., the G7 AI Hub, XTEP, Gates Foundation) are required for sustainable diffusion [14-15][23-29][53-54][75-86][231-236][178-190][240-247].
The panel also identified concrete actions. The METRI platform will be further developed as a voluntary, modular framework for sharing compute, data, models and talent [23-28]. The G7 AI Hub will continue to unlock resources for Africa, Latin America and Asia [61-66]. Countries are encouraged to adopt the AI use-case adoption framework, which maps vertical sectoral impact to horizontal unlocks, to guide scaling [240-247]. Brazil will proceed with its centralised procurement and shared-service model to streamline AI deployment across ministries [211-216]. Nations are urged to explore open-source digital-ID solutions such as MOSIP, accompanied by operational support, training and financing [178-190]. Finally, stakeholders are asked to promote multi-model, open-source AI solutions to mitigate vendor lock-in [231-236].
Unresolved issues remain. Detailed governance mechanisms for ensuring data trustworthiness and privacy across jurisdictions have yet to be finalised. Viable business models for building compute infrastructure (e.g., data-centres and GPU clusters) in the Global South need further articulation. Clear timelines, metrics and accountability structures for achieving the 100-pathway target by 2030 are still missing. The development of safety guardrails and reusable playbooks for voice-enabled health or agricultural AI interactions requires additional research [165-170]. Finally, the extent of regulatory reforms needed to support an AI-centric DPI while preventing vendor lock-in is an open question [16-22][66-70][84-86][222-230].
In sum, the discussion moved from a high-level framing of AI as a nascent technology to a nuanced blueprint for embedding AI within trusted, interoperable public infrastructure. The pivotal moments were Garg’s articulation of AI-ready data and the DPI metaphor, Zhou’s diagnosis of “pilotitis” with a governance remedy, and the concrete national examples from Brazil and the MOSIP model. These insights reshaped the conversation from problem identification to solution design, culminating in a shared AI adoption framework that links sectoral needs with horizontal resources. The consensus on the importance of DPI, data readiness, early government involvement and open-source, multi-model ecosystems provides a solid foundation for future policy work, while the identified disagreements highlight the need for flexible, context-specific pathways that balance standardisation, sovereignty and openness. Together, the three pillars (digital public infrastructure, democratised foundational resources, and the 100-pathway agenda) chart a clear route forward, with next steps focused on developing METRI, scaling the G7 AI Hub and mainstreaming the use-case adoption framework. [14-15][75-86][140-152][240-247]
request all the panelists along with Mr. Shankar and Mr. Saurabh for a picture, please, because everyone has different schedules. So we just want to get a quick photo of this moment before we move ahead. Yeah, content first. All right. Thank you so much. Panelists, you can take your seat. To take us forward, I’d like to invite to deliver a keynote Mr. Saurabh Garg, who is the Secretary of MOSPI, India. If you can take us forward. Thank you so much.
Thank you. Good afternoon, and great to be here on this session. We’re talking of diffusion, AI diffusion. I’ll just speak of one or two aspects of it, because I’m sure the panelists would lend a lot more color to this topic. Just to take off where Shankar left: he was talking about use cases, and that’s very necessary, because AI is perhaps something like a solution in search of a problem. So until we find use cases for it, it will not be able to give the value that it potentially can, and I think that’s really, really important. We’re talking of AI being a possible DPI, a digital public infrastructure, and I suppose for that some steps would be needed to ensure that it becomes trusted, interoperable and shareable.
I think those are aspects which a DPI like Aadhaar or UPI has, and I think we are still in early days, but the mechanisms for that, how we can ensure that it would be possible. And given that, we talk of four resources as foundational AI resources: compute, data sets, talent and models, apart from obviously the frameworks that would be necessary for this. And I mention this because I had the privilege of chairing the Democratizing AI Resources working group of the AI summit, and various options that we discussed with other countries on how we can ensure democratization of these four foundational resources. Obviously each of them would have a different mechanism, but one thing I would just go into in slightly greater detail is the data sets part, which is also something that we are doing within the Ministry of Statistics, across different ministries and states. And why I am saying data sets is also because data is perhaps the raw material for AI models, so it’s a very foundational resource in that sense. And compute is perhaps something that we can acquire, and therefore we have discussions around models: whether they need to be more efficient (they are right now extremely intensive, both in compute and energy) or whether we can make them lighter going forward. That is something which is work in progress; I think it will take some time before the small and domain-specific models come in, which will perhaps improve diffusion. But data is something that would need to be AI-ready going forward, and by AI-ready I would probably mean four things. One is discoverable: how do you ensure that data is easily discoverable? That’s perhaps by ensuring that the metadata is understood by everyone, which makes it easier for any models also to understand. Second is the trustworthiness of the data, and that’s the quality assessments that we have: whether it’s trustworthy and credible, and that would determine its use. The third is its interoperability: between two data sets, how interoperable are they, and what kind of unique identifiers does each have, to be able to identify what it is and how to link different data sets. And the fourth is its usability across systems, and that would be dependent on the standardization and the classifications that we use, which are internationally similar, so that different conclusions do not come from the same data set.
And obviously, the focus would have to be on access and dissemination so that it is available for use while preserving the privacy of the data, with the safeguards that would need to be built. And why I am saying this about data is because this would also be where a lot of the local contexts, linguistic contexts, cultural contexts will come in, and that will come in from the data sets that are there. We talk of ensuring that the inferences and the solutions are locally relevant, and I suppose the data would determine that relevance. So, we have to be very careful about that.
And ensure that it’s useful at different levels. So I’ll stop here, apart from saying that for democratizing AI resources, the working group discussed with the others, and a kind of a platform has been suggested going forward, which has been named METRI. METRI in Hindi means friendship, for those who are not aware. And it’s an acronym for multi-stakeholder AI for resilient and, I’m forgetting what the T is for. Now that, sorry, trustworthy. So, and infrastructure. So that’s the acronym that we hope to be able to. But what the concept is, is that on a modular level, on a voluntary-basis level, on a non-commitment level.
how we can develop on the foundational AI resources of availability of compute, data sets, models and talent. And I think the way we are able to develop this and move towards a DPI for AI resources, I am sure diffusion would become all the more easier. So thank you for this opportunity and look forward to a great time. Thank you.
Thank you everyone. And we will carry on. We don’t have enough panels in which all of us are women, so three cheers for that. Don’t look at each other. You guys had a great contribution. So a couple of weeks back, some of us got together and we said: invention has happened in the West, impact has to happen at each one of us. What’s the gap between invention and impact? And that’s where we came out and thought about adoption, and then we said, isn’t it diffusion? And why did we pick diffusion? We actually read a book by Jeffrey Ding. He’s a professor; it’s in D.C. Why am I forgetting the name of the institute?
It’s D.C. Georgetown. Sorry, Georgetown in D.C. And we read about AI diffusion and how GPT, general purpose technology, like electricity, diffused into society over several decades. It was created in Europe but actually diffused in the U.S. quite a lot; the U.S. captured it. And also chemical engineering, which Shankar talked about: the chemical engineering creation, if you remember chemistry, the Bohr model, you know, all those were Germans, but actually it’s the U.S. who capitalized on that. AI is like that. Invention happened in the West. We all know that. But it’s the Global South that is going to have the use cases, who are going to diffuse it into sectors, and the horizontal enablers have to happen across these sectors for us to benefit, for us to have more economic benefit out of AI.
So that’s when all of us said that yes, we will do 100 diffusion pathways by 2030. And one of the partners in crime was Kizom. She is here with us, and Kizom, my first question is to you. Tell us about how you think, because Kenya comes in, you are based in Italy, and we did a tripartite with Kenya, Italy and India. How do you think 100 pathways to 2030 pan out for you, and what does it mean for you? How do you think it will happen?
Absolutely, Shalini. How long do I have to answer this question? Short version, long version? Short version.
As long as people are okay with stories, you can carry on.
Um, well, I mean, as Saurabh-ji, the chair of the working group for democratization of AI, spoke about, there are some fundamental resources or inputs AI needs in order for it to actually work in a way that can help a common citizen or a small business owner, and some of those foundations that he spoke about are AI-ready data and compute. And those are the things that I, in my role at the United Nations Development Programme working in Africa, in parts of Latin America, in Asia, discovered there is a constraint on access to: some of these foundational resources. And so this G7 AI hub was created to address that constraint by, one, unlocking additional resources from, of course, the friendly G7 countries that wanted to focus on parts of Africa.
But also, as we do that, to think about what is the business case for data centers, for GPUs on the continent? How do you break data silos, even though the global south is so rich in data? As well as how do you orchestrate talent, especially since we saw that much of, you know, let’s say Microsoft’s or big tech’s talent pool on the continent of Africa and in other parts of the world, were actually coming from the global south countries. And over the last one year or so, I’ve seen this tremendous momentum of many of the African people who worked in big tech or large companies moving back to the continent because they actually don’t want the continent to be…
left behind. They want to be co-architects of the future, this fundamental shift that humanity is going through. And this is where, when we talk about 100 AI diffusion pathways, it is about co-architecting pathways where we look at how we bring not just language data, but voice adoption into solutions that a smallholder farmer can use, that a woman entrepreneur can use, and not just as pilots, but to think about it from an infrastructure perspective, a digital public infrastructure perspective, where we can scale to millions of farmers, go across national boundaries and be able to look across borders, either as digital public goods or as expansions of private-sector innovations or public-private partnerships. So, as Shankar said, diffusion pathways could be many, and it’s for
Thank you, Kizom. I’ll come to you, Janet. You lead global development for AI across multiple geographies. But most of them are stuck in pilots, right? How does AI become production scale? And do you think it’s only funding that they lack? Or are there more diffusion pathways that we can create so that AI pilots actually move to population scale?
Hello? Hi. Maybe I would first start by saying the problem of pilotitis is actually one that sort of predates AI. And we have many technologies that are enormously beneficial for humanity that I think are currently still stuck, not having diffused. But when I think about the positive examples, the places where I think as a global community we’ve had tremendous scaled impact, right? Reducing child mortality by half since 2000, 170 million people out of extreme poverty. The common threads are often that we’ve managed to figure out how to get both governments and markets to really focus and work for the most vulnerable populations. And so whether it’s vaccines that we’re talking about or instant payment systems, often it is really just ensuring that government is there at the design phase, at the table, in the driver’s seat, not brought in after the pilot results come in.
It is very much focused on making sure that we make it easier for local innovators to be able to enter markets. So whether that’s aggregating low-margin demand or streamlining market entry, it is really making it easy to lower the cost to serve the most vulnerable people at the edge. And then it is very much also building institutional capacity and making sure that, you know, there’s playbooks and training and all of that, but really shared infrastructure that allows sort of all boats to rise, and making sure that that infrastructure is trustworthy, is inclusive. That sort of creates, I think, a really positive feedback loop, because I loved what Nandan Nilekani expressed, which is that we, you know, really rely on institutions for trust, not on algorithms.
And I think one of the ways that institutions become trustworthy is by being inclusive and making sure that they actually serve the people that would otherwise be less likely to benefit.
Yeah, absolutely. I think that’s key: how do you trust the institutions? And AI output, you know, suppose it’s coming out of an AI advisory application. Do you trust that, or do you trust the institution which is giving it in a physical form? Or will the institution adopt this AI advisory, so that there’s more trust in the advice itself being given? I mean, that’s quite a hybrid and risky matter, and institutions have to understand AI, adopt it, and first trust the AI output before they say that this is ours. I think that part is key to AI adoption. Bia, tell us about Brazil, you know, a very different perspective. Just let us first understand: how is the AI adoption in that region?
And are you also stuck in this pilot? pilot to production and is there a gap and how do you see that being bridged?
Perfect. So I think there are many different ways and perspectives to think about AI. In the Brazilian government we chose to establish a vision of one government for each person. That means we are going fully for personalization, and even for the agentic-state vision. For that we need to be thinking about shared infrastructure and shared capabilities. So what we did was start with the data. We have a project now not just to catalog but also to prepare the data sets for training. We are also building shared platforms for personalization and for understanding citizens' characteristics. Within our state-owned enterprises, we have two large IT state-owned enterprises, and we are making them collaborate on a shared platform in which we have some canonical data sets about citizens; every ministry contributes different characteristics, and we are creating different labels for every citizen.
Then a different way in which we are trying to break the data silos, which of course is a very big issue, is to think about data ecosystems. We came up with this concept, and it does not mean that we are doing data lakes. It means we are thinking about interoperability from a thematic perspective. One example is the early-childhood data ecosystem. We know that a lot of policies related to early childhood have different data requirements and need to use similar registries, so we are going to look at those different data systems together. So we created this ecosystem.
We brought together five ministries: Health, Education, Social Development, Management, and Human Rights. We cataloged the policies and what kind of data would be needed, and then we started creating the standards for that specific ecosystem. So we prioritized two ecosystems: early childhood, and environment, land, and climate, which are in the same group. We are starting with that, and it seems to be an interesting approach; it seems to be working. The other thing, coming back to the DPI discussion, is that it is very helpful for us to have digital ID and authentication to implement this vision. So what we are doing now... well, a lot of people in the government want to do Gen AI, right?
because I think it is the easiest and maybe most famous type of AI implementation. So a lot of government entities and ministries wanted to do their own chatbots; it was being spread all over. What we did was to centralize that capability. We started with informational chats: what kind of policies or information would be helpful. Now we are just starting the transactional part of the chatbots, so the idea is that a citizen will be able to actually complete a service request or get their service done through the chat. And that is only possible because we have the gov.br authentication, so we know that the person is actually the right person.
And then the third step, which we still have not entered, but that is the vision, is the agentic state: building an agent specific to each person. That will only be able to happen once we have the data platform infrastructure. So that is more or less how we are thinking about it.
Okay. And thanks for bringing DPI into the picture, because my next question is on that. Nandan announced 100 Pathways to 2030 yesterday, and that comes from a lot of experience with DPI. And Kizom, my next question is to you: you were also on the DPI journey, working with India. DPI lays down rails, roads, which other countries can then take. In AI, how do use cases cross borders? What are the pathways? What are the playbooks that different countries can benefit from? How do you think that can happen?
Shalini, great question. And I'm assuming this room is fully aware of, or is a user of, digital public infrastructure. Raise your hands if you're not. Oh my God. One or two people. Okay, we're not going to get into that right now. You use UPI, right? You use DigiLocker. You don't use DigiLocker, but you use DigiYatra. No? Okay. I think you should. But you use UPI. Okay, so he's a DPI user. And that's the beauty of digital public infrastructure: you actually want it to be invisible. And one of the ambitions that we have as part of the AI diffusion pathways is that we actually don't want AI to be this noisy, chaotic technology.
We want it to be so invisible that it is actually part of your life. Not just our life (for us it is obviously very convenient; we are English speakers, and so we are at the summit) but the life of a smallholder farmer, a micro-entrepreneur, a woman crossing the border between Guinea and Sierra Leone, for example. So, to go back to your question, Shalini: one, as Bia was already starting to say, as she is seeing in Brazil, and as we are certainly seeing in many parts of the world including India, when you have data that is already interoperable, and public rails such as identity, payments, and data exchange, then the power of AI is much easier to bring to that same service you wanted to deliver, now an AI chatbot reaching a farmer on those rails. That is fantastic. But we are also seeing the emergence of additional rails, and I think that is a great point. For those of you who are from India, you have probably heard of someone using Bhashini, which is built on AI4Bharat and the Indic language stack.
So that is definitely a public rail. And I know that in different parts of the world, many such rails are being created. I hope that we see the emergence of rails, but also the convergence of rails, because, as the French president was saying yesterday alongside Honourable Prime Minister Modi, it is not that we need to do more; it is that we need to do better together. This is where the public rails really need to come together. And then I want to recognize Celina from Zindi, here from Africa. She runs public infrastructure: a network of 100,000 data scientists across Africa. That is already infrastructure, public interest, public value. And we are at a place where we are trying to figure out what the business case is,
how we still make them sustainable by creating those innovation layers on top of the public rails that are also emerging in AI. But it is not that you have to choose between DPI and AI: the DPI principles of interoperability, modularity, and reusability, of becoming a digital public good, remain quite intact, and this is how we might see the scale towards population-scale impact.
Thank you, Kizom, for explaining it so well. And actually that is happening, because it is not just language: multilinguality and voice AI are becoming DPI. You should be able to interact in voice, and the voice stack should be available for most people to build on top of. Safety and the guardrails can become DPI in themselves. How do you do safe conversations in agriculture? How do you do safe conversations if someone is calling up for patient care in health care? And can those conversations become a playbook in themselves? These are the playbooks that can get created. So thank you so much for talking about it.
I'll come to you, Janet, about the frictions that are there. Do you think there could be certain programs or investments to remove such frictions? Because everybody is building the full stack: hey, I'll do language translation; hey, I need compute, I need data. So how do you remove the frictions, and do you think some programs and investments can help with this?
You know, I was thinking about this question, and the example that came to mind, which maybe illustrates it really well, is the MOSIP program, which is really an open-source platform. It is inspired by the Aadhaar program, but it is part of a larger effort with the World Bank and many other partners to take that open-source, vendor-lock-in-free national ID system and bring it to many, many countries. When I thought about the programmatic components of that, I know a lot of it was around ensuring that there was an open, production-ready reference implementation. And maybe, if we are going to continue the road analogy, I was trying to think of what that would be.
If you have a road, you still need to pick sides of the road and agree on which side everyone is going to drive. You have to agree that a stop sign means stop and that red means stop. So there is still, I think, a set of programmatic standards and norms that makes it easier not only for adoption, but for those who have adopted to then be able to benefit from that adoption. And a lot of what happened programmatically in something like MOSIP is that, in addition to the technical implementation, there was a lot of operational support, a lot of examples, countries visiting each other.
And India has sent many delegations to many countries to help explain their story and share their pathway. There is training that needs to happen: you still have to get your driver's license and prove that you know how to drive. So even after building the rails, there is still plenty of program implementation work to really help facilitate and lubricate that adoption. And, of course, financing as well, which came through the World Bank program. So there is no single silver bullet; even after the rails are set, there is still a lot of work to be done in program implementation and operational support.
Thank you. Bia, what is the hardest challenge? This all sounds very easy: have diffusion pathways, go and build it. But it has to be operational; it has to be adopted. There are people involved, right? The human in the loop is the most important thing in AI; we can never ignore that. What is the hardest challenge that you see in this? Just one? Just one? Oh, we're lucky.
about those. So obviously, it is not just about creating applications. It is the same old story of digital transformation, just at a different level: you have got to change the processes, the way things work. So, maybe three interesting things that we are trying to do. And I am not trying to sell anything; everything we are doing is still being tested, so let us see what works and what does not. One thing we are doing now is that in the Ministry of Management we have a Secretariat for Shared Services, and they did not use to work with AI. The idea is that we make it very, very simple for any ministry to use a service that is centralized in the Ministry of Management.
For example, with those chatbots I was telling you about, we centralized the procurement and chose one vendor to help us build the solution, whereas before, each ministry was doing its own. So we said: hey, if you buy it through the centralized service, it takes just a few hours; you just need to sign a document and transfer some money digitally to the Ministry of Management, and you can use the service. You do not have to go through any procurement. That is one way we are trying to overcome the problem of multiple solutions and difficult implementation.
We also came up with an interesting institutional arrangement, I think: when we talk about AI, we are talking about innovation and new capabilities, and we are building those innovation capabilities through the Ministry of Management. That means they are building the whole process of how you first come up with a policy goal, what the AI project is going to target, and how you experiment. They build the process for experimenting; they have analysts looking at the data and seeing whether things are working. That seems to be working well, and we think it is going to be good. The other real challenge we have, I think, is with the vendors. And here I am using my development hat, from my previous background.
Everyone is talking about AI and about how every agency and ministry needs to be doing something on AI. And obviously there are some big vendors saying: you, government, do not have the capabilities; we have the capabilities; we can do it very fast, we do it at scale. If you keep making that decision day after day, you are not going to build any capabilities. You are just going to outsource. I use an analogy with, for example, the army: no one thinks it is reasonable to outsource your army to a country that has a stronger or better army. But in digital we are doing it every day: for every decision, oh, this company does it better, so we are just going to outsource. And there are some essential capabilities at stake; it is not just an AI tool. We are dealing with national data, and we have some very strategic goals as well. So I think we have to build these capabilities, even if you start small and it takes a while to build the muscles. We have got to build the muscles. So we are also trying to incentivize the agencies to test and experiment and not buy prepackaged solutions, because we have got to build our own muscles.
Yeah, I think you brought up a very valid point: what a lot of people are scared of is vendor lock-in. And you would have seen Amul AI, which was launched by the Prime Minister and which the EkStep Foundation made possible. One key thing there was keeping it multi-model: multiple models should be able to do it, why just one? That has been a key thing: how do you give choice to people, how do you keep them from being locked into the system? Because that is where diffusion works.
Diffusion is not about concentrating Western LLMs all together and just deploying them. It is about actually walking the path: give choice and replaceability, have domain knowledge, have the data with you, because the data is there in our enterprise systems and we do not want others learning from it. How can you separate that? And now this know-how that we have gained, we want to share with everybody. That playbook is a diffusion pathway, and that gives an example of it. Kizom, you and I co-authored a paper, which is up at the Atlantic Council, and in it we also talk about the use case adoption framework. Would you like to tell people about the use case adoption framework and how it can be a friction remover?
Oh, absolutely. And I am looking for the key author of the use case adoption framework, Tanvi Lal, director at People Plus AI at the EkStep Foundation. So, when we were preparing for the AI Impact Summit many months ago, which feels like many years ago, we started with this idea: adoption is proving to be a challenge; what are we learning from our experience? This is where EkStep looked at Mahavish Star, its work with AI4Bharat, and its ongoing conversations with Anthropic and other private-sector companies on safety tooling. And I did the same across a number of countries; together we consulted about 20-plus countries and held convenings from South Africa to New York to many, many more places, along with the Gates Foundation as well.
And what we learned was that the impact of a technology like artificial intelligence sits in sectors (education, health, climate change), but its ability to move from pilot to scale depends on horizontal unlocks. So underpinning these 100 AI diffusion pathways is a framework we call the use case adoption framework. We see impact in sectors, where you need contextual data, contextual knowledge, processes, workflows, the things that have to change in a department of education, a department of health, and so on. But then the horizontal unlocks are language, data, compute. Generally: how do you make data AI-ready? How do you make data interoperable, because a farmer is going to be buying things, selling things, and getting public services?
So we have to think about it from the perspective of a user's life. That is really a bit about the use case adoption framework that we have developed together with countries, the Gates Foundation, and EkStep. And we hope that it helps us ground our 100 AI diffusion pathways, because, as Shalini was saying, this is not about just going and saying: I have the solution, you adopt it. We are not going to see impact with that approach. We will have to co-design pathways; we will have to fuse verticals and horizontals. And this is where, at least when I talk to many innovators and private-sector companies in the Global South, I see them saying: aha, this is how we co-architect the future.
This is where, when we develop a voice optimization solution as a public good, it goes out to the world. We are builders of the future too. It is just such a powerful learning that we have put together into these 100 AI diffusion pathways towards impact.
Thank you. Thank you, Kizom. I am looking at the time, and I would like to take two questions from the audience. So please raise your hand. Yes, I saw yours first. Would anybody like to take it?
Yeah, I was distracted by the crowd that is coming in; we are getting kicked out, guys. I think your question was how to address diversity in diffusion: if you cannot read, can you hear? This is where I think voice adoption is key to the inclusion agenda and the impact agenda of AI. So, to answer your question: voice adoption.
Yeah, actually that is why AI becomes an equalizer; it actually bridges the divide. There are inequalities, and how do you bring in a new language? Today, bringing a new language into a model has become fairly easy. There is data locked in PDFs across various regions, and people do not realize that this has become easier today. That is how it is a leveler. That is a trusted source. So I will maybe talk to you later about what Mr. Saurabh Garg said, and about evidence. One last question? Yes. I think you are talking about a pivotal moment. I am not a fortune teller, but what I can do is understand the AI ecosystem. I think multilinguality can be one very big change, because it draws people in. And what is change about? Change is always about people. When UPI was initially talked about, the banks said: I will have to change my whole system for it. The user-friendliness of it, and the fact that it is so easy to deploy and use, is what drew people to it. So any AI moment that draws people in, through interoperability and usability, will itself become such a moment. Has it happened? No. Can it happen? Yes. And multilinguality is one candidate, but we have to see how it pans out. Okay, thank you so much. Thank you very much. We have been kicked out of the room. A great panel. Thank you, bye.
Thank you, everyone, for joining us and sharing your thoughtful views. On behalf of the India AI team, we would like to offer a souvenir with our sincere thanks. Thank you so much. Thank you. Thank you.
Event“The moderator thanked everyone for joining, asked panelists Mr Shankar and Mr Saurabh to pose for a quick photograph, and invited the Secretary of MOSIP India, Mr Saurabh Garg, to deliver the keynote address.”
The moderator’s thank-you is recorded in the session notes [S86] and the invitation for a group photograph and the presence of Mr Shankar Maruwada are described in the opening remarks [S87].
“Garg characterised artificial intelligence as “a solution in search of a problem” and argued that AI will generate value only when concrete use‑cases are identified.”
A comment highlighting that AI exists but lacks a clear problem-solution fit matches Garg’s statement and is documented in the discussion summary [S4].
“Garg outlined a four‑point rubric for “AI‑ready” data: discoverability through common metadata, trustworthiness via quality assessments, interoperability enabled by unique identifiers, and usability ensured by internationally aligned standards.”
Dr Saurabh Garg's description of the four essential elements for AI-ready data infrastructure (discoverability, trustworthiness, interoperability, and usability) is recorded in the transcript notes [S16].
“He noted that access must be balanced with privacy safeguards and that locally relevant data are essential for context‑specific AI solutions.”
The need to balance data access with privacy protections is explicitly mentioned in the discussion summary [S93]; the emphasis on local relevance aligns with broader remarks on agency and co-creation in digital public infrastructure [S12].
“The moderator announced the ambition of 100 AI diffusion pathways by 2030.”
The initiative to create 100 AI diffusion pathways by 2030 is documented in the session summary [S100].
The panel demonstrates strong convergence around four core themes: (1) framing AI as a Digital Public Infrastructure with shared, trustworthy rails; (2) establishing AI‑ready data through standards, metadata and interoperability; (3) moving beyond pilot projects via early government involvement and shared service models; (4) ensuring inclusivity through multilingual/voice capabilities while avoiding vendor lock‑in by promoting open‑source, modular solutions. Horizontal enablers—compute, talent and models—are repeatedly identified as prerequisites for the 100‑pathway ambition.
High consensus – most speakers echo each other’s positions, indicating a shared understanding that scaling AI responsibly requires DPI‑style governance, data standards, institutional coordination and open, inclusive technology stacks. This broad agreement suggests that future policy initiatives can build on these common foundations to design coordinated diffusion strategies.
The panel shows broad consensus on the need to scale AI from pilots to production and to avoid vendor lock‑in, but diverges on the preferred architecture and governance model—government‑led DPI with strict standards versus open‑source, multi‑stakeholder rails. Additional tension exists around the role of external assistance (G7 AI Hub) versus building domestic capacity. These disagreements reflect differing national experiences and strategic priorities, suggesting that a one‑size‑fits‑all roadmap may be difficult to achieve without flexible, context‑specific pathways.
Moderate – while all participants share the overarching goal of AI diffusion, they propose distinct routes (centralized government standards, open‑source modularity, external resource hubs). The implications are that policy coordination will need to accommodate multiple models and negotiate trade‑offs between standardisation, sovereignty, and openness.
The discussion evolved from a high‑level framing of AI as a nascent technology to a nuanced blueprint for turning AI into a trusted, interoperable public infrastructure. The most pivotal moments were Saurabh Garg’s articulation of AI as DPI and the data‑readiness framework, Janet Zhou’s ‘pilotitis’ diagnosis with a governance solution, and the concrete national examples (Brazil’s shared data ecosystem and the MOSIP analogy). These comments redirected the conversation from abstract possibilities to actionable standards, cross‑border collaboration, and capacity‑building, ultimately converging on a unified AI adoption framework that ties vertical sectoral impact to horizontal foundational resources. The panel’s flow was repeatedly reshaped by these insights, moving the tone from problem‑identification to solution‑design and setting a clear agenda for future diffusion pathways.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.