Collaborative AI Network – Strengthening Skills Research and Innovation

20 Feb 2026 12:00h - 13:00h


Session at a glance

Summary

This discussion focused on AI diffusion and the goal of creating 100 AI diffusion pathways by 2030 to bridge the gap between AI invention in the West and meaningful impact in the Global South. The panel, moderated by Shalini and featuring speakers from India, Kenya, Brazil, and international development organizations, explored how AI can move from pilot projects to population-scale implementation.


Saurabh Garg from India’s Ministry of Statistics emphasized the importance of AI-ready data as a foundational resource, highlighting four key requirements: discoverability, trustworthiness, interoperability, and usability across systems. He introduced METRI, a proposed platform for democratizing AI resources including compute, datasets, models, and talent. The discussion drew parallels between AI diffusion and the success of Digital Public Infrastructure (DPI) like India’s UPI system, suggesting that AI could follow similar rails for widespread adoption.


Kizom from the UN Development Program discussed constraints in accessing foundational AI resources across Africa and the need for co-architecting solutions that incorporate local contexts, languages, and cultural nuances. Janet Zhou emphasized the importance of government involvement from the design phase and building institutional capacity to move beyond “pilotitis” – the common problem of technologies remaining stuck in pilot phases.


Beatriz from Brazil’s government shared their approach of creating shared AI infrastructure and data ecosystems, particularly focusing on early childhood services that span multiple ministries. She stressed the importance of building internal capabilities rather than relying entirely on vendor solutions to avoid dependency and maintain strategic autonomy.


The panel concluded that successful AI diffusion requires combining sectoral impact with horizontal enablers like multilingual capabilities, interoperable data systems, and inclusive design principles that serve vulnerable populations at scale.


Key points

Major Discussion Points:

AI as Digital Public Infrastructure (DPI): Discussion of how AI can become a trusted, interoperable, and shareable digital public infrastructure similar to India’s Aadhaar or UPI systems, with emphasis on the four foundational AI resources: compute, datasets, talent, and models.


AI-Ready Data and Interoperability: Focus on making data discoverable, trustworthy, interoperable, and usable across systems through proper metadata, quality assessments, unique identifiers, and standardized classifications to enable effective AI implementation.


Moving from Pilots to Production Scale: Addressing the challenge of “pilotitis” where AI projects remain stuck in pilot phases rather than scaling to population-level impact, with emphasis on government involvement from the design phase and building institutional capacity.


The 100 AI Diffusion Pathways Initiative: Introduction of a collaborative framework between countries (particularly India, Kenya, Italy, and Brazil) to create pathways for AI adoption that bridge the gap between invention in the West and impact in the Global South through 2030.


Localization and Voice Technology as Equalizers: Discussion of how multilingual capabilities and voice AI can make technology more inclusive and accessible, particularly for non-English speakers and those with limited literacy, serving as a bridge to reduce digital divides.


Overall Purpose:

The discussion aimed to explore strategies for accelerating AI adoption and diffusion globally, particularly focusing on how countries in the Global South can move beyond pilot projects to achieve population-scale impact through collaborative frameworks, shared infrastructure, and inclusive design principles.


Overall Tone:

The discussion maintained a collaborative and optimistic tone throughout, with speakers sharing practical experiences and solutions rather than dwelling on challenges. The conversation was forward-looking and solution-oriented, with participants building on each other’s ideas and emphasizing international cooperation. The tone remained consistently professional and constructive, even when addressing complex challenges like vendor lock-in and institutional capacity building.


Speakers

Speakers from the provided list:


Saurabh Garg – Secretary of MoSPI (India’s Ministry of Statistics and Programme Implementation); chaired the Democratizing AI Resources working group of the AI summit


Janet Zhou – Leads global development for AI across multiple geographies


Beatriz Vasconcellos – Works with the Brazilian government on AI implementation and digital transformation


Speaker 1 – Moderator/Host of the panel discussion


Speaker 2 – Co-moderator, involved in organizing the AI Impact Summit and co-authored a paper with Atlantic Council


Speaker 3 – Works with the United Nations Development Program in Africa, parts of Latin America and Asia, involved in G7 AI hub creation, co-authored paper on use case adoption framework


Additional speakers:


Kizom – Based in Italy, works with United Nations Development Program, involved in tripartite collaboration with Kenya, Italy and India, co-authored Atlantic Council paper on AI diffusion


Shalini – Co-moderator, involved in organizing 100 AI diffusion pathways by 2030 initiative


Mr. Shankar – Previously spoke about AI use cases (referenced but not directly quoted in transcript)


Nandan – Referenced as announcing 100 Pathways to 2030 (likely Nandan Nilekani based on context)


Tanvi Lal – Director at People Plus AI at EkStep Foundation, key author of the use case adoption framework


Selena – From Zindi, runs a network of 100,000 data scientists across Africa


Full session report

This discussion on AI diffusion brought together government officials and international development experts to explore strategies for bridging the gap between AI invention and meaningful impact in the Global South. The panel featured Shalini as moderator, with speakers including Saurabh Garg from India’s Ministry of Statistics, Janet Zhou from international development, Kizom from the UN Development Programme (based in Italy), and Beatriz Vasconcellos from Brazil’s Ministry of Management. The discussion centered on the goal of creating 100 AI diffusion pathways by 2030.


AI as “Solution in Search of a Problem”

Saurabh Garg opened with the observation that “AI is perhaps something like a solution in search of a problem,” emphasizing that without clear use cases, AI cannot deliver its potential value. He positioned this challenge within the context of making AI a Digital Public Infrastructure (DPI), similar to India’s successful systems like Aadhaar or UPI, which requires AI to become trusted, interoperable, and shareable.


Garg noted how successful digital infrastructure becomes invisible to users—people use UPI payments without consciously thinking about the underlying technology. This invisibility represents the ultimate success of digital infrastructure through seamless integration into daily life.


Democratizing AI Resources and the METRI Initiative

Drawing from his experience chairing the democratizing AI resources working group at the AI summit, Garg outlined four foundational AI resources that need democratization: compute, datasets, talent, and models. He emphasized that different mechanisms would be needed for each resource type, with particular focus on datasets as the “raw material for AI models.”


Garg introduced the METRI platform (Multi-stakeholder AI for Resilient and Trustworthy Infrastructure)—though he initially admitted forgetting what the “T” stood for—named after the Hindi word for friendship. This platform would operate on voluntary, modular, and non-commitment principles, allowing countries to collaborate on foundational AI resources without binding obligations.


He also mentioned specific examples like “Bhashini” as AI public rails for Indic languages and “Amul AI” launched by the Prime Minister as an example of a multi-model approach.


Data Readiness Requirements

Garg detailed four critical requirements for AI-ready data: discoverable (through proper metadata), trustworthy (through quality assessments), interoperable (through unique identifiers and linking capabilities), and usable (through standardized classifications). This framework addresses the fundamental challenge of ensuring data can be effectively utilized across different systems while preserving privacy and security.
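The four requirements could, in principle, be encoded as a simple validation pass over a dataset’s catalogue entry. The sketch below is purely illustrative: the field names (`metadata`, `quality_score`, `identifier_scheme`, `classification`) and the thresholds are our assumptions, not part of any framework Garg described.

```python
# Illustrative sketch: scoring a dataset catalogue entry against the four
# AI-readiness requirements (discoverable, trustworthy, interoperable, usable).
# All field names and thresholds here are hypothetical, not from any standard.

def ai_readiness(entry: dict) -> dict:
    return {
        # Discoverable: machine-readable metadata is present
        "discoverable": bool(entry.get("metadata")),
        # Trustworthy: a quality assessment has been recorded and passes a bar
        "trustworthy": entry.get("quality_score", 0.0) >= 0.8,
        # Interoperable: records carry a unique-identifier scheme for linking
        "interoperable": bool(entry.get("identifier_scheme")),
        # Usable: fields follow an internationally standard classification
        "usable": entry.get("classification") in {"ISIC", "ICD-10", "COFOG"},
    }

entry = {
    "metadata": {"title": "Household survey 2024", "license": "CC-BY-4.0"},
    "quality_score": 0.92,
    "identifier_scheme": "national-id-hash",
    "classification": "ISIC",
}
print(ai_readiness(entry))  # all four checks pass for this entry
```

A catalogue that fails the discoverability check, for instance, would be invisible both to human users and to models consuming its metadata, which is why Garg puts metadata first.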


Brazil’s Integrated Government Approach

Beatriz Vasconcellos described Brazil’s comprehensive AI strategy centered on “one government for each person” through personalized AI services. Their approach involves shared data platforms with canonical datasets about citizens, centralized AI capabilities to prevent fragmented solutions across ministries, and eventual development of personalized government agents.


Brazil’s practical solutions include centralizing chatbot procurement to prevent each ministry from developing separate solutions, creating shared services through the Ministry of Management, and establishing data ecosystems that enable cross-ministry collaboration. Vasconcellos provided a specific example of their early childhood ecosystem, which brings together five ministries—Health, Education, Social Development, Management, and Human Rights—to catalog policies and create standards for shared data use.


Their implementation follows a three-step approach: starting with informational chatbots, progressing to transactional chatbots (enabled by their gov.br authentication system), and eventually developing agentic state capabilities.
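The three steps can be pictured as a capability ladder, where each tier unlocks only when its prerequisite (authentication for transactions, a shared data platform for agents) is in place. This is a hypothetical sketch; the gating rules and function names are our illustration of the dependencies described in the session, not the Brazilian government’s design.

```python
# Hypothetical sketch of the three-tier chatbot progression described above:
# informational -> transactional -> agentic. The gating logic is our own
# illustration of the stated prerequisites (gov.br-style authentication,
# then a shared citizen data platform).

TIERS = ["informational", "transactional", "agentic"]

def available_tiers(has_authentication: bool, has_data_platform: bool) -> list[str]:
    tiers = ["informational"]          # always possible: answer policy questions
    if has_authentication:             # verified identity enables transactions
        tiers.append("transactional")
        if has_data_platform:          # shared citizen data enables personal agents
            tiers.append("agentic")
    return tiers

print(available_tiers(True, False))   # ['informational', 'transactional']
```

Brazil, by this account, currently sits at the second rung: authentication exists, while the data platform needed for the agentic state is still being built.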


Addressing “Pilotitis” and Scaling Challenges

Janet Zhou highlighted the persistent challenge of “pilotitis”—technologies remaining stuck in pilot phases rather than scaling to population-level impact. She noted that successful scaling examples like vaccine distribution or instant payment systems share common characteristics: government involvement from the design phase, focus on vulnerable populations, and building institutional capacity.


Zhou emphasized that trust in AI systems comes through institutions rather than algorithms themselves, reframing adoption as a governance issue requiring institutions to become trustworthy through inclusive practices.


International Cooperation and Knowledge Sharing

Kizom discussed work across Africa, Latin America, and Asia, mentioning co-authoring a paper (referenced as an Atlantic Council paper with Shalini) based on consultations with some 20-plus countries. The discussion revealed the importance of distinguishing between sectoral impact (where AI creates value in specific areas like education or health) and the foundational infrastructure capabilities that enable scaling.


Kizom also mentioned Selena from Zindi, who runs a network of 100,000 data scientists across Africa, as an example of existing infrastructure that can support AI development.


Digital Sovereignty and Capability Building

The speakers emphasized building domestic AI capabilities rather than outsourcing critical functions. Vasconcellos used a striking analogy: “no one thinks it’s reasonable to outsource your army to a country that has a stronger army,” yet countries routinely outsource critical digital capabilities. This perspective frames AI development as a matter of strategic autonomy.


The discussion highlighted the importance of avoiding vendor lock-in through multi-model approaches and building internal capabilities, even if initial efforts are less sophisticated than available commercial solutions.


Multilingual AI and Inclusion

A recurring theme was AI’s potential as an equalizer through voice technology and multilingual capabilities. Speakers noted that voice adoption can bridge literacy gaps and make AI accessible to populations who cannot read, while multilingual capabilities can bring new languages into AI systems more easily than before.


Moving Forward

The panel, which Shalini noted was composed entirely of women panelists, concluded with recognition that achieving meaningful AI diffusion requires coordinated action across multiple dimensions. The discussion revealed that AI diffusion is not primarily a technical challenge but involves governance, institutional, and social considerations.


The conversation positioned AI diffusion as an opportunity for countries in the Global South to become co-architects of humanity’s AI future, developing solutions that reflect local contexts and needs while contributing to global knowledge. The speakers demonstrated alignment around principles such as AI as digital public infrastructure, democratizing foundational resources, and the importance of government involvement from design phases.


The session ended with the moderator noting they needed to leave the room for the next session, indicating this was part of a larger multi-session AI summit.


Session transcript

Speaker 1

request all the panelists along with Mr. Shankar and Mr. Saurabh for a picture, please, because everyone has different schedules. So we just want to get a quick photo of this moment before we move ahead. Yeah, content first. All right. Thank you so much. Panelists, you can take your seats. To take us forward, I’d like to invite Mr. Saurabh Garg, who is the Secretary of MoSPI, India, to deliver a keynote. If you can take us forward. Thank you so much.

Saurabh Garg

Thank you. Good afternoon, and great to be here for this session. We’re talking of diffusion, AI diffusion. I’ll just speak of one or two aspects of it, because I’m sure the panelists will lend a lot more color to this topic. Just to take off where Shankar left: sometimes he’s talking about use cases, and that’s very necessary, because AI is perhaps something like a solution in search of a problem. So until and unless we find use cases for it, it will not be able to give the value that it potentially can, and I think that’s really, really important. We’re talking of AI being a possible DPI, a digital public infrastructure, and I suppose for that some steps would be needed to ensure that it becomes trusted, interoperable and shareable.

I think those are aspects which a DPI like Aadhaar or UPI has, and I think we are still in early days, but the mechanisms for that, how we can ensure that it would be possible. And given that, we talk of four resources as foundational AI resources: compute, data sets, talent and models, apart from obviously the frameworks that would be necessary for this. And I mention this because I had the privilege of chairing the Democratizing AI Resources working group of the AI summit, and the various options that we discussed with other countries on how we can ensure democratization of these four foundational resources. Obviously each of them would have a different mechanism, but one thing I would go into in slightly greater detail is the data sets part, which is also something that we are doing within the Ministry of Statistics, across different ministries and states. And why I am saying data sets is also because perhaps data is the raw material for AI models, so it’s a very foundational resource in that sense. And compute is perhaps something that we can acquire, and therefore we have discussions around models, whether they need to be more efficient. They are right now extremely compute and energy intensive, or we can make them lighter going forward; that is something which is work in progress. I think it will take some time before the small and domain-specific models come in, which will perhaps improve diffusion. But data is something that would need to be AI-ready going forward.

And for AI-ready, I would probably mark four things that it needs to be. One is discoverable: how do you ensure that data is easily discoverable? That’s perhaps by ensuring that the metadata is understood by everyone, and that makes it easier for any models also to understand. Second is the trustworthiness of the data, and that’s the quality assessments that we have, whether it’s trustworthy and credible, and that would determine its use. The third is its interoperability: given two data sets, how interoperable are they, what kind of unique identifiers do they have to be able to identify what they are and to link different data sets. And the fourth is its usability across systems, and that would be dependent on the standardization and the classifications that we use, which are internationally similar, so that different conclusions do not come from the same data set.

And obviously, the focus would have to be on access and dissemination, so that it is available for use while preserving the privacy of the data, with the safeguards that would need to be built. And why I am saying this about data is because this would also be where a lot of the local contexts, linguistic contexts, cultural contexts will come in, and that will come in from the data sets. We talk of ensuring that the inferences and the solutions are locally relevant, and I suppose the data would determine its relevance. So we have to be very careful about that and ensure that it’s useful at different levels.

So I’ll stop here, apart from saying that for democratizing AI resources, the working group discussed with the others, and a kind of platform has been suggested going forward, which has been named METRI. METRI in Hindi means friendship, for those who are not aware. And it’s an acronym for multi-stakeholder AI for resilient and, I’m forgetting what the T is for. Sorry, trustworthy. And infrastructure. So that’s the acronym that we hope to be able to use. But the concept is that on a modular level, on a voluntary-basis level, on a non-commitment level,

how we can develop on the foundational AI resources of availability of compute, data sets, models and talent. And I think as we are able to develop this and move towards a DPI for AI resources, I am sure diffusion will become all the easier. So thank you for this opportunity, and I look forward to a great time. Thank you.

Speaker 2

Thank you everyone. And we will carry on. We don’t have enough panels in which all of us are women, so three cheers for that. Don’t look at each other, you guys had a great contribution. So a couple of weeks back, some of us got together and we said: invention has happened in the West. Impact has to happen at each one of us. What’s the gap between invention and impact? And that’s where we came out and thought about adoption, and then we said, isn’t it diffusion? And why did we pick diffusion? We actually read a book, a book by Jeffrey Ding. He’s a professor at, it’s in D.C. Why am I forgetting the name of the institute?

It’s D.C. Georgetown. Sorry, Georgetown in D.C. And we read about AI diffusion, and that a GPT, a general purpose technology like electricity, diffuses into society over several decades. Electricity was created in Europe but actually diffused in the U.S. quite a lot; the U.S. captured it. And also chemical engineering, which Shankar talked about: the chemical engineering creation, if you remember chemistry, the Bohr model, you know, all those were Germans, but actually it’s the U.S. who capitalized on that. AI is like that. Invention happened in the West, we all know that. But it’s the Global South that is going to have the use cases, that is going to diffuse it into sectors, and the horizontal enablers have to happen across these sectors for us to benefit, for us to have more economic benefit out of AI.

So that’s when all of us said that yes, we will do 100 diffusion pathways by 2030. And one of the partners in crime was Kizom. She is here with us, and Kizom, my first question is to you. Tell us how you think about this, because Kenya comes in, you are based in Italy, and we did a tripartite with Kenya, Italy and India. How do 100 pathways to 2030 pan out for you, and what does it mean for you? How do you think it will happen?

Speaker 3

Absolutely, Shalini. How long do I have to answer this question? Short version or long version? Short version.

Speaker 2

As long as people are okay with stories, you can carry on.

Speaker 3

Um, well, I mean, as Saurabh-ji, the chair of the working group for democratization of AI, spoke about, there are some fundamental resources or inputs AI needs in order for it to actually work in a way that can help a common citizen or a small business owner. And some of those foundations that he spoke about are AI-ready data and compute. And those are the things that I, in my role at the United Nations Development Programme, working in Africa, in parts of Latin America and in Asia, discovered: there is a constraint on access to some of these foundational resources. And so this G7 AI hub was created to address that constraint by, one, unlocking additional resources from, of course, the friendly G7 countries that wanted to focus on parts of Africa.

But also, as we do that, to think about what is the business case for data centers, for GPUs on the continent? How do you break data silos, even though the global south is so rich in data? As well as how do you orchestrate talent, especially since we saw that much of, you know, let’s say Microsoft’s or big tech’s talent pool on the continent of Africa and in other parts of the world, were actually coming from the global south countries. And over the last one year or so, I’ve seen this tremendous momentum of many of the African people who worked in big tech or large companies moving back to the continent because they actually don’t want the continent to be… left behind.

They want to be co-architects of the future, this fundamental shift that humanity is going through. And this is where, when we talk about 100 AI diffusion pathways, it is about co-architecting pathways where we look at how we bring not just language data but voice adoption into solutions that a smallholder farmer can use, that a woman entrepreneur can use, and not just as pilots, but to think about it from an infrastructure perspective, a digital public infrastructure perspective, where we can scale to millions of farmers, go across national boundaries and be able to look across borders, either as digital public goods or as expansion of private sector innovations or public-private partnerships. So as Shankar said, diffusion pathways could be many, and it’s for

Speaker 1

Thank you, Kizom. I’ll come to you, Janet. You lead global development for AI across multiple geographies. But most of them are stuck in pilots, right? How does AI become production scale? And do you think it’s only funding that they lack? Or are there more diffusion pathways that we can create so that AI pilots actually reach population scale?

Janet Zhou

Hello? Hi. Maybe I would first start by saying the problem of pilotitis is actually one that sort of predates AI. And we have many technologies that are enormously beneficial for humanity that I think are currently still stuck, not having diffused. But when I think about the positive examples, the places where I think as a global community we’ve had tremendous scaled impact, right? Reducing child mortality by half since 2000, 170 million people out of extreme poverty. The common threads are often that we’ve managed to figure out how to get both governments and markets to really focus and work for the most vulnerable populations. And so whether it’s vaccines that we’re talking about or instant payment systems, often it is really just ensuring that government is there at the design phase, at the table, in the driver’s seat, not brought in after the pilot results come in.

It is very much focused on making sure that we make it easier for local innovators to be able to enter markets. So whether that’s aggregating low-margin demand or streamlining market entry, really making it easy to lower the cost to serve for the most vulnerable people at the edge. And then it is very much also building institutional capacity and making sure that, you know, there’s playbooks and training and all of that, but really shared infrastructure that allows sort of all boats to rise, and making sure that that infrastructure is trustworthy, is inclusive, sort of creates, I think, a really positive feedback loop. Because I loved what Nandan Nilekani expressed, which is that we, you know, really rely on institutions for trust, not on algorithms.

And I think one of the ways that institutions become trustworthy is by being inclusive and making sure that they actually serve the people that otherwise would be least likely to benefit.

Speaker 1

Yeah, absolutely. I think that’s key: how do you trust the institutions and the AI output? You know, suppose it’s coming out of an AI advisory application. Do you trust that, or do you trust the institution which is giving it in physical form? Or will the institution adopt this AI advisory so that there’s more trust in the advice itself being given? I mean, it’s quite a hybrid and risky matter, and institutions have to understand AI, adopt it, and first trust the AI output before they say that this is ours. I think that part is key to AI adoption. Bia, tell us about Brazil, you know, a very different perspective. First, let us understand: how is AI adoption in that region?

And are you also stuck in this pilot-to-production gap, and how do you see that being bridged?

Beatriz Vasconcellos

Perfect. So I think there are many different ways and perspectives to think about AI. In the Brazilian government we chose to establish a vision of one government for each person. So that means we are going fully on personalization, and even on the agentic state vision, right? So for that we need to be thinking about some shared infrastructure and shared capabilities. So what we did was start with the data. We have a project now to not just catalog but also prepare the data sets for training. We are also building some shared platforms for personalization and to understand citizens’ characteristics. So it’s within our state-owned enterprises: we have two large IT state-owned enterprises, and we are making them collaborate on a shared platform in which we have some canonical data sets about citizens, and every ministry contributes with different characteristics, and we are creating different labels for every citizen.

And then one different way in which we are trying to break the data silos, which, of course, is a very big issue, is to think about data ecosystems. So we came up with this concept, and it doesn’t mean that we’re doing data lakes. It means that we’re thinking about interoperability from a thematic perspective. So one example is the early childhood data ecosystem. We know that a lot of policies related to early childhood have different data requirements and need to use similar registries, and we’re going to look at some of these different data systems. So we created this ecosystem.

We brought together five ministries: the Ministry of Health, Education, Social Development, Management, and Human Rights. And we cataloged the policies and what kind of data would be needed. And then we started creating the standards for that specific ecosystem. So we prioritized early childhood and the environmental, land, and climate ecosystems; they’re in the same group. So we are starting with that, and it seems to be an interesting approach. It seems to be working. The other thing, coming back to the DPI discussion: it is very helpful for us to have the digital ID and authentication to implement this vision. So what we’re doing now is, well, a lot of people in the government want to do Gen AI, right?

because I think it’s the easiest and maybe most famous type of AI implementation. So a lot of government entities and ministries wanted to do their own chatbots. It was being spread all over. So what we did was also to try to centralize that capability. And we started with informational chats: what kind of policies or information would be helpful. Now we are just starting the transactional part of the chatbots. The idea is that a citizen will be able to actually complete a service request or get their service done through the chat. And that’s only possible because we have the gov.br authentication. So we know that the person is actually the right person.

And then the third step, which we still haven’t entered, but that’s the vision, is the agentic state: to build the agent specific for that person. And that will only be able to happen once we have the data platform infrastructure. So that’s more or less how we’re thinking about it.

Speaker 1

Okay. And thanks for bringing DPI into the picture, because my next question is on that. Nandan announced yesterday 100 Pathways to 2030, because it comes from a lot of experience on DPI. And Kizom, my next question is to you: you were also on the DPI journey, working with India. Do you think in AI there are rails? Like, you know, DPI lays down rails, roads, which then other countries can take. In AI, how do use cases cross borders? What are the pathways? What are the playbooks that different countries can benefit from? How do you think that can happen?

Speaker 3

Shalini, great question. And I’m assuming this room is fully aware of or is a user of digital public infrastructure. Raise your hands if you’re not. Oh my God. One or two people. It’s probably one of the reasons why I don’t. Okay, we’re not going to get into that right now. You use UPI, right? You use DigiLocker. You don’t use DigiLocker, but you use DigiYatra. No? Okay. I think you should. But you use UPI. Okay, so he’s a DPI user. And that’s the beauty of digital public infrastructure. You actually want it to be invisible. And one of the sort of design, one of the ambitions that we have as part of the AI diffusion pathways is that we actually don’t want AI to be this noisy, chaotic technology.

We want it to be so invisible because it’s actually part of your life. Part of not just our life, because obviously for us it’s very convenient: we’re English speakers, and so we are at the summit. But for a smallholder farmer, for a small micro-entrepreneur business, a woman who is crossing borders between Guinea and Sierra Leone, for example. So to go back to your question, Shalini: one, as Bia was already starting to say, as she is seeing in Brazil, and as certainly we are seeing in many parts of the world, including in India, when you have data that’s already interoperable and public rails such as identity, payments, data exchange, then the power of AI is much easier to bring to that same service you wanted to deliver, now as an AI chatbot, to a farmer on those rails. So that’s fantastic. But then we are also seeing an emergence of additional rails, and I think that’s a great point. For those of you who are from India, you probably have heard of someone using Bhashini, which is built on AI4Bharat and sort of the Indic language stack.

So that is definitely a public rail. And I know that in different parts of the world, many such rails are being created. And I hope that we see the emergence of rails, but also the convergence of rails. Because, as the French president was saying yesterday along with Honorable Prime Minister Modi, it's not that we need to do more, it's that we need to do better together. So this is where the public rails really need to come together. And then I want to recognize Celina from Zindi, here from Africa. She runs public infrastructure: a network of 100,000 data scientists across Africa. And that's already infrastructure. It's public interest, public value.

And we're at a place where we're trying to figure out what the business case is: how do we keep them sustainable by creating those innovation layers on top of the public rails that are also emerging on AI? But it's not that you have to choose between DPI and AI. The DPI principles of interoperability, modularity, reusability, of becoming a digital public good, those remain quite intact. And this is how we might see population scale, the scale towards impact.

Speaker 1

Thank you, Kizom, for explaining it so well. And actually that's happening, because it's not just language. Multilingual voice AI is becoming a DPI, because you should be able to interact in voice, and the voice stack is something that should be available for most people to build on top of. Safety, the guardrails, they can become DPI in themselves. How do you do safe conversations in agriculture? How do you do safe conversations if someone is calling up for patient care in health care? And can those conversations become a playbook in themselves? So these are the playbooks which can get created. Thank you so much for talking about it.

I'll come to you, Janet. You know the frictions which are there, right? Do you think there could be certain programs or investments that would remove such frictions? Because everybody is building the full stack: hey, I'll do language translation; hey, I need compute, I need data. So how do you remove the frictions, and do you think programs and investments can help with this?

Janet Zhou

You know, I was thinking about this question, and an example came to mind that maybe illustrates it really well: the MOSIP program, which is really an open-source platform. It's inspired by the Aadhaar program, but it is part of a larger effort with the World Bank and many other partners to take that open-source, vendor-lock-in-free national ID system and bring it to many, many countries. And when I thought about the components of that, the programming components, I know a lot of it was around ensuring there was an open, production-ready reference implementation. And maybe, if we're going to continue the road analogy, I was trying to think of what that would be.

If you have a road, you still need to pick sides of the road and agree on which side everyone's going to drive. And you have to agree that a stop sign means stop and that red means stop. And so there are still, I think, a set of programmatic standards and norms that really make it easier, not only for adoption, but for those that have adopted to then be able to benefit from that adoption. And a lot of what happened programmatically in something like MOSIP, in addition to the technical implementation, was operational support, lots of examples, countries visiting each other.

And, you know, I think India has sent many delegations to many countries to help explain their story and share their pathway. There's training that needs to happen, right? You still have to get your driver's license and prove that you know how to use it. So even after building the rails, there's still plenty of program implementation work to actually facilitate and lubricate that adoption. And, of course, financing as well, which came through the World Bank program. So there's no single silver bullet: even after the rails are set, there's still a lot of work to be done on program implementation and operational support.

Speaker 2

Thank you. Bia, what's the hardest challenge? I mean, this all sounds very easy: have diffusion pathways, go and build it. But it has to be operational, it has to be adopted. There are people, right? The human in the loop is the most important thing in AI; we can never ignore that. What's the hardest challenge that you see in this? Just one? Just one? Oh, we're lucky.

Beatriz Vasconcellos

So, about those. Obviously, it's not just creating applications. It's the same old story of digital transformation, right? Just at a different level: you've got to change the processes, the way things work. So there are maybe three interesting things that we are trying to do. And I'm not trying to sell anything; everything we are doing is still in progress, so let's see what works and what doesn't. One thing we're doing now is in the Ministry of Management, where we have a Secretariat for Shared Services, which didn't use to work with AI. The idea is that we make it very, very simple for any ministry to use a service that is centralized in the Ministry of Management.

So, for example, with these chatbots I was telling you about: we centralized the procurement and chose one vendor to provide the service and help us build the solution, whereas before each ministry was doing its own. So we said, hey, if you buy it through the centralized service, it takes just a few hours: you just need to sign a document and transfer some money digitally to the Ministry of Management, and you can use the service. You don't have to go through any procurement. So that's one way we're trying to overcome the problem of multiple solutions and difficult implementation.

We also came up with an interesting institutional arrangement. When we're talking about AI, we're talking about innovation and new capabilities, and we're building innovation capabilities through the Ministry of Management. That means they're building the whole process of how you first come up with a policy goal, what the AI project is going to target, and how you experiment. They built the process for experimenting. They have analysts looking at the data and seeing if things are working. So that's something that seems to be working well, and we think it's going to be good. The other real challenge we have, I think, is with the vendors. And I'm using my development hat here, from my previous background.

Everyone is talking about AI and how every agency and ministry needs to be doing something on AI. And obviously there are some big vendors saying: you, government, don't have the capabilities; we have the capabilities, we can do it very fast, we do it at scale. And if you keep making those decisions day after day, you're not going to build any capabilities; you'll just outsource. I use an analogy with, for example, the army: no one thinks it's reasonable to outsource your army to a country that has a stronger or better army. But in digital we're doing it every day, for every decision: oh, this company does it better, so we're just going to outsource. And there are some essential capabilities at stake. It's not just an AI tool; we're playing with national data, and we have some very strategic goals too. So I think we have to think about building these capabilities. Even if you start small and it takes a while, we've got to build the muscles. So we're also trying to incentivize the agencies to test and experiment and not buy prepackaged solutions, because we've got to build our own muscles.

Speaker 2

Yeah, I think you brought up a very valid point, which is what a lot of people are scared of: vendor lock-in. Oh, we're going to have to do this, and we're going to have to do this. And you would have seen Amul AI, which was launched by the Prime Minister and which the EkStep Foundation made possible. One key thing there was how you keep it multi-model: multiple models should be able to do it, why just one? And that's been a key thing: how do you give choice to people, how do you keep them from being locked into the system? Because that's where diffusion works.

Diffusion is not about taking concentrated Western LLMs and just deploying them. It's about actually walking the path: give choice and replaceability, have domain knowledge, keep your data with you, because the data sits in our enterprise systems and we don't want, you know, others just learning from it. How can you separate that? So it's about actually doing this, and now this know-how that we have, we want to share with everybody. And that playbook is a diffusion pathway. That's exactly it, and that gives an example of it. Kizom, you and I co-authored a paper, which is up at the Atlantic Council, and we also talk about the use case adoption framework. Would you like to tell people about the use case adoption framework and how it can be a friction remover?

Speaker 3

Oh, absolutely. And I'm looking for the key author of the use case adoption framework, Tanvi Lal, director at People Plus AI at the EkStep Foundation. So, you know, when we were preparing for the AI Impact Summit many months ago, which feels like many years ago, we started with this idea: adoption is proving to be a challenge; what are we learning from our experience? And this is where EkStep looked at Mahavish Star, its work with AI4Bharat, and its ongoing conversations with Anthropic and other private sector companies on safety tooling. And I did the same across a number of countries, and I think together we consulted 20-plus countries and held convenings from South Africa to New York to, I don't know, many, many more places, along with the Gates Foundation as well.

And what we learned was that the impact of a technology like artificial intelligence sits in sectors, so education, health, climate change, but its ability to move from pilot to scale depends on the horizontal unlocks. So underpinning these 100 AI diffusion pathways is a framework that we call the AI adoption framework, the use case adoption framework. We see impact in sectors, where you need contextual data, contextual knowledge, processes, workflows, things that have to change in a department of education, a department of health, and so on. But then the horizontal unlocks are language, data, compute. Generally, how do you make data AI-ready? Or how do you make data interoperable, because a farmer is going to be buying things, selling things, getting public services?

So we have to think about it from a user's life perspective. This is really, I think, a bit about the use case adoption framework that we've built together with countries, the Gates Foundation, and EkStep. And we hope that this helps us ground our 100 AI diffusion pathways. Because, as Shalini was saying, this is not about just going and saying: I have the solution, you adopt it. We're not going to see that impact with that approach. We'll have to co-design the pathways. We'll have to fuse verticals and horizontals. And this is where, at least when I talk to many innovators and private sector companies in the Global South, I see them saying: aha, this is how we co-architect the future.

This is where, when we develop a voice optimization solution as a public good, it goes out to the world. We are builders of the future too. So it's just such a powerful learning that we've put together into these 100 AI diffusion pathways towards impact.

Speaker 2

Thank you. Thank you, Kizom. I'm looking at the time, and I would like to take two questions from the audience. So please raise your hand if… yeah. I saw yours first. Okay. Would anybody like to take it?

Speaker 3

Yeah, yeah. I was so distracted by the crowd that's coming in; we're getting kicked out, guys. So I think your question was how to address diversity in diffusion. But if you can't read, can you hear? Because this is where I think voice adoption is key to the inclusion agenda and the impact agenda of AI. So I would say, to answer your question: voice adoption.

Speaker 2

Yeah, actually that's why AI becomes more of an equalizer, and it actually bridges the divide, right? So there are inequalities, but bringing a new language into a model has become fairly easy today. There is data locked in PDFs across various regions, and people don't realize that today that's become easier. So that's how it is a leveler. That's a trusted source. So I'll maybe talk to you later about what Mr. Saurabh Garg talked about and how evidence fits in. One last question, yeah.

I think you're talking about a pivotal moment, right? I am not a fortune teller, but what I can do is understand the AI ecosystem. I think multilinguality can be one very big change, because it draws people in. What is change about? Change is always about people. When UPI was initially talked about, the banks said: I have to change my whole system for this. But its user-friendliness, and the fact that it's so easy to deploy and for people to use, is what drew them in. So any AI moment that draws people in, through interoperability and usability, will itself become that moment. Has it happened? No. Can it happen? Yes. And multilinguality is one of the candidates, but we have to see how it pans out.

Okay, thank you so much. Thank you very much. We have been kicked out of the room. A great panel. Thank you, bye.

Speaker 1

Thank you, everyone, for joining us and sharing your thoughtful views. On behalf of the India AI team, we would like to offer a souvenir with our sincere thanks. Thank you so much. Thank you. Thank you.


Speaker 1

Speech speed

120 words per minute

Speech length

647 words

Speech time

323 seconds

AI rails as digital public infrastructure

Explanation

Speaker 1 asks whether AI can be built on underlying rails similar to digital public infrastructure, suggesting that invisible, modular rails would let AI services be layered on everyday digital systems. This frames AI as a set of reusable public utilities.


Evidence

“Do you think in AI, there are rails like, you know, DPI lays down rails, roads, which then other countries can take.” [10]. “Thank you Kizum for explaining it so well and actually that’s happening because not just the language the multilinguality voice AI that is becoming a DPI because you should be able to interact in voice and the voice stack is something which should be available for most of the people to build on top of it safety, the guardrails they are DPI in itself they can become DPI in itself how do you do safety safe conversations in agriculture So how do you do safe conversations if someone is calling up for patient care in health care?” [22]


Major discussion point

AI as a Digital Public Infrastructure (DPI) and foundational resources


Topics

Information and communication technologies for development | Closing all digital divides



Saurabh Garg

Speech speed

130 words per minute

Speech length

866 words

Speech time

397 seconds

AI as a digital public infrastructure (DPI)

Explanation

Garg frames AI itself as a possible digital public infrastructure that must be trusted, interoperable and shareable, drawing parallels with Aadhaar and UPI. He emphasizes the need for a coordinated platform (METRI) to enable this vision.


Evidence

“We’re talking of AI being a possible DPI, a digital public infrastructure.” [1]. “I think those are aspects which a DPI like Aadhaar or UPI has and I think we are still in early days but the mechanisms for that… we can ensure that it would be possible and given that we talk of four resources as foundational AI resources, compute, data sets, talent and models…” [2]. “and I suppose for that some steps would be needed to ensure that it becomes trusted, interoperable and shareable.” [16]. “a platform has been suggested going forward, which has been named as METRI.” [15]


Major discussion point

AI as a Digital Public Infrastructure (DPI) and foundational resources


Topics

Artificial intelligence | Information and communication technologies for development


Four foundational AI resources must be democratized

Explanation

Garg identifies compute, data sets, talent and models as the four core resources for AI and argues they need to be democratized through coordinated mechanisms such as the METRI platform.


Evidence

“four resources as foundational AI resources, compute, data sets, talent and models…” [2]. “how we can develop on the foundational AI resources of availability of compute, data sets, models and talent.” [3]. “a platform has been suggested going forward, which has been named as METRI.” [15]


Major discussion point

AI as a Digital Public Infrastructure (DPI) and foundational resources


Topics

Artificial intelligence | Enabling environment for digital development


AI‑ready data standards

Explanation

Garg outlines four criteria for AI‑ready data—discoverability, trustworthiness, interoperability, and usability—highlighting the need for metadata, quality assessments, unique identifiers and standard classifications.


Evidence

“AI‑ready data must be discoverable… one is discoverable… second is trustworthiness… third is interoperability… fourth is usability across systems…” [23]


Major discussion point

Data readiness, interoperability, and standards


Topics

Data governance | Artificial intelligence



Speaker 2

Speech speed

122 words per minute

Speech length

1028 words

Speech time

504 seconds

Multilingual and voice AI as an equalizer

Explanation

Speaker 2 argues that multilingual and voice capabilities make AI a powerful equalizer, lowering language barriers and bringing underserved populations into AI‑driven services.


Evidence

“yeah actually that’s why AI becomes more equalizer and it actually bridges the divide… bringing a new language into a model has become fairly easy… multilinguality can be one very big change because it draws people in…” [29]


Major discussion point

Inclusion, multilingual/voice AI, and avoiding vendor lock‑in


Topics

Closing all digital divides | Artificial intelligence


Preventing vendor lock‑in through open multi‑model ecosystems

Explanation

Speaker 2 warns against vendor lock‑in and calls for ecosystems that support multiple AI models, giving users choice and avoiding dependence on a single provider.


Evidence

“what a lot of people are scared of is a vendor lock‑in… how do you keep it multi‑model like multiple models should be able to do it why just one… give choice to people…” [33]


Major discussion point

Inclusion, multilingual/voice AI, and avoiding vendor lock‑in


Topics

Enabling environment for digital development | Artificial intelligence



Speaker 3

Speech speed

143 words per minute

Speech length

1512 words

Speech time

631 seconds

100 AI diffusion pathways and co‑architecting cross‑border solutions

Explanation

Speaker 3 proposes defining 100 AI diffusion pathways, co‑architected across borders, to move from pilots to scalable solutions for smallholders and entrepreneurs, integrating language, voice and sector‑specific needs.


Evidence

“when we talk about 100 AI diffusion pathways, it is about co‑architecting pathways where we look at how do we bring not just language data, but voice adoption into solutions that a smallholder farmer can use… scale to millions of farmers, go across national boundaries…” [14]. “we hope that this helps us ground our 100 AI diffusion pathways…” [36]. “this 100 AI diffusion pathways, underpinning that is this framework that we call… the AI adoption framework, the use case adoption framework.” [38]


Major discussion point

Diffusion pathways and the “100 pathways to 2030” agenda


Topics

Artificial intelligence | Data governance | Social and economic development


Digital rails principles for AI sustainability

Explanation

Speaker 3 emphasizes that AI should be built on digital public rails that are interoperable, modular and reusable, mirroring the success of existing DPI such as UPI and DigiLocker.


Evidence

“how do we still make them sustainable by creating those innovation layers on top of the public rails… DPI principles of interoperability, modularity, reusability becoming a digital public good…” [4]. “when you have data that’s already interoperable and public rails such as identity payments, data exchange then the power of AI is much more easier to bring to that same service…” [31]


Major discussion point

AI as a Digital Public Infrastructure (DPI) and foundational resources


Topics

Information and communication technologies for development | Artificial intelligence


Voice adoption as inclusion strategy

Explanation

Speaker 3 highlights voice AI as a key inclusion tool, enabling safe, accessible interactions for agriculture, health and other sectors, especially for users with limited literacy or connectivity.


Evidence

“because I think voice adoption is something that is key to the inclusion agenda and the impact agenda of AI…” [45]. “when we develop a voice optimization solution as a public good, that goes out to the world.” [100]


Major discussion point

Inclusion, multilingual/voice AI, and avoiding vendor lock‑in


Topics

Closing all digital divides | Artificial intelligence


Data interoperability for sectoral use

Explanation

Speaker 3 stresses the need for interoperable data to support farmers and other sector actors, and calls for breaking data silos in the Global South to unlock AI potential.


Evidence

“Or how do you make data interoperable because a farmer is going to be buying things, selling things, getting public services?” [66]. “How do you break data silos, even though the global south is so rich in data?” [68]


Major discussion point

Data readiness, interoperability, and standards


Topics

Data governance | Artificial intelligence



Janet Zhou

Speech speed

154 words per minute

Speech length

715 words

Speech time

277 seconds

Pilotitis predates AI; need for government at design phase

Explanation

Janet Zhou notes that the chronic problem of "pilotitis" existed before AI, and that governments must be at the design table from the start for AI solutions to scale effectively.


Evidence

“the problem of pilotitis is actually one that sort of predates AI.” [76]. “often it is really just ensuring that government is there at the design phase, at the table, in the driver’s seat, not brought in after the pilot results come in.” [84]


Major discussion point

Scaling AI from pilots to production (overcoming “pilotitis”)


Topics

Capacity development | Enabling environment for digital development


MOSIP as open‑source digital ID infrastructure with operational support

Explanation

Janet Zhou describes MOSIP as an open‑source digital ID platform that provides not only technical code but also operational support, standards and norms, serving as a model for trustworthy public infrastructure.


Evidence

“a lot of when I think about programmatically what has happened in something like MOSIP is, in addition to the technical implementation, there was a lot of operational support…” [70]. “I was thinking about the MOSIP program, which is really kind of an open source platform.” [71]. “Are still, I think, a set of programmatic standards and norms.” [74]


Major discussion point

Data readiness, interoperability, and standards


Topics

Data governance | Information and communication technologies for development


Building trustworthy, inclusive shared infrastructure

Explanation

Janet Zhou argues that shared infrastructure must be trustworthy and inclusive, with institutional capacity building and inclusive design to serve vulnerable populations.


Evidence

“it is very much also building institutional capacity and making sure that, you know, it’s not there’s playbooks and training and all of that, but really shared infrastructure that allow sort of all boats to rise and making sure that that infrastructure is trustworthy, is inclusive…” [85]. “one of the ways that institutions become trustworthy is by being inclusive and making sure that they actually serve the people that otherwise, would be less to benefit.” [86]


Major discussion point

Scaling AI from pilots to production (overcoming “pilotitis”)


Topics

Capacity development | Enabling environment for digital development



Beatriz Vasconcellos

Speech speed

154 words per minute

Speech length

1218 words

Speech time

474 seconds

Shared infrastructure and centralized procurement to overcome pilotitis

Explanation

Beatriz describes a centralized procurement model within a Ministry of Management that simplifies AI adoption for ministries, reducing friction and helping move pilots to production at scale.


Evidence

“we came up, we’ve centralized the procurement, and we chose one that was going to be a service that was going to be a service vendor to help us build the solution.” [90]. “make it very, very simple for any ministry to use a service that is centralized in the Ministry of Management.” [91]. “if you buy it through the centralized service, you only need… you can use the service.” [92]. “that’s one way that we’re trying to overcome the problem of multiple solutions and difficult implementation.” [95]


Major discussion point

Scaling AI from pilots to production (overcoming “pilotitis”)


Topics

Enabling environment for digital development | Capacity development


Building domestic AI capability and avoiding over‑reliance on external vendors

Explanation

Beatriz stresses the strategic need to develop national AI capabilities rather than outsourcing, likening it to not outsourcing a nation’s army, and highlights challenges with vendor dependence.


Evidence

“no one thinks that it’s reasonable to outsource your army to a country that has a stronger army… there are some essential capabilities… we have to build the muscles…” [28]. “The other real challenge that we have, I think, is with the vendors.” [106]


Major discussion point

Inclusion, multilingual/voice AI, and avoiding vendor lock‑in


Topics

Enabling environment for digital development | Artificial intelligence


Shared platforms for personalization and data ecosystem creation

Explanation

Beatriz mentions building shared platforms to personalize services and creating a data ecosystem that harmonizes multiple data systems across sectors.


Evidence

“We are also building some shared platforms for personalization and to understand citizens’ characteristics.” [49]. “we’re going to look at some of these different data systems… we created this ecosystem.” [64]


Major discussion point

Data readiness, interoperability, and standards


Topics

Data governance | Enabling environment for digital development


Agreements

Agreement points

AI should become digital public infrastructure with DPI principles

Speakers

– Saurabh Garg
– Speaker 3

Arguments

AI should become a trusted, interoperable and shareable digital public infrastructure like Aadhaar or UPI


DPI principles of interoperability, modularity, and reusability remain intact for AI and can enable population-scale impact


Summary

Both speakers agree that AI should be developed as digital public infrastructure, maintaining the core principles of interoperability, modularity, and reusability that have made existing DPIs successful


Topics

Information and communication technologies for development | Artificial intelligence


Foundational resources are critical for AI democratization

Speakers

– Saurabh Garg
– Speaker 3

Arguments

Four foundational AI resources needed: compute, datasets, talent, and models, requiring democratization mechanisms


Constraints on access to foundational resources like compute and data limit AI adoption in developing countries


Summary

Both speakers identify that access to foundational AI resources (compute, data, talent, models) is essential for democratization and that constraints on these resources limit adoption in developing countries


Topics

Artificial intelligence | Closing all digital divides | Financial mechanisms


Data quality and interoperability are fundamental for AI success

Speakers

– Saurabh Garg
– Beatriz Vasconcellos

Arguments

AI-ready data must be discoverable, trustworthy, interoperable, and usable across systems with proper standardization


Data ecosystems approach breaks silos by creating thematic interoperability across ministries for specific policy areas


Summary

Both speakers emphasize the critical importance of data interoperability and quality, with Garg focusing on AI-ready data characteristics and Vasconcellos implementing thematic data ecosystems to break silos


Topics

Data governance | Artificial intelligence


Government involvement from design phase is crucial for scaling

Speakers

– Janet Zhou
– Beatriz Vasconcellos

Arguments

The problem of “pilotitis” predates AI, requiring government involvement from design phase and focus on vulnerable populations


Centralized procurement and shared services can overcome implementation barriers and prevent fragmented solutions


Summary

Both speakers agree that government must be involved from the beginning of AI implementation rather than after pilots, with Zhou emphasizing design phase involvement and Vasconcellos implementing centralized approaches


Topics

The enabling environment for digital development | Artificial intelligence | Capacity development


Local context and capabilities are essential for AI relevance

Speakers

– Saurabh Garg
– Beatriz Vasconcellos

Arguments

Local linguistic and cultural contexts embedded in datasets determine AI solution relevance and usefulness


Countries must build internal AI capabilities rather than outsourcing all digital decisions to external vendors


Summary

Both speakers stress the importance of local context – Garg through linguistic and cultural datasets, and Vasconcellos through building domestic capabilities rather than relying on external vendors


Topics

Artificial intelligence | Capacity development | Closing all digital divides


Similar viewpoints

Both speakers emphasize the importance of leveraging existing infrastructure and focusing on vulnerable/marginalized populations to achieve meaningful AI impact and scaling

Speakers

– Speaker 3
– Janet Zhou

Arguments

AI can leverage existing DPI rails like identity, payments, and data exchange to reach vulnerable populations more easily


Successful scaling requires making it easier for local innovators to enter markets and lowering costs to serve marginalized communities


Topics

Information and communication technologies for development | Artificial intelligence | Closing all digital divides


Both speakers agree that trust in AI comes through institutions rather than algorithms themselves, requiring strong institutional capacity and understanding

Speakers

– Speaker 1
– Janet Zhou

Arguments

AI adoption requires institutional trust rather than algorithmic trust, with institutions needing to understand and adopt AI before endorsing its outputs


Building institutional capacity and shared infrastructure creates positive feedback loops for trustworthy, inclusive systems


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development


Both speakers see multilingual and voice capabilities as key to making AI more inclusive and accessible, particularly for bridging digital divides

Speakers

– Speaker 2
– Speaker 3

Arguments

AI can serve as an equalizer by making new language integration easier and more accessible


Voice adoption and multilingual capabilities are key to AI’s inclusion agenda and bridging digital divides


Topics

Artificial intelligence | Closing all digital divides


Unexpected consensus

Need for domestic AI capability building over outsourcing

Speakers

– Beatriz Vasconcellos
– Speaker 1

Arguments

Countries must build internal AI capabilities rather than outsourcing all digital decisions to external vendors


Multi-model approaches and choice are essential for preventing vendor lock-in and enabling true AI diffusion


Explanation

Unexpected strong consensus on avoiding vendor dependency, with both speakers emphasizing the strategic importance of maintaining domestic control over AI capabilities and preventing lock-in to external solutions


Topics

Capacity development | Artificial intelligence | The enabling environment for digital development


AI as invisible, user-friendly technology

Speakers

– Speaker 3
– Beatriz Vasconcellos

Arguments

DPI principles of interoperability, modularity, and reusability remain intact for AI and can enable population-scale impact


Brazil is implementing a “one government for each person” vision through personalized AI services and agentic state concepts


Explanation

Unexpected alignment on AI becoming invisible infrastructure that seamlessly serves users, with Speaker 3 wanting AI to be invisible like DPI and Vasconcellos implementing highly personalized government services


Topics

Information and communication technologies for development | Artificial intelligence | Social and economic development


Overall assessment

Summary

Strong consensus emerged around AI as digital public infrastructure, the democratization of foundational resources, data interoperability, government involvement from the design phase, preservation of local context, and avoiding vendor lock-in


Consensus level

High level of consensus with complementary perspectives rather than disagreements. The speakers represented different regions (India, Kenya/Italy, Brazil, global development) but shared remarkably similar views on core AI diffusion principles, suggesting these insights may represent emerging global best practices for AI implementation in the public interest


Differences

Different viewpoints

Approach to building AI capabilities – centralized vs distributed

Speakers

– Beatriz Vasconcellos
– Speaker 3

Arguments

Centralized procurement and shared services can overcome implementation barriers and prevent fragmented solutions


Constraints on access to foundational resources like compute and data limit AI adoption in developing countries


Summary

Beatriz advocates for centralized government control and shared services to prevent fragmentation, while Speaker 3 focuses on democratizing access to foundational resources across stakeholders


Topics

Capacity development | The enabling environment for digital development


Vendor dependency and capability building strategy

Speakers

– Beatriz Vasconcellos
– Janet Zhou

Arguments

Countries must build internal AI capabilities rather than outsourcing all digital decisions to external vendors


Successful scaling requires making it easier for local innovators to enter markets and lowering costs to serve marginalized communities


Summary

Beatriz strongly opposes vendor dependency and emphasizes building domestic capabilities, while Janet focuses on enabling local innovators through market mechanisms rather than avoiding external partnerships


Topics

Capacity development | The digital economy | The enabling environment for digital development


Trust mechanism in AI adoption

Speakers

– Speaker 1
– Janet Zhou

Arguments

AI adoption requires institutional trust rather than algorithmic trust, with institutions needing to understand and adopt AI before endorsing its outputs


Building institutional capacity and shared infrastructure creates positive feedback loops for trustworthy, inclusive systems


Summary

Speaker 1 emphasizes that institutions must first understand and trust AI before endorsing it, while Janet argues that institutions become trustworthy through inclusive practices and shared infrastructure


Topics

Human rights and the ethical dimensions of the information society | Capacity development


Unexpected differences

Role of external vendors in AI development

Speakers

– Beatriz Vasconcellos
– Janet Zhou

Arguments

Countries must build internal AI capabilities rather than outsourcing all digital decisions to external vendors


Operational support, training, and country-to-country knowledge sharing are crucial for successful AI adoption


Explanation

Unexpected because both speakers support capacity building, but Beatriz takes a surprisingly strong digital-sovereignty stance against vendor partnerships while Janet advocates for international collaboration and knowledge sharing


Topics

Capacity development | The enabling environment for digital development


Data governance approach

Speakers

– Saurabh Garg
– Beatriz Vasconcellos

Arguments

AI-ready data must be discoverable, trustworthy, interoperable, and usable across systems with proper standardization


Data ecosystems approach breaks silos by creating thematic interoperability across ministries for specific policy areas


Explanation

Unexpected because both are government officials advocating for data interoperability, but they propose fundamentally different architectural approaches – universal standardization vs thematic ecosystems


Topics

Data governance | Information and communication technologies for development


Overall assessment

Summary

The main areas of disagreement center around centralization vs democratization of AI resources, the role of external vendors vs domestic capability building, and different approaches to data governance and trust mechanisms


Disagreement level

Moderate disagreement with significant implications – while speakers share common goals of AI diffusion and inclusion, their different approaches could lead to incompatible implementation strategies. The disagreements reflect broader tensions between national sovereignty and international cooperation in AI development, and between centralized control and distributed innovation models.


Partial agreements

Partial agreements

All agree on the importance of DPI principles for AI, but differ on implementation – Saurabh focuses on data readiness and standardization, Speaker 3 emphasizes leveraging existing rails, while Beatriz prioritizes government-controlled personalized services

Speakers

– Saurabh Garg
– Speaker 3
– Beatriz Vasconcellos

Arguments

AI should become a trusted, interoperable and shareable digital public infrastructure like Aadhaar or UPI


DPI principles of interoperability, modularity, and reusability remain intact for AI and can enable population-scale impact


Digital ID and authentication systems are essential for implementing personalized AI government services


Topics

Information and communication technologies for development | Artificial intelligence


Both recognize the pilot-to-scale challenge but propose different solutions – Janet emphasizes government involvement from design phase and institutional capacity, while Speaker 3 focuses on horizontal infrastructure unlocks and cross-sectoral capabilities

Speakers

– Janet Zhou
– Speaker 3

Arguments

The problem of ‘pilotitis’ predates AI, requiring government involvement from design phase and focus on vulnerable populations


AI impact occurs in sectors but scaling depends on horizontal unlocks like language, data, and compute capabilities


Topics

Artificial intelligence | Capacity development | Closing all digital divides


Both agree on the need for interoperable data systems but differ in approach – Saurabh advocates for broad standardization across systems, while Beatriz prefers thematic ecosystems focused on specific policy areas

Speakers

– Saurabh Garg
– Beatriz Vasconcellos

Arguments

AI-ready data must be discoverable, trustworthy, interoperable, and usable across systems with proper standardization


Data ecosystems approach breaks silos by creating thematic interoperability across ministries for specific policy areas


Topics

Data governance | Information and communication technologies for development


Takeaways

Key takeaways

AI should evolve into a Digital Public Infrastructure (DPI) similar to Aadhaar or UPI, requiring trust, interoperability, and shareability to enable widespread diffusion


Four foundational AI resources need democratization: compute, datasets, talent, and models, with data being particularly critical as the raw material for AI models


The gap between AI invention (happening in the West) and impact (needed globally) can be bridged through 100 AI diffusion pathways by 2030, focusing on use cases in the Global South


AI-ready data must be discoverable, trustworthy, interoperable, and usable across systems, incorporating local linguistic and cultural contexts


Scaling AI from pilots to production requires government involvement from the design phase, focus on vulnerable populations, and building institutional capacity


Voice adoption and multilingual capabilities are key to making AI more inclusive and bridging digital divides


Countries need to build internal AI capabilities rather than relying entirely on external vendors to avoid dependency and maintain strategic autonomy


A Use Case Adoption Framework shows that AI impact occurs in sectors but scaling depends on horizontal unlocks like language, data, and compute capabilities


Resolutions and action items

Implementation of METRI (Multi-stakeholder AI for Resilient and Trustworthy Infrastructure) platform for democratizing AI resources on a voluntary, modular basis


Development of 100 AI diffusion pathways by 2030 as a collaborative initiative between countries


Brazil’s implementation of data ecosystems approach starting with early childhood and environmental/climate ecosystems involving multiple ministries


Centralization of AI chatbot capabilities in Brazil through the Ministry of Management’s Secretariat for Shared Services


Creation of shared platforms for citizen personalization and canonical datasets across government ministries


Publication of Use Case Adoption Framework paper in Atlantic Council as a resource for other countries


Unresolved issues

How to ensure sustainable business models for public AI infrastructure while maintaining accessibility


Specific mechanisms for cross-border collaboration and knowledge sharing between countries implementing AI diffusion pathways


Detailed technical standards and protocols needed for AI interoperability across different systems and countries


How to balance vendor partnerships with building internal capabilities without completely avoiding external expertise


Measurement and evaluation frameworks to assess the success of AI diffusion pathways


Funding mechanisms and resource allocation for developing countries to access foundational AI resources


Suggested compromises

Multi-model approach to avoid vendor lock-in while still leveraging external AI capabilities (as demonstrated in the Amul AI example)


Hybrid approach combining centralized shared services with ministry-specific customization to balance efficiency and autonomy


Voluntary, non-commitment basis for international AI resource sharing through METRI platform to encourage participation without binding obligations


Starting small with internal capability building while gradually reducing dependence on external vendors over time


Co-architecting approach where Global South countries become builders of AI solutions rather than just adopters of Western innovations


Thought provoking comments

AI is perhaps something like a solution in search of a problem. So unless and until we find use cases for that, it will not be able to give the value that it potentially can

Speaker

Saurabh Garg


Reason

This comment reframes the entire AI discussion by highlighting a fundamental challenge – that AI technology exists but lacks clear problem-solution fit. It’s insightful because it shifts focus from technical capabilities to practical utility and user needs.


Impact

This comment set the foundational tone for the entire discussion, establishing that the conversation should focus on practical applications rather than theoretical possibilities. It influenced subsequent speakers to emphasize real-world use cases and implementation challenges.


We rely on institutions for trust, not on algorithms… one of the ways that institutions become trustworthy is by being inclusive and making sure that they actually serve the people that would otherwise be less likely to benefit

Speaker

Janet Zhou


Reason

This insight addresses a critical aspect of AI adoption – the trust paradox. It’s thought-provoking because it suggests that AI success depends not on algorithmic sophistication but on institutional credibility and inclusivity.


Impact

This comment shifted the discussion from technical implementation to governance and trust frameworks. It influenced other speakers to discuss how governments and institutions need to be involved from the design phase, not just implementation.


We don’t want AI to be this noisy, chaotic technology. We want it to be so invisible because it’s actually part of your life… for a smallholder farmer, for a small micro-entrepreneur business

Speaker

Speaker 3 (Kizom)


Reason

This comment provides a profound vision of successful AI diffusion – that true success means invisibility and seamless integration into daily life, especially for marginalized populations. It challenges the notion that AI should be a prominent, visible technology.


Impact

This perspective reoriented the discussion toward user experience and accessibility. It reinforced the focus on making AI serve the most vulnerable populations and influenced the conversation about digital public infrastructure as invisible rails.


I use an analogy with the army – no one thinks that it’s reasonable to outsource your army to a country that has a stronger army… but in terms of digital we’re doing it every day

Speaker

Beatriz Vasconcellos


Reason

This analogy is striking because it frames digital sovereignty in terms of national security, challenging the common practice of outsourcing critical digital capabilities. It’s thought-provoking because it questions fundamental assumptions about technology procurement.


Impact

This comment introduced a new dimension to the discussion – digital sovereignty and capability building. It shifted the conversation from just adoption to strategic autonomy, influencing discussion about building internal capabilities rather than relying on external vendors.


Data is the raw material for AI models… data needs to be AI ready… discoverable, trustworthy, interoperable, and usable across systems

Speaker

Saurabh Garg


Reason

This comment provides a systematic framework for thinking about data preparation for AI, moving beyond generic discussions to specific, actionable requirements. It’s insightful because it breaks down a complex challenge into manageable components.


Impact

This structured approach to data readiness became a recurring theme throughout the discussion, with other speakers building on these four pillars (discoverable, trustworthy, interoperable, usable) when discussing their own implementation experiences.


The impact of what technology like artificial intelligence can do sits in sectors… but its ability to move from pilot to scale depends on the horizontal unlocks

Speaker

Speaker 3 (Kizom)


Reason

This insight distinguishes between where AI creates value (vertically in sectors) versus what enables its scaling (horizontal infrastructure). It’s thought-provoking because it provides a framework for understanding why many AI initiatives remain stuck in pilot phase.


Impact

This framework helped structure the latter part of the discussion, with speakers using this vertical/horizontal distinction to explain their approaches to scaling AI solutions and building shared infrastructure.


Overall assessment

These key comments fundamentally shaped the discussion by establishing several critical frameworks: the problem-solution fit challenge, the trust-institution relationship, the invisibility goal for successful AI, the digital sovereignty imperative, the data readiness requirements, and the vertical-horizontal scaling model. Together, these insights moved the conversation from abstract AI potential to concrete implementation challenges, from technical capabilities to governance and trust issues, and from isolated solutions to systemic infrastructure thinking. The discussion evolved from a technology-centric view to a human-centric, institution-focused approach that emphasized practical diffusion pathways and digital sovereignty. The speakers built upon each other’s frameworks, creating a comprehensive view of AI diffusion that balances innovation with inclusion, technical capability with institutional trust, and global collaboration with local autonomy.


Follow-up questions

How can AI models be made more efficient and less compute and energy intensive?

Speaker

Saurabh Garg


Explanation

This is crucial for democratizing AI access, especially for developing countries with limited computational resources. Making models lighter would improve diffusion significantly.


What is the business case for data centers and GPUs on the African continent?

Speaker

Speaker 3 (Kizom)


Explanation

Understanding the economic viability of AI infrastructure in Africa is essential for sustainable AI adoption and preventing the continent from being left behind in the AI revolution.


How do you break data silos in the Global South, despite the region being rich in data?

Speaker

Speaker 3 (Kizom)


Explanation

Data silos prevent effective AI implementation. Finding ways to make data interoperable and accessible is critical for AI diffusion in developing regions.


How can institutions come to trust AI outputs before endorsing them to citizens?

Speaker

Speaker 1 (moderator)


Explanation

This addresses the critical challenge of institutional adoption of AI systems, which is necessary before public trust can be established.


How can countries avoid vendor lock-in while building AI capabilities?

Speaker

Beatriz Vasconcellos


Explanation

This is essential for maintaining sovereignty and building indigenous AI capabilities rather than becoming dependent on foreign vendors.


How do you ensure multi-model choice and replaceability in AI systems?

Speaker

Speaker 2 (Shalini)


Explanation

Providing choice and avoiding concentration in Western LLMs is important for true AI diffusion and preventing monopolistic control.


How can voice adoption address diversity and inclusion in AI diffusion?

Speaker

Speaker 3 (Kizom)


Explanation

Voice technology can bridge literacy gaps and make AI accessible to populations who cannot read, making AI a true equalizer.


What are the mechanisms for ensuring AI becomes trusted, interoperable and shareable as a DPI?

Speaker

Saurabh Garg


Explanation

Understanding how to transform AI into digital public infrastructure requires specific mechanisms that haven’t been fully developed yet.


How can the METRI platform for democratizing AI resources be effectively implemented?

Speaker

Saurabh Garg


Explanation

The proposed multi-stakeholder platform needs further development to understand how it will work in practice on a voluntary, non-commitment basis.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.