Collaborative AI Network – Strengthening Skills Research and Innovation

20 Feb 2026 12:00h - 13:00h

Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel opened by framing AI diffusion as a potential digital public infrastructure (DPI) that must first demonstrate concrete use cases before delivering value [12-13]. Saurabh Garg argued that for AI to become a trusted, interoperable DPI similar to Aadhaar or UPI, four foundational resources (compute, data sets, talent and models) need to be democratized and governed by appropriate frameworks [14-15]. He outlined four criteria for “AI-ready” data (discoverability, trustworthiness, interoperability and usability) while emphasizing privacy-preserving access and the importance of locally relevant datasets [15-18]. To coordinate this effort, the AI summit’s working group proposed a voluntary, modular platform called METRI (Multi-Stakeholder AI for Resilient and Trustworthy Infrastructure) to foster shared development of these resources [23-28].


Building on that, Speaker 2 highlighted that AI, like earlier general-purpose technologies, was invented in the West but must be diffused through the Global South via 100 targeted pathways by 2030, with Kizom acting as a key partner [46-54]. She questioned how such pathways could be operationalized across sectors, noting the gap between invention and impact [39-41].


Representing the UN Development Programme, Speaker 3 described the G7 AI Hub as a mechanism to unlock compute, data and talent for low-income regions, stressing the need to build business cases for local data centers and to retain diaspora talent for co-architecting solutions for farmers and women entrepreneurs [61-68]. Janet Zhou warned that “pilotitis” hampers scale and argued that lasting impact requires governments to be involved from design through implementation, creating trustworthy, inclusive institutions that lower market entry costs for innovators [75-86]. Brazil’s Beatriz Vasconcellos illustrated a national approach that creates shared data ecosystems, standardised early-childhood and environmental datasets, and a centralized chatbot platform built on digital ID to move from pilot to transactional services [95-131].


Kizom later explained that existing digital public rails such as UPI, DigiLocker and language stacks like Bhashani enable AI services to become “invisible” and seamlessly integrated into everyday workflows, while new rails are emerging to support multilingual voice interactions [140-158]. She cited the MOSIP open-source ID platform as an example of how technical standards, operational support and financing combine to lubricate adoption of public-good infrastructure across countries [178-193]. Participants agreed that avoiding vendor lock-in and fostering modular, multi-model solutions are essential, and highlighted a jointly developed “use-case adoption framework” that maps vertical sector needs to horizontal data and compute enablers to guide the 100 diffusion pathways [231-236][240-247].


The discussion concluded that scaling AI in the Global South will depend on building trustworthy digital public infrastructure, democratizing core resources, and institutionalising collaborative, standards-based pathways that move pilots to production at population scale [14-15][75-86][240-247].


Keypoints

Major discussion points


AI must be treated as a Digital Public Infrastructure (DPI) and democratized through shared foundational resources.


Saurabh Garg emphasized that AI will only deliver value once concrete use-cases are identified and that, like Aadhaar or UPI, AI needs to become a trusted, interoperable, and shareable public good built on four core resources – data, compute, talent, and models [12-15]. He outlined the need for “AI-ready” data that is discoverable, trustworthy, interoperable and usable, and introduced the METRI platform as a modular, voluntary framework to develop these resources [16-18][23-29].


Making data “AI-ready” is a prerequisite for diffusion.


The speaker detailed four criteria for data readiness: discoverability via common metadata, quality-based trustworthiness, technical interoperability through unique identifiers, and usability enabled by international standards [15-18]. He also stressed privacy safeguards while ensuring data remains locally relevant and can drive context-specific AI solutions [17-22].


Existing digital public rails (e.g., UPI, digital IDs, DigiLocker) are the backbone for scaling AI across sectors and borders.


Participants highlighted how public infrastructure that is invisible to users (payment systems, identity platforms, and emerging language stacks like “Bhashani”) provides the “rails” on which AI services (e.g., chatbots for farmers) can be layered [140-152][155-162]. The discussion linked this to the broader vision of 100 AI diffusion pathways by 2030, stressing convergence of multiple national rails into a global ecosystem.


Transitioning from pilots to production requires institutional trust, inclusive governance, and coordinated standards.


Janet Zhou pointed out that “pilotitis” is solved when governments are at the design table, creating trustworthy, inclusive institutions that lower market entry costs for innovators [75-82][84-86]. Brazil’s experience illustrated concrete steps: building shared data ecosystems, standardizing early-childhood and environmental data, and centralising chatbot services under a national digital ID framework [95-104][115-124].


Removing friction through open-source platforms, centralized services, and capacity-building avoids vendor lock-in and builds domestic capability.


The MOSIP open-source ID platform was cited as a model for establishing technical standards, operational support, and financing that enable cross-country adoption [178-190]. Brazil’s Secretariat for Shared Services demonstrates how a single procurement channel and internal innovation units can streamline AI deployment while resisting reliance on external vendors [204-216][222-229].


Overall purpose / goal


The panel was convened to map out concrete pathways for “AI diffusion” – i.e., moving AI from isolated pilots to scalable, inclusive public services worldwide. Participants shared experiences, frameworks (METRI, use-case adoption framework), and policy ideas aimed at establishing AI as a trusted digital public infrastructure that can be leveraged across sectors and geographies by 2030.


Tone of the discussion


– The conversation began with a formal, forward-looking tone, focusing on high-level concepts such as DPI and resource democratization.


– As speakers introduced regional case studies (India, Brazil, Africa), the tone shifted to pragmatic and collaborative, acknowledging real-world constraints and the need for institutional trust.


– Towards the end, the dialogue became more candid and slightly informal, with participants noting operational hurdles, vendor-lock-in concerns, and even a light-hearted “we’ve been kicked out of the room” remark, while still maintaining an overall constructive and solution-oriented spirit.


Speakers

Speaker 1 – Role/Title: Moderator / Host; Area of Expertise: 


Saurabh Garg – Role/Title: Secretary, Ministry of Statistics and Programme Implementation (MOSPI), Government of India; Area of Expertise: AI policy, Digital Public Infrastructure, AI democratization [S12]


Speaker 2 – Role/Title: Moderator / Chair of the panel; Area of Expertise: AI diffusion, inclusive AI development [S6][S7]


Speaker 3 – Role/Title: United Nations Development Programme (UNDP) representative; Area of Expertise: AI implementation in developing regions, AI diffusion pathways [S1]


Beatriz Vasconcellos – Role/Title: Brazilian government official (AI lead); Area of Expertise: AI adoption in the public sector, digital public infrastructure in Brazil [S4]


Janet Zhou – Role/Title: AI adoption specialist; Area of Expertise: Scaling AI pilots, institutional capacity for AI [S5]


Additional speakers:


(none)


Full session report
Comprehensive analysis and detailed insights

The session opened with a brief logistical exchange – the moderator asked panelists Mr Shankar and Mr Saurabh to pose for a quick photograph, thanked everyone for joining, and then invited the Secretary of MOSPI, India, Mr Saurabh Garg, to deliver the keynote address [1-8].


In his opening remarks, Garg characterised artificial intelligence as “a solution in search of a problem” and argued that AI will generate value only when concrete use-cases are identified [12-13]. He positioned AI as a potential digital public infrastructure (DPI), comparable to Aadhaar or UPI, and asserted that for AI to become a trusted, interoperable and shareable public good it must rest on four foundational resources – compute, data sets, talent and models [14-15]. Garg then outlined a four-point rubric for “AI-ready” data: (i) discoverability through common metadata, (ii) trustworthiness via quality assessments, (iii) interoperability enabled by unique identifiers, and (iv) usability ensured by internationally aligned standards [15-18]. He noted that access must be balanced with privacy safeguards and that locally relevant data are essential for context-specific AI solutions [17-22]. To coordinate the democratisation of these resources, the AI summit’s working group proposed a voluntary, modular platform called METRI (Multi-Stakeholder AI for Resilient and Trustworthy Infrastructure) that would allow stakeholders to contribute compute, data, models and talent on a non-committal basis [23-29].
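Garg’s four-point rubric lends itself to a simple catalogue check. The sketch below is purely illustrative – the field names, the example record, and the SDMX reference are assumptions for demonstration, not part of any schema discussed in the session:

```python
# Hypothetical sketch (not an official MOSPI or METRI schema): a dataset
# catalogue entry checked against the four "AI-ready" criteria from the keynote.

REQUIRED = {
    "discoverable": lambda e: bool(e.get("metadata", {}).get("description")),  # common metadata
    "trustworthy": lambda e: e.get("quality_assessed") is True,                # quality assessment
    "interoperable": lambda e: bool(e.get("unique_id")),                       # unique identifier
    "usable": lambda e: bool(e.get("standard")),                               # international standard
}

def ai_ready_gaps(entry):
    """Return the criteria an entry fails; an empty list means AI-ready."""
    return [name for name, check in REQUIRED.items() if not check(entry)]

entry = {
    "unique_id": "in-stats-survey-2023",  # illustrative identifier
    "metadata": {"description": "Illustrative survey microdata", "language": "en"},
    "quality_assessed": True,
    "standard": "SDMX",
}
print(ai_ready_gaps(entry))  # → []
```

In this sketch an empty result means all four criteria are satisfied; a real catalogue would of course encode each criterion with far richer checks.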


The moderator (Speaker 2) then introduced the metaphor of “diffusion pathways”, observing that, like earlier general-purpose technologies, AI was invented in the West but its impact must be realised in the Global South. She announced the ambition of 100 AI diffusion pathways by 2030 and asked how such pathways could be operationalised across sectors [39-41][46-54].


Kizom (UNDP) responded by describing the newly created G7 AI Hub as a mechanism to unlock compute, data and talent for low-income regions [61-68]. She highlighted the need to build business cases for local data centres, to break data silos despite the abundance of data in the Global South, and to retain diaspora talent so that AI solutions can be co-architected for smallholder farmers and women entrepreneurs [62-66]. In illustrating the scale of talent networks, Kizom named “Selena from Zindi” and noted that Zindi’s community of roughly 100,000 African data scientists functions as a public-interest infrastructure [150-158]. She also pointed to existing digital public “rails” – such as UPI, DigiLocker and emerging language stacks like Bhashani – that make AI services invisible to end-users, allowing AI-driven chatbots for agriculture or health to ride on already trusted infrastructure [140-152][155-162]. The MOSIP open-source digital-ID platform was cited as an illustration of how technical standards, operational support and financing together “lubricate” the adoption of public-good infrastructure across countries [178-190].


The moderator then asked Janet Zhou whether the barrier for moving AI from pilot to production was merely funding and how AI could transition from prototype to scale [70-78]. Zhou answered that “pilotitis” – projects stuck at the prototype stage – predates AI and can be overcome only when governments sit at the design table from the outset. She argued that trustworthy, inclusive institutions are essential for lowering market-entry costs for innovators, and that shared infrastructure must be built on standards that create a positive feedback loop of trust and adoption [79-86].


Building on this, Zhou used the MOSIP open-source digital-ID platform in a road analogy: standards are the road surface, rules of the road govern traffic, and financing provides the fuel that keeps the road functional. She stressed that even after a “road” has been built, agreed-upon technical standards, rules and financing are required to sustain adoption [178-190].


When the moderator turned to the Brazilian perspective, Beatriz Vasconcellos (Speaker 4) presented Brazil’s DPI vision of “one government for each person”. She described the creation of shared data ecosystems – thematic datasets for early-childhood and environmental domains that are standardised, interoperable and linked to a canonical citizen profile [95-110]. A centralised chatbot platform, built on the national digital-ID system (gov.br), has moved from informational pilots to transactional services, enabling citizens to complete service requests securely [115-124][125-131]. To avoid duplication and vendor lock-in, Brazil’s Secretariat for Shared Services now offers a single procurement channel for AI solutions, allowing ministries to acquire services with a simple digital transfer, thereby reducing implementation time and dependence on external vendors [211-216]. Vasconcellos also warned against over-reliance on external vendors, stressing the need to develop domestic AI capability through internal experimentation and capacity-building [222-230].


Later, the moderator asked about safe conversations in agriculture and health and whether reusable playbooks could be created. The panel highlighted the need for safety guardrails and voice-AI playbooks to ensure trustworthy interactions in high-risk domains [165-170].


Kizom (UNDP) then elaborated on the “use-case adoption framework”. She explained that the framework maps vertical sectoral impact (e.g., agriculture, health, education) to horizontal unlocks such as language localisation, compute access, and AI-ready data. Co-design between governments, the private sector and civil society is required to fuse vertical needs with horizontal enablers, ensuring that each use-case can scale efficiently [210-225].
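The framework’s vertical-to-horizontal mapping can be sketched as a small lookup. All of the sector names and enabler labels below are hypothetical placeholders for illustration, not entries from the actual framework:

```python
# Illustrative sketch of a use-case adoption mapping: vertical sector
# use-cases linked to the horizontal "unlocks" they depend on.

USE_CASES = {
    "agriculture_advisory_chatbot": {"language_localisation", "ai_ready_data"},
    "health_triage_voicebot": {"language_localisation", "compute_access", "ai_ready_data"},
    "education_tutoring": {"language_localisation", "compute_access"},
}

def shared_unlocks(case_names):
    """Enablers required by every listed use-case – candidates for shared rails."""
    return set.intersection(*(USE_CASES[c] for c in case_names))

# Language localisation is the enabler common to all three example use-cases:
print(sorted(shared_unlocks(USE_CASES)))  # → ['language_localisation']
```

The intersection identifies which horizontal enablers would pay off across every sector – one plausible way to prioritise shared infrastructure investment under the co-design logic described above.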


Across the discussion, several points of agreement emerged. All speakers concurred that AI must be treated as a DPI, requiring trust, interoperability and shared “rails” to enable cross-border diffusion (Garg, Kizom, Zhou) [14-15][140-152][155-158][179-190]. They also agreed on the four criteria for AI-ready data and on the necessity of standardised metadata, quality checks, unique identifiers and international classifications [15-18]. The need to overcome pilotitis through early government involvement and shared infrastructure was echoed by Zhou, Vasconcellos and the moderator [79-86][211-216][71-74]. Finally, multilingual and voice AI were identified as equalisers that broaden inclusion, a view shared by the moderator and Kizom [264-267][154-156][168-170].


Nevertheless, moderate disagreement was evident on the preferred architecture for AI DPI. Garg framed AI as a government-led DPI with strict standards, whereas the moderator advocated a multi-stakeholder, open-source, multi-model ecosystem to avoid concentration of Western large language models, and Zhou highlighted MOSIP’s open-source, vendor-free model as a practical alternative [14-15][231-236][179-190]. A second tension concerned the scaling mechanism: Vasconcellos promoted a top-down, centralised procurement and shared-service model, while Garg and the moderator emphasised modular, voluntary rails and horizontal enablers such as compute and talent [211-216][231-236][14-15]. A third divergence related to external assistance: Kizom’s G7 AI Hub seeks to import compute and talent to the Global South, whereas Vasconcellos warned that excessive reliance on external vendors could erode domestic capability [61-66][222-230].


From these convergences and divergences, the panel distilled a set of key take-aways:


1. AI should be institutionalised as DPI, with trust, interoperability and shareability as core attributes.


2. The four foundational resources (compute, data, talent and models) must be democratised, and data must satisfy the discoverability, trustworthiness, interoperability and usability criteria.


3. The “100 AI diffusion pathways by 2030” agenda stresses horizontal enablers (language, compute, talent) linked to sector-specific use-cases.


4. Early government participation and the provision of shared public rails (identity, payments, data exchange) are essential to move AI from pilot to production at population scale.


5. Open-source, multi-model platforms such as METRI and MOSIP are preferred to avoid vendor lock-in and to build domestic capability.


6. Multilingual and voice AI are critical equalisers for reaching underserved users.


7. Co-architecting public-private partnerships and multi-stakeholder collaborations (e.g., the G7 AI Hub, XTEP, Gates Foundation) are required for sustainable diffusion [14-15][23-29][53-54][75-86][231-236][178-190][240-247].


The panel also identified concrete actions. The METRI platform will be further developed as a voluntary, modular framework for sharing compute, data, models and talent [23-28]. The G7 AI Hub will continue to unlock resources for Africa, Latin America and Asia [61-66]. Countries are encouraged to adopt the AI use-case adoption framework, which maps vertical sectoral impact to horizontal unlocks, to guide scaling [240-247]. Brazil will proceed with its centralised procurement and shared-service model to streamline AI deployment across ministries [211-216]. Nations are urged to explore open-source digital-ID solutions such as MOSIP, accompanied by operational support, training and financing [178-190]. Finally, stakeholders are asked to promote multi-model, open-source AI solutions to mitigate vendor lock-in [231-236].


Unresolved issues remain. Detailed governance mechanisms for ensuring data trustworthiness and privacy across jurisdictions have yet to be finalised. Viable business models for building compute infrastructure (e.g., data-centres and GPU clusters) in the Global South need further articulation. Clear timelines, metrics and accountability structures for achieving the 100-pathway target by 2030 are still missing. The development of safety guardrails and reusable playbooks for voice-enabled health or agricultural AI interactions requires additional research [165-170]. Finally, the extent of regulatory reforms needed to support an AI-centric DPI while preventing vendor lock-in is an open question [16-22][66-70][84-86][222-230].


In sum, the discussion moved from a high-level framing of AI as a nascent technology to a nuanced blueprint for embedding AI within trusted, interoperable public infrastructure. The pivotal moments were Garg’s articulation of AI-ready data and the DPI metaphor, Zhou’s diagnosis of “pilotitis” with a governance remedy, and the concrete national examples from Brazil and the MOSIP model. These insights reshaped the conversation from problem identification to solution design, culminating in a shared AI adoption framework that links sectoral needs with horizontal resources. The consensus on the importance of DPI, data readiness, early government involvement and open-source, multi-model ecosystems provides a solid foundation for future policy work, while the identified disagreements highlight the need for flexible, context-specific pathways that balance standardisation, sovereignty and openness. Together, the three pillars (digital public infrastructure, democratised foundational resources, and the 100-pathway agenda) chart a clear route forward, with next steps focused on developing METRI, scaling the G7 AI Hub and mainstreaming the use-case adoption framework. [14-15][75-86][140-152][240-247]


Session transcript
Complete transcript of the session
Speaker 1

request all the panelists along with Mr. Shankar and Mr. Saurabh for a picture, please, because everyone has different schedules. So we just want to get a quick photo of this moment before we move ahead. Yeah, content first. All right. Thank you so much. Panelists, you can take your seat. To take us forward, I’d like to invite Mr. Saurabh Garg, who is the Secretary of MOSPI India, to deliver the keynote. If you can take us forward. Thank you so much.

Saurabh Garg

Thank you. Good afternoon and great to be here on this session. We’re talking of diffusion, AI diffusion. I’ll just speak of one or two aspects of it because I’m sure the panelists would lend a lot more color to this topic. Just to take off where Shankar left, sometimes he’s talking about use cases, and that’s very necessary, because AI is perhaps something like a solution in search of a problem. So until we find use cases for it, it will not be able to give the value that it potentially can, and I think that’s really, really important. We’re talking of AI being a possible DPI, a digital public infrastructure, and I suppose for that some steps would be needed to ensure that it becomes trusted, interoperable and shareable.

I think those are aspects which a DPI like Aadhaar or UPI has, and I think we are still in early days, but the mechanisms for that, how we can ensure that it would be possible – and we talk of four resources as foundational AI resources: compute, data sets, talent and models, apart from obviously the frameworks that would be necessary for this. And I mention this because I had the privilege of chairing the democratizing AI resources working group of the AI summit, and various options that we discussed with other countries on how we can ensure democratization of these four foundational resources. Obviously each of them would have a different mechanism, but one thing I would just go into in slightly greater detail is the data sets part, which is also something that we are doing within the Ministry of Statistics, across different ministries and states. And why I am saying data sets is also because perhaps data is the raw material for AI models, so it’s a very foundational resource in that sense. Compute is perhaps something that we can acquire, and therefore we have discussions around models – whether they need to be more efficient, as they are right now extremely compute and energy intensive, or we can make them lighter going forward. That is something which is work in progress; I think it will take some time before the small domain models come in, which will perhaps improve diffusion.

But data is something that would need to be AI ready going forward, and by AI ready I would mean four things. One is discoverability: how do you ensure that data is easily discoverable? That’s perhaps by ensuring that the metadata is understood by everyone, which makes it easier for any model to understand as well. Second is the trustworthiness of the data, and that’s the quality assessments that we have – whether it’s trustworthy and credible – and that would determine its use. The third is its interoperability: between two data sets, how interoperable are they, and what kind of unique identifiers does each have, to be able to identify what it is and how to link different data sets. And the fourth is its usability across systems, which would depend on the standardization and the classifications that we use being internationally similar, so that different conclusions do not come from the same data set.

And obviously, the focus would have to be on access and dissemination so that it is available for use while preserving the privacy of the data, with the safeguards that would need to be built. And why I am talking about data is because this is also where a lot of the local contexts – linguistic contexts, cultural contexts – will come in, and that will come from the data sets. We talk of ensuring that the inferences and the solutions are locally relevant, and I suppose the data would determine that relevance. So we have to be very careful about that, and ensure that it’s useful at different levels.

So I’ll stop here, apart from saying that for democratizing AI resources, the working group discussed with the others, and a kind of platform has been suggested going forward, which has been named METRI. METRI in Hindi means friendship, for those who are not aware. And it’s an acronym for multi-stakeholder AI for resilient and – I’m forgetting what the T is for. Now that, sorry, trustworthy – and infrastructure. So that’s the acronym. The concept is that on a modular level, on a voluntary basis, on a non-commitment level, we can develop the foundational AI resources of compute, data sets, models and talent. And I think as we are able to develop this and move towards a DPI for AI resources, I am sure diffusion will become all the easier. So thank you for this opportunity, and I look forward to a great time. Thank you.

Speaker 2

Thank you everyone. And we will carry on. We don’t have enough panels in which all of us are women, so three cheers for that. Don’t look at each other – you guys had a great contribution. So a couple of weeks back, some of us got together and we said: invention has happened in the West, impact has to happen for each one of us. What’s the gap between invention and impact? That’s where we thought about adoption, and then we said, isn’t it diffusion? And why did we pick diffusion? We actually read a book by Jeffrey Ding. He’s a professor in, it’s in D.C. Why am I forgetting the name of the institute?

It’s D.C. Georgetown. Sorry, Georgetown in D.C. And we read about AI diffusion and that GPT, general purpose technology – like electricity, it diffused into society over several decades. It was created in Europe but actually diffused in the U.S. quite a lot; the U.S. captured it. And also chemical engineering, which Shankar talked about – if you remember chemistry, the Bohr model, you know, all those were Germans, but actually it’s the U.S. who capitalized on that. AI is like that. Invention happened in the West, we all know that. But it’s the Global South that is going to have the use cases, who are going to diffuse it into sectors, and the horizontal enablers have to happen across these sectors for us to benefit, for us to have more economic benefit out of AI.

So that’s when all of us said that yes, we will do 100 diffusion pathways by 2030. And one of the partners in crime was Kizom. She is here with us, and Kizom, my first question is to you. Tell us how you think – because Kenya comes in, you are based in Italy, and we did a tripartite with Kenya, Italy and India. How do you think 100 pathways to 2030 pan out for you, and what does it mean for you? How do you think it will happen?

Speaker 3

Absolutely, Shalini. How long do I have to answer this question – short version, long version? Short version?

Speaker 2

As long as people are okay with stories, you can carry on.

Speaker 3

Um, well, I mean, as Saurabhji, the chair of the working group for democratization of AI, spoke about, there are some fundamental resources or inputs AI needs in order for it to actually work in a way that can help a common citizen or a small business owner, and some of those foundations that he spoke about are AI-ready data and compute. And those are the things that I, in my role at the United Nations Development Programme, working in Africa, in parts of Latin America, in Asia, discovered there is a constraint on access to – some of these foundational resources. And so this G7 AI Hub was created to address that constraint by, one, unlocking additional resources from, of course, the friendly G7 countries that wanted to focus on parts of Africa.

But also, as we do that, to think about what is the business case for data centers, for GPUs on the continent? How do you break data silos, even though the Global South is so rich in data? As well as, how do you orchestrate talent – especially since we saw that much of, you know, let’s say Microsoft’s or big tech’s talent pool on the continent of Africa and in other parts of the world was actually coming from Global South countries. And over the last year or so, I’ve seen this tremendous momentum of many African people who worked in big tech or large companies moving back to the continent, because they actually don’t want the continent to be left behind. They want to be co-architects of the future, this fundamental shift that humanity is going through. And this is where, when we talk about 100 AI diffusion pathways, it is about co-architecting pathways where we look at how we bring not just language data but voice adoption into solutions that a smallholder farmer can use, that a woman entrepreneur can use – and not just as pilots, but to think about it from an infrastructure perspective, a digital public infrastructure perspective, where we can scale to millions of farmers, go across national boundaries, and be able to look across borders either as digital public goods or as expansions of private-sector innovations or public-private partnerships. So as Shankar said, diffusion pathways could be many, and it’s for

Speaker 1

Thank you, Kizom. I’ll come to you, Janet. You lead global development for AI across multiple geographies, but most projects are stuck in pilots, right? How does AI become production scale? And do you think it’s only funding that they lack? Or are there more diffusion pathways that we can create so that AI pilots actually move to population scale?

Janet Zhou

Hello? Hi. Maybe I would first start by saying the problem of pilotitis is actually one that predates AI. And we have many technologies that are enormously beneficial for humanity that I think are currently still stuck, not having diffused. But when I think about the positive examples, the places where I think as a global community we’ve had tremendous scaled impact – reducing child mortality by half since 2000, lifting 170 million people out of extreme poverty – the common threads are often that we’ve managed to figure out how to get both governments and markets to really focus and work for the most vulnerable populations. And so whether it’s vaccines that we’re talking about or instant payment systems, often it is really just ensuring that government is there at the design phase, at the table, in the driver’s seat, not brought in after the pilot results come in.

It is very much focused on making sure that we make it easier for local innovators to be able to enter markets. So whether that’s aggregating low-margin demand or streamlining market entry, really making it easy to lower the cost to serve the most vulnerable people at the edge. And then it is very much also building institutional capacity and making sure that, you know, there’s playbooks and training and all of that, but really shared infrastructure that allows all boats to rise – and making sure that that infrastructure is trustworthy and inclusive creates, I think, a really positive feedback loop. Because I loved what Nandan Nilekani expressed, which is that we really rely on institutions for trust, not on algorithms.

And I think one of the ways that institutions become trustworthy is by being inclusive and making sure that they actually serve the people that would otherwise be least likely to benefit.

Speaker 1

Yeah, absolutely. I think that’s a key that how do you trust the institutions and AI output will, you know, suppose it’s coming out of a AI advisory application. Do you trust that or do you trust the institution which is giving in a physical or will the institution adopt this AI advisory so that there’s more trust on the advice itself being given? I mean, that’s a quite hybrid and risky manner and institutions have to understand AI and adopt and first trust the AI output before they say that this is ours. I think that that part is key on AI adoption. Bia, tell us about Brazil that, you know, a very different perspective. Just let us first let us understand that how is the AI adoption in that region?

And are you also stuck in this pilot-to-production gap, and how do you see it being bridged?

Beatriz Vasconcellos

Perfect. So there are many different ways and perspectives to think about AI. In the Brazilian government we chose to establish a vision of one government for each person. That means we are going fully into personalization, and even into the agentic-state vision. For that we need to think about shared infrastructure and shared capabilities. So we started with the data. We have a project now not just to catalog but also to prepare the data sets for training. We are also building shared platforms for personalization and for understanding citizens' characteristics. We have two large IT state-owned enterprises, and we are making them collaborate on a shared platform that holds canonical data sets about citizens; every ministry contributes different characteristics, and we are creating different labels for every citizen.
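
The shared-platform idea described here, canonical data sets keyed to a common citizen identity, with each ministry contributing characteristics and labels derived on top, can be sketched roughly as follows. This is an editor's illustration only: the ministry names, field names, identifiers, and labelling rules are all invented and do not reflect the actual Brazilian platform.

```python
def merge_contributions(contributions):
    """Merge per-ministry records into one canonical profile per citizen ID."""
    profiles = {}
    for ministry, records in contributions.items():
        for citizen_id, characteristics in records.items():
            profile = profiles.setdefault(citizen_id, {})
            # Namespace each characteristic by the contributing ministry
            # so ministries never overwrite each other's fields.
            for key, value in characteristics.items():
                profile[f"{ministry}.{key}"] = value
    return profiles

def derive_labels(profile):
    """Derive policy labels from a merged profile (illustrative rules only)."""
    labels = set()
    if profile.get("health.children_under_6", 0) > 0:
        labels.add("early_childhood")
    if profile.get("social.income_bracket") == "low":
        labels.add("social_benefit_eligible")
    return labels

# Two hypothetical ministries contribute characteristics for the same citizen.
contributions = {
    "health": {"BR-001": {"children_under_6": 2}},
    "social": {"BR-001": {"income_bracket": "low"}},
}
profiles = merge_contributions(contributions)
print(derive_labels(profiles["BR-001"]))  # both labels apply
```

Namespacing fields by the contributing ministry is one simple way to let many sources enrich a single canonical record without clashing, which is the interoperability property the shared platform depends on.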

Another way we are trying to break the data silos, which of course is a very big issue, is to think in terms of data ecosystems. We came up with this concept, and it does not mean we are building data lakes; it means we are thinking about interoperability from a thematic perspective. One example is the early childhood data ecosystem. We know that many policies related to early childhood have different data requirements but need to use similar registries, so we are going to look at those different data systems together. That is why we created this ecosystem.

We brought together five ministries: Health, Education, Social Development, Management, and Human Rights. We cataloged the policies and what kind of data would be needed, and then we started creating the standards for that specific ecosystem. We prioritized the early childhood ecosystem and the environmental, land, and climate ecosystem, which sit in the same group. So we are starting with those, and it seems to be an interesting approach; it seems to be working. The other thing, coming back to the DPI discussion, is that having digital ID and authentication is very helpful for implementing this vision. So here is what we are doing now. A lot of people in the government want to do generative AI, right?

because I think it is the easiest and maybe most famous type of AI implementation. So many government entities and ministries wanted to build their own chatbots, and it was spreading all over. What we did was centralize that capability. We started with informational chats: what policies or information would be helpful. Now we are just starting the transactional part of the chatbots, where the idea is that a citizen will be able to actually complete a service request, to get their service done, through the chat. And that is only possible because we have gov.br authentication, so we know the person is actually the right person.
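
The split between informational and transactional chatbots described here hinges on the authentication step: a transaction can only proceed once identity is verified. Below is a minimal sketch of that gate, with an invented session store standing in for real gov.br authentication; none of the class or method names come from any actual government system.

```python
class ChatService:
    """Toy chatbot backend: open informational answers, gated transactions."""

    def __init__(self, authenticated_ids):
        # Stand-in for a digital-ID session store (e.g. verified logins).
        self.authenticated_ids = set(authenticated_ids)

    def informational(self, question):
        # Open to everyone: answers about policies and services.
        return f"Info: here is what we know about '{question}'."

    def transactional(self, citizen_id, request):
        # Gated: only proceeds when the caller holds a verified identity.
        if citizen_id not in self.authenticated_ids:
            return "Denied: please sign in with your digital ID first."
        return f"Completed '{request}' for citizen {citizen_id}."

svc = ChatService(authenticated_ids={"BR-001"})
print(svc.informational("school enrollment"))
print(svc.transactional("BR-001", "renew driver's license"))
print(svc.transactional("BR-999", "renew driver's license"))
```

The design point is that the informational tier can be rolled out broadly first, while the transactional tier inherits trust from the identity rail rather than re-implementing it per ministry.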

And then the third step, which we have not entered yet but which is the vision, is the agentic state: building an agent specific to each person. That will only become possible once we have the data platform infrastructure. So that is more or less how we are thinking about it.

Speaker 1

Okay. And thanks for bringing DPI into the picture, because my next question is about that. Nandan announced 100 Pathways to 2030 yesterday, and that comes from a lot of experience with DPI. Kizom, my next question is for you, since you were also on the DPI journey, working with India. DPI lays down rails, roads that other countries can then take. In AI, how do use cases cross borders? What are the pathways and playbooks that different countries can benefit from? How do you think that can happen?

Speaker 3

Shalini, great question. And I am assuming this room is fully aware of, or is a user of, digital public infrastructure. Raise your hands if you are not. Oh my God, one or two people. Okay, we are not going to get into that right now. You use UPI, right? You use DigiLocker. You do not use DigiLocker, but you use DigiYatra? No? Okay, I think you should. But you use UPI, so he is a DPI user. And that is the beauty of digital public infrastructure: you actually want it to be invisible. One of the ambitions we have as part of the AI diffusion pathways is that we do not want AI to be this noisy, chaotic technology.

We want it to be so invisible that it is simply part of your life. Not just our lives, because for us it is obviously convenient; we are English speakers, and so we are at the summit. But think of a smallholder farmer, a micro-entrepreneur, a woman crossing the border between Guinea and Sierra Leone, for example. So to go back to your question, Shalini: first, as Bia was starting to say about Brazil, and as we are certainly seeing in many parts of the world, including India, when you have data that is already interoperable and public rails such as identity, payments, and data exchange, it is much easier to bring the power of AI to the same service you wanted to deliver, now as an AI chatbot reaching a farmer on those rails. That is fantastic. But we are also seeing the emergence of additional rails, and I think that is a great point. Those of you from India have probably heard of someone using Bhashini, which is built on AI4Bharat and the Indic language stack.

So that is definitely a public rail, and I know that in different parts of the world many such rails are being created. I hope we see not only the emergence of rails but also the convergence of rails, because, as the French president said yesterday alongside Honorable Prime Minister Modi, it is not that we need to do more; we need to do better together. This is where the public rails really need to come together. And I want to recognize Celina from Zindi, from Africa, here. She runs public infrastructure: a network of 100,000 data scientists across Africa. That is already infrastructure, with public interest and public value. And we are at a place where we are trying to figure out the business case.

How do we keep these sustainable by creating innovation layers on top of the public rails that are also emerging in AI? It is not that you have to choose between DPI and AI. The DPI principles of interoperability, modularity, and reusability, of becoming a digital public good, remain intact, and this is how we might see population-scale impact.

Speaker 1

Thank you, Kizom, for explaining it so well. And that is actually happening, not just with language: multilingual voice AI is becoming a DPI, because you should be able to interact by voice, and the voice stack should be available for most people to build on top of. Safety guardrails can become DPI in themselves. How do you conduct safe conversations in agriculture? How do you conduct safe conversations when someone calls about patient care in healthcare? Can those conversations become a playbook in themselves? These are the playbooks that can get created. So thank you so much for talking about it.

I will come to you, Janet, about the frictions that exist. Do you think there are certain programs or investments that could remove those frictions? Because everybody is building the full stack: I will do language translation, I will get the compute I need, the data I need. So how do you remove the frictions, and do you think programs and investments can help?

Janet Zhou

You know, I was thinking about this question, and an example came to mind that maybe illustrates it really well: the MOSIP program, which is an open source platform. It is inspired by Aadhaar, but it is part of a larger effort with the World Bank and many other partners to take that open source, vendor-lock-in-free national ID system to many, many countries. When I think about the programmatic components of that, a lot was about ensuring there was an open, production-ready reference implementation. And, to continue the road analogy, I was trying to think of what that would be.

Even if you have a road, you still need to agree on which side of the road everyone drives, that a stop sign means stop, and that red means stop. So there is still a set of programmatic standards and norms that makes it easier not only to adopt but, for those that have adopted, to then benefit from that adoption. And a lot of what happened programmatically in something like MOSIP, in addition to the technical implementation, was operational support and shared examples, with countries visiting each other.

India has sent many delegations to many countries to help explain their story and share their pathway. There is training to be done: you still have to get your driver's license and prove you know how to use the road. So even after building the rails, there is plenty of program implementation work to really facilitate and lubricate adoption. And, of course, financing as well, which in this case came through the World Bank program. So there is no single silver bullet; even after the rails are set, there is still a lot of work to be done in program implementation and operational support.

Speaker 2

Thank you. Bia, what is the hardest challenge? This all sounds very easy: have diffusion pathways, go and build it. But it has to be operationalized, it has to be adopted, and there are people involved. The human in the loop is the most important thing in AI; we can never ignore that. What is the hardest challenge you see? Just one? Oh, we are lucky.

Beatriz Vasconcellos

So obviously, it is not just about creating applications. It is the same old story of digital transformation, just at a different level: you have to change the processes, the way things work. There are maybe three interesting things we are trying to do, and I am not trying to sell them; everything we are doing is still being tested, so let us see what works and what does not. One thing we are doing now is in the Ministry of Management, where we have a Secretariat for Shared Services that had not worked with AI before. The idea is to make it very, very simple for any ministry to use a service that is centralized in the Ministry of Management.

For example, with the chatbots I was telling you about, each ministry was doing its own procurement. We centralized it and chose one vendor to help us build the solution. So we said: if you buy it through the centralized service, it takes just a few hours; you only need to sign a document and transfer some money digitally to the Ministry of Management, and you can use the service without going through any procurement yourself. That is one way we are trying to overcome the problem of multiple solutions and difficult implementation.

We also came up with what I think is an interesting institutional arrangement. When we talk about AI, we are talking about innovation and new capabilities, and we are building those innovation capabilities through the Ministry of Management. That means they are building the whole process: how you first set a policy goal that the AI project will target, and how you experiment. They built the process for experimenting, and they have analysts looking at the data to see whether things are working. That seems to be working well, and we think it is going to be good. The other real challenge, I think, is with the vendors. And here I am wearing my development hat, from my previous background.

Everyone is talking about AI and about how every agency and ministry needs to be doing something with it. And obviously some big vendors are saying: you, government, do not have the capabilities; we do, we can do it very fast, we do it at scale. If you keep making that decision day after day, you are never going to build any capabilities of your own. I use an analogy with the army: no one thinks it is reasonable to outsource your army to a country that has a stronger or better army, but in digital we are doing it every day. For every decision, we say this company does it better, so we will just outsource. But there are some essential capabilities at stake; this is not just an AI tool. We are working with national data, and we have some very strategic goals. So if we do not think about building these capabilities, even starting small, and it takes a while to build the muscles, we will never have them. We have to build the muscles. That is why we are also incentivizing agencies to test and experiment rather than buy prepackaged solutions: we have to build our own muscles.

Speaker 2

Yeah, I think you raised a very valid point: what a lot of people are scared of is vendor lock-in, of being forced down one path again and again. You would have seen Amul AI, which was launched by the Prime Minister and which the EkStep Foundation made possible. One key thing there was keeping it multi-model: multiple models should be able to serve it, not just one. That has been a key question, how you give people choice and avoid locking them into the system, because that is where diffusion works.

Diffusion is not about taking concentrated Western LLMs and just deploying them. It is about actually walking the path: give choice and replaceability, bring domain knowledge, and keep your data with you, because the data sits in our enterprise systems and we do not want others learning from it without our consent. How do you separate that? And now we want to share the know-how we have gained with everybody. That playbook is a diffusion pathway, and this gives an example of one. Kizom, you and I co-authored a paper, published with the Atlantic Council, in which we also talk about the use case adoption framework. Would you like to tell people about it and how it can remove friction?

Speaker 3

Oh, absolutely. And I am looking for the key author of the use case adoption framework, Tanvi Lal, director at People Plus AI at the EkStep Foundation. When we were preparing for the AI Impact Summit many months ago, which feels like many years ago, we started with the observation that adoption is proving to be a challenge: what are we learning from our experience? This is where EkStep looked at Mahavish Star, its work with AI4Bharat, and its ongoing conversations with Anthropic and other private sector companies on safety tooling. I did the same across a number of countries. Together we consulted 20-plus countries and held convenings from South Africa to New York and many more places, along with the Gates Foundation.

What we learned was that the impact of a technology like artificial intelligence sits in sectors, such as education, health, and climate change, but its ability to move from pilot to scale depends on horizontal unlocks. Underpinning the 100 AI diffusion pathways is the framework we call the use case adoption framework. Impact lives in sectors, where you need contextual data, contextual knowledge, processes, and workflows, the things that have to change in a department of education or a department of health. The horizontal unlocks are language, data, and compute: in general, how do you make data AI-ready? How do you make data interoperable, given that a farmer is going to be buying things, selling things, and getting public services?
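
The framework as described pairs vertical sector use cases with a shared set of horizontal unlocks; a use case scales only when the unlocks it depends on are in place. A hypothetical sketch follows. The unlock names, sectors, and readiness rules are this editor's assumptions for illustration, not the published framework.

```python
# Horizontal unlocks shared across all sectors (invented names for this sketch).
HORIZONTAL_UNLOCKS = {"language", "data_readiness", "compute", "interoperability"}

def readiness(use_case):
    """Return which horizontal unlocks a use case still needs before scaling."""
    return HORIZONTAL_UNLOCKS - use_case["unlocks_in_place"]

# Vertical use cases: sector-specific impact, shared horizontal dependencies.
use_cases = [
    {"sector": "agriculture", "name": "advisory chatbot",
     "unlocks_in_place": {"language", "compute"}},
    {"sector": "health", "name": "triage assistant",
     "unlocks_in_place": set(HORIZONTAL_UNLOCKS)},
]

for uc in use_cases:
    missing = readiness(uc)
    status = "ready to scale" if not missing else f"blocked on {sorted(missing)}"
    print(f"{uc['sector']}/{uc['name']}: {status}")
```

The point of modelling it this way is that the unlocks are computed once and reused across every vertical, which is exactly the pilot-to-scale leverage the framework argues for.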

So we have to think about it from the perspective of a user's life. That is really a bit about the use case adoption framework we have developed together with countries, the Gates Foundation, and EkStep. We hope it helps us ground our 100 AI diffusion pathways, because, as Shalini was saying, this is not about just showing up and saying, I have the solution, you adopt it. We will not see impact with that approach. We will have to co-design pathways and fuse verticals and horizontals. This is where, when I talk to many innovators and private sector companies in the Global South, I see them saying: aha, this is how we co-architect the future.

This is where, when we develop a voice optimization solution as a public good, it goes out to the world; we are builders of the future too. It is such a powerful learning, and we have put it together into these 100 AI diffusion pathways towards impact.

Speaker 2

Thank you. Thank you, Kizom. I am looking at the time, so let us take two questions from the audience. Please raise your hand. Yes, I saw yours first. Would anybody like to take it?

Speaker 3

Yeah, I was distracted by the crowd coming in; we are getting kicked out, guys. I think your question was how to address diversity in diffusion. Well, if you cannot read, can you hear? This is where I think voice adoption is key to the inclusion agenda and the impact agenda of AI. So to answer your question: voice adoption.

Speaker 2

Yeah, actually, that is why AI becomes an equalizer: it bridges the divide. There are inequalities, but bringing a new language into a model has become fairly easy today, and that gives us a new language we can talk in. There is data locked in PDFs across various regions that people do not know about, and unlocking it has become easier too. That is how AI is a leveler, and a trusted source. I will maybe talk to you later about what Mr. Saurabh Garg said and about the evidence.

One last question. You are asking about a pivotal moment. I am not a fortune teller, but I do understand the AI ecosystem, and I think multilinguality can be one very big change, because it draws people in. What is change about? Change is always about people. When UPI was first discussed, banks said they would have to change their whole systems; its user-friendliness, and the fact that it was so easy to deploy and use, is what drew people to it. Any AI moment that draws people in, because of interoperability and usability, can become that moment. Has it happened? No. Can it happen? Yes, and multilinguality is one of the candidates, but we have to see how it pans out.

Okay, thank you so much. Thank you very much. We have been kicked out of the room. A great panel. Thank you, bye.

Speaker 1

Thank you, everyone, for joining us and sharing your thoughtful views. On behalf of the India AI team, we would like to offer a souvenir with our sincere thanks. Thank you so much. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (28)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The moderator thanked everyone for joining, asked panelists Mr Shankar and Mr Saurabh to pose for a quick photograph, and invited the Secretary of MOSIP India, Mr Saurabh Garg, to deliver the keynote address.”

The moderator’s thank-you is recorded in the session notes [S86] and the invitation for a group photograph and the presence of Mr Shankar Maruwada are described in the opening remarks [S87].

Confirmed (high)

“Garg characterised artificial intelligence as “a solution in search of a problem” and argued that AI will generate value only when concrete use‑cases are identified.”

A comment highlighting that AI exists but lacks a clear problem-solution fit matches Garg’s statement and is documented in the discussion summary [S4].

Confirmed (high)

“Garg outlined a four‑point rubric for “AI‑ready” data: discoverability through common metadata, trustworthiness via quality assessments, interoperability enabled by unique identifiers, and usability ensured by internationally aligned standards.”

Dr Saurabh Garg’s description of the four essential elements for AI-ready data infrastructure-discoverability, trustworthiness, interoperability and usability-is recorded in the transcript notes [S16].

Confirmed (medium)

“He noted that access must be balanced with privacy safeguards and that locally relevant data are essential for context‑specific AI solutions.”

The need to balance data access with privacy protections is explicitly mentioned in the discussion summary [S93]; the emphasis on local relevance aligns with broader remarks on agency and co-creation in digital public infrastructure [S12].

Confirmed (high)

“The moderator announced the ambition of 100 AI diffusion pathways by 2030.”

The initiative to create 100 AI diffusion pathways by 2030 is documented in the session summary [S100].

External Sources (100)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S2
S3
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2- Speaker 3 – Speaker 1- Speaker 3- Moderator
S4
Collaborative AI Network – Strengthening Skills Research and Innovation — – Beatriz Vasconcellos- Speaker 1 – Speaker 3- Beatriz Vasconcellos
S5
Collaborative AI Network – Strengthening Skills Research and Innovation — – Beatriz Vasconcellos- Janet Zhou – Speaker 1- Janet Zhou
S6
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S7
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S8
S9
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S10
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S11
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S12
The Foundation of AI Democratizing Compute Data Infrastructure — -Saurabh Garg: Secretary in the Ministry of Statistics and Program Implementation in the Government of India
S13
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — And they could be partly technological and partly policy -based or protocol -based. And a combination of this will ensur…
S14
S15
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — It is very clear to me that the 2030s will be a chaotic era. There will be disruption. There will be large changes. And …
S16
Regional Leaders Discuss AI-Ready Digital Infrastructure — Dr. Saurabh Garg opened the discussion by outlining four essential elements for AI-ready data infrastructure. First, dis…
S17
AI as critical infrastructure for continuity in public services — “Distributed software development.”[65]. “At Bilenium, recently we have developed as well one dedicated solution, which …
S18
Building Population-Scale Digital Public Infrastructure for AI — This comment reframes AI diffusion from a technology problem to an infrastructure problem, introducing the powerful meta…
S19
AI for agriculture Scaling Intelegence for food and climate resiliance — Shankar Maruwada from EkStep Foundation provided the technical framework for scaling AI solutions through digital public…
S20
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Juliana Sakai: Hi everyone, thank you. So we have like right now the policy question three with the theme enhancing en…
S21
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — By relying heavily on external entities for critical technology infrastructure, the country runs the risk of losing cont…
S22
Latin America struggling to join the global AI race — Currently,Latin America is laggingin AI innovation. It contributes only 0.3% of global startup activity and attracts a m…
S23
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned. And by the …
S24
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Digital Public Infrastructure (DPI) is defined as society-wide digital capabilities that are essential for citizens, ent…
S25
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — 3. Contextualising Policies and Technologies: Adamma Isamade: Good afternoon, everyone. The question is very interestin…
S26
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S27
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — But the second aspect of competition is really diffusion or adoption. As each country and the companies from each countr…
S28
Safe and responsible AI — The Czech Republic is one of the most industrialized countries with almost 40% share of value added in the economy. Of t…
S29
Building the AI-Ready Future From Infrastructure to Skills — The progression from proof-of-concept to production represents a critical challenge. Resources like AMD’s Developer Clou…
S30
AI and Data Driving India’s Energy Transformation for Climate Solutions — The expert panel discussion emphasized critical enabling conditions for scaling these solutions beyond pilot projects. K…
S31
Quantum Technologies: Navigating the Path from Promise to Practice — Bogdan-Martin argues that successful quantum technology deployment requires simultaneous progress on multiple fronts bey…
S32
Swiss AI Initiatives and Policy Implementation Discussion — Using open-source models with fine-tuning for public institutions to avoid vendor lock-in while maintaining quality
S33
Connecting open code with policymakers to development | IGF 2023 WS #500 — In conclusion, accessing timely and up-to-date data for development objectives is a significant challenge in developing …
S34
Host Country Open Stage — Collaborative approaches are essential for addressing complex societal challenges in small populations Nordhaug argues …
S35
Collaborative AI Network – Strengthening Skills Research and Innovation — Garg detailed four critical requirements for AI-ready data: discoverable (through proper metadata), trustworthy (through…
S36
Regional Leaders Discuss AI-Ready Digital Infrastructure — Dr. Saurabh Garg opened the discussion by outlining four essential elements for AI-ready data infrastructure. First, dis…
S37
AI and Digital in 2023: From a winter of excitement to an autumn of clarity — At thetechnical level, data needs standards in order to be interoperable. Here, the work of standardisation and technica…
S38
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And accessibility has to be also broadened in terms of multi -modality and also, where necessary, include a human in the…
S39
The Foundation of AI Democratizing Compute Data Infrastructure — “It needs to be interoperable and shareable.”[37]. “So I think two characteristics of digital public infrastructure, whi…
S40
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Adamma Isamade: Good afternoon, everyone. The question is very interesting, but I think it’s not a question that I can a…
S41
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S42
Inclusive AI_ Why Linguistic Diversity Matters — Means India has got about, means we were talking to Survey of India, and they have about 16 lakh places named, which are…
S43
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — ## Introduction and Context ### Data Governance and Collective Approaches ### Framework for Inclusive Development Abh…
S44
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — The strong consensus on key principles—particularly the need for partnerships, human-centred AI integration, and adaptiv…
S45
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S46
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S47
Artificial intelligence (AI) – UN Security Council — The discussion highlighted that open-source models enable a wide range of entities, from startups to larger corporations…
S48
WS #208 Democratising Access to AI with Open Source LLMs — Bianca Kremer: Hi, everybody hears me? First of all, I’d like to apologize for the delay and other procedures, we’re i…
S49
Open Forum #67 Open-source AI as a Catalyst for Africa’s Digital Economy — Continental Strategy and Coordination Legal and regulatory | Data governance The speaker describes ongoing policy deve…
S50
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — The participant explains that India is following the same successful approach used for DPI development, where basic buil…
S51
AI as critical infrastructure for continuity in public services — “Distributed software development.”[65]. “At Bilenium, recently we have developed as well one dedicated solution, which …
S52
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The level of consensus among the speakers was relatively high, particularly on the benefits and potential applications o…
S53
Driving Social Good with AI_ Evaluation and Open Source at Scale — High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers built upon each…
S54
Smart Regulation Rightsizing Governance for the AI Revolution — Low to moderate disagreement level. The speakers generally agreed on the problems (AI divides, need for cooperation, cap…
S55
Building Population-Scale Digital Public Infrastructure for AI — Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathwa…
S56
RESEARCH PAPERS — developing countries opened up by the adoption of ICTs and destroy the potential for increased access to knowledge. The…
S57
Diplomatic policy analysis — Digital divides:Not all countries have equal access to advanced analytical tools, perpetuating inequalities in diplomati…
S58
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S59
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Digital Public Infrastructure (DPI) is defined as society-wide digital capabilities that are essential for citizens, ent…
S60
The Foundation of AI Democratizing Compute Data Infrastructure — This connects AI democratization to broader digital infrastructure development, suggesting that individual data empowerm…
S61
WS #257 Data for Impact Equitable Sustainable DPI Data Governance — Chetty Pria: And thank you so much, Payal, and thanks for sharing also or introducing that what we are witnessing here i…
S62
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — I mean, access to compute is what makes or breaks a startup. So the way in India, the way I see it, the way we have star…
S63
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S64
https://dig.watch/event/india-ai-impact-summit-2026/fireside-conversation-01 — It’s about institutions. It’s about trust building. It’s about negotiations. It’s about guardrails, which Dario mentione…
S65
Discussion Report: AI Implementation and Global Accessibility — And when you look at deployment, the guardrails of fairness, accountability, privacy, security need to be maintained. An…
S66
Safe and Responsible AI at Scale Practical Pathways — “Deep work on working on fragmented data silos.”[5]. “It can be bridged but we have to think about how to make data inte…
S67
AI as critical infrastructure for continuity in public services — “Distributed software development.”[65]. “At Bilenium, recently we have developed as well one dedicated solution, which …
S68
AI for agriculture Scaling Intelligence for food and climate resilience — Shankar Maruwada from EkStep Foundation provided the technical framework for scaling AI solutions through digital public…
S69
Scaling AI for Billions_ Building Digital Public Infrastructure — Absolutely, and very rightly said. So it’s becoming a fundamental part of the infrastructure that is being then used to …
S70
Building Population-Scale Digital Public Infrastructure for AI — This comment reframes AI diffusion from a technology problem to an infrastructure problem, introducing the powerful meta…
S71
AI and Data Driving India’s Energy Transformation for Climate Solutions — The speakers demonstrate strong consensus on fundamental challenges around data fragmentation, the need for standardized…
S72
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — From connected buildings to advanced AI and decarbonization efforts, companies that embrace these changes thrive in the …
S73
Nepal Engagement Session — Open architecture and interoperability are critical for long-term sustainability, avoiding vendor lock-in, and maintaini…
S74
Swiss AI Initiatives and Policy Implementation Discussion — Using open-source models with fine-tuning for public institutions to avoid vendor lock-in while maintaining quality
S75
Host Country Open Stage — Collaborative approaches are essential for addressing complex societal challenges in small populations Nordhaug argues …
S76
Empowering People with Digital Public Infrastructure — Brendan Vaughan: It’s really, really important. And I totally agree. Yeah, so I would add to that email. Pretty good…
S77
Day 0 Event #61 Accelerating progress for unified digital cooperation — The tone of the discussion was largely constructive and forward-looking. Speakers acknowledged challenges but focused on…
S78
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S79
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S80
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion maintained an optimistic and collaborative tone throughout, characterized by constructive problem-solving…
S81
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — The tone was pragmatic and solution-oriented, with speakers expressing both frustration with past failures and cautious …
S82
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S83
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The discussion maintained a consistently thoughtful and collaborative tone throughout. While panelists acknowledged seri…
S84
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — The discussion maintained a serious, urgent tone throughout, with speakers consistently emphasizing the critical nature …
S85
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S86
S87
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — Thank you so much, Mr. Nandan. At this point, I would love to invite our panelists up to the stage. We’ll start by takin…
S88
High Level Session 2: Digital Public Goods and Global Digital Cooperation — Thomas Davin: Thank you so much. So indeed, an alignment within that notion of DPGs, there is very much a value based sy…
S89
Opening of the session — ### Procedural Arrangements Canada: Thank you, Chair. We thank you for your efforts in seeking to devote tomorrow to th…
S90
An exciting and fearsome tool – Statement by Pope Francis at G7 Summit — Artificial intelligence is designed in this way in order to solve specific problems. Yet, for those who use it, there is…
S91
Brainstorming with AI opens new doors for innovation — AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Compa…
S92
How nonprofits are using AI-based innovations to scale their impact — This comment helped establish a key takeaway for the nonprofit audience and shifted the conversation toward practical im…
S93
TradeTech’s Trillion-Dollar Promise — Additionally, current technological interlinkages can create barriers due to excessive data requests, posing a challenge…
S94
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Arun Shetty from Cisco identified three major impediments to AI adoption: infrastructure constraints (power, compute, an…
S95
Internet Governance Forum 2024 — The discussion on moving beyond the dichotomy between data localisation and cross-border data flows was prominently feat…
S96
Discussion Summary: US AI Governance Strategy Under the Trump Administration — Regarding US-China competition, Ball emphasized that America should win through superior adoption and development of AI …
S97
Building Public Interest AI Catalytic Funding for Equitable Compute Access — I mean, let’s not torture the analogy and take something really fun and then try to, like, tie it to AI. But here’s what…
S98
https://dig.watch/event/india-ai-impact-summit-2026/collaborative-ai-network-strengthening-skills-research-and-innovation — Diffusion is not about like concentrated western LLMs all together and just deploy it. It’s about actually walking the p…
S99
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Technologies created in the West can be double biased in technology transfer and adoption in other regions
S100
Fireside Conversation: 01 — A major announcement was the initiative to create 100 AI diffusion pathways by 2030. As Matthan noted with the catchphra…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Saurabh Garg
2 arguments, 130 words per minute, 866 words, 397 seconds
Argument 1
AI must become a trusted, interoperable, and shareable public infrastructure, similar to Aadhaar or UPI (Saurabh Garg)
EXPLANATION
Saurabh Garg argues that for AI to deliver its potential value it must be treated as a Digital Public Infrastructure (DPI), requiring trust, interoperability, and shareability akin to existing Indian DPIs such as Aadhaar and UPI. He stresses that establishing these qualities is essential before AI can be widely adopted.
EVIDENCE
He states that AI is being considered as a possible DPI and that mechanisms are needed to ensure it becomes trusted, interoperable and shareable, drawing a parallel with Aadhaar and UPI as examples of such infrastructure [14]. He also notes that this is an early-day effort and that foundational resources like compute, data sets, talent and models need appropriate frameworks to support this vision [15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The DPI concept and the need for open-source, adaptable systems for countries are discussed in [S12]; Garg’s emphasis on trust, interoperability and shareability aligns with the description of Aadhaar-like digital public infrastructure in [S14] and [S15]; the metaphor of shared rails for AI diffusion is elaborated in [S18].
MAJOR DISCUSSION POINT
AI as Digital Public Infrastructure (DPI) and foundational resources
AGREED WITH
Speaker 1, Speaker 3, Janet Zhou
DISAGREED WITH
Speaker 2, Janet Zhou
Argument 2
AI‑ready data must be discoverable, trustworthy, interoperable, and usable, with proper metadata, quality assessment, unique identifiers, and standards (Saurabh Garg)
EXPLANATION
Garg outlines four key attributes that data must possess to be AI‑ready: discoverability through clear metadata, trustworthiness via quality assessments, interoperability using unique identifiers, and usability ensured by standardized classifications. These criteria are presented as prerequisites for effective AI model training and deployment.
EVIDENCE
He details the four requirements, explaining that discoverable data needs understandable metadata, trustworthy data requires quality assessments, interoperable data must have unique identifiers, and usable data depends on international standardization and classification [15]. He adds that access and dissemination must balance availability with privacy safeguards [16], and that locally relevant data will shape AI relevance [17-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Four essential elements for AI-ready data (discoverability, trustworthiness, interoperability and usability) are enumerated in the same wording in [S16].
MAJOR DISCUSSION POINT
Data readiness and governance for AI diffusion
AGREED WITH
Beatriz Vasconcellos, Janet Zhou
DISAGREED WITH
Beatriz Vasconcellos, Speaker 2
Speaker 1
1 argument, 120 words per minute, 647 words, 323 seconds
Argument 1
Establishing “rails” for AI use cases across borders mirrors DPI principles and enables scalable diffusion (Speaker 1)
EXPLANATION
Speaker 1 proposes that, similar to how DPI provides common ‘rails’ for services like UPI, AI should have cross‑border rails that make use cases portable and scalable. These rails would act as standards and pathways that other countries can adopt to accelerate diffusion.
EVIDENCE
During the panel, the moderator asks whether AI can have rails like DPI that other nations can follow, emphasizing the need for cross-border use-case pathways and playbooks [135-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The idea of AI ‘rails’ as a public infrastructure analogue to UPI is presented in the DPI discussion in [S12] and reinforced by the shared-rail metaphor in [S18].
MAJOR DISCUSSION POINT
AI as Digital Public Infrastructure (DPI) and foundational resources
AGREED WITH
Saurabh Garg, Speaker 3, Janet Zhou
Beatriz Vasconcellos
3 arguments, 154 words per minute, 1218 words, 474 seconds
Argument 1
Brazil is building thematic data ecosystems (e.g., early‑childhood, climate) with common standards to break silos and enable AI applications (Beatriz Vasconcellos)
EXPLANATION
Beatriz describes Brazil’s approach of creating sector‑specific data ecosystems, starting with early‑childhood and climate, to standardize data, break silos, and facilitate AI‑driven services. Ministries collaborate to produce canonical datasets and shared standards.
EVIDENCE
She explains that Brazil is cataloguing and preparing datasets for training, building shared platforms across state-owned enterprises, and creating thematic ecosystems such as early-childhood, where five ministries contribute data and standards are defined [99-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brazil’s creation of sector-specific data ecosystems and common standards is described in the collaborative AI network report in [S4].
MAJOR DISCUSSION POINT
Data readiness and governance for AI diffusion
AGREED WITH
Saurabh Garg, Janet Zhou
Argument 2
Brazil’s centralized procurement and shared‑service model streamlines AI deployment across ministries, reducing duplication and accelerating scale‑up (Beatriz Vasconcellos)
EXPLANATION
Beatriz outlines a centralized procurement mechanism where a single shared service provides AI tools (e.g., chatbots) to all ministries, cutting down procurement time and costs. This model aims to overcome fragmented implementations and speed up scaling.
EVIDENCE
She notes that the Ministry of Management created a Secretariat for Shared Services, allowing ministries to obtain AI services through a single procurement process that takes only a few hours and a simple digital transfer, eliminating separate procurement for each ministry [211-216].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The centralised chatbot procurement and shared-service approach adopted by Brazil’s Ministry of Management is detailed in [S4].
MAJOR DISCUSSION POINT
Overcoming “pilotitis” and scaling AI to production
DISAGREED WITH
Speaker 2, Saurabh Garg
Argument 3
Reliance on external vendors risks lock‑in and hampers domestic capability building; governments should nurture home‑grown AI talent and retain strategic control (Beatriz Vasconcellos)
EXPLANATION
Beatriz warns that over‑reliance on large vendors can prevent the development of national AI capabilities, likening it to outsourcing a nation’s army. She advocates for building internal expertise and avoiding vendor lock‑in.
EVIDENCE
She describes how big vendors claim superior capabilities, leading governments to outsource AI solutions, which she argues prevents the building of domestic skills and strategic control, using an army analogy to illustrate the risk [222-230].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risks of heavy reliance on external entities for critical AI infrastructure are highlighted in [S21]; concerns about vendor lock-in and capacity gaps in Latin America are noted in [S22].
MAJOR DISCUSSION POINT
Building local capacity and avoiding vendor lock‑in
AGREED WITH
Speaker 2, Saurabh Garg
DISAGREED WITH
Speaker 3
Janet Zhou
3 arguments, 154 words per minute, 715 words, 277 seconds
Argument 1
Open‑source ID platforms like MOSIP illustrate how standardised, production‑ready components and operational support accelerate AI adoption (Janet Zhou)
EXPLANATION
Janet highlights MOSIP as an open‑source, vendor‑free digital ID platform that provides reference implementations and operational support, enabling many countries to adopt a common ID infrastructure quickly. This standardisation reduces lock‑in and speeds up AI‑related services that rely on identity verification.
EVIDENCE
She explains that MOSIP, inspired by Aadhaar, offers an open-source, production-ready reference implementation, with programmatic standards, operational support, country delegations, training, and financing from the World Bank to help nations adopt the platform [179-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
MOSIP’s open-source, production-ready reference implementation and its operational support model are discussed in [S4]; the broader DPI open-source approach is covered in [S12].
MAJOR DISCUSSION POINT
Data readiness and governance for AI diffusion
AGREED WITH
Saurabh Garg, Beatriz Vasconcellos
DISAGREED WITH
Saurabh Garg, Speaker 2
Argument 2
Institutional involvement early in design, coupled with inclusive, trustworthy public infrastructure, is essential for moving pilots to production (Janet Zhou)
EXPLANATION
Janet argues that successful scaling of AI requires governments to be at the design table from the start, ensuring that both public and private actors align with the needs of vulnerable populations. Inclusive institutions build trust, which is crucial for adoption.
EVIDENCE
She notes that the problem of “pilotitis” predates AI and cites examples like vaccines and instant payment systems where governments were involved early, making it easier for local innovators to enter markets and for infrastructure to be trustworthy and inclusive [76-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Janet Zhou’s emphasis on early government involvement and inclusive institutions to avoid “pilotitis” is recorded in [S4].
MAJOR DISCUSSION POINT
Overcoming “pilotitis” and scaling AI to production
Argument 3
Pilotitis is a long‑standing issue; successful scaling requires governments and markets to co‑design solutions from the outset and provide shared infrastructure (Janet Zhou)
EXPLANATION
Janet describes “pilotitis” as the tendency for projects to remain in pilot phase due to lack of coordinated design and shared infrastructure. She stresses that co‑design by governments and markets, together with common platforms, is needed to transition to production.
EVIDENCE
She references historical examples where scaling successes (e.g., vaccines, instant payments) were achieved by involving governments early and building shared infrastructure, contrasting this with many AI pilots that remain stuck [76-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The persistent challenge of “pilotitis” and the need for co-design and shared platforms are outlined in [S4].
MAJOR DISCUSSION POINT
Overcoming “pilotitis” and scaling AI to production
AGREED WITH
Beatriz Vasconcellos, Speaker 1
Speaker 2
3 arguments, 122 words per minute, 1028 words, 504 seconds
Argument 1
The “100 AI diffusion pathways by 2030” concept stresses horizontal enablers (language, compute, talent) to move AI from invention to impact (Speaker 2)
EXPLANATION
Speaker 2 introduces the ambition to create 100 AI diffusion pathways by 2030, emphasizing that horizontal enablers such as multilingual capability, compute resources, and talent are required to translate AI inventions into real‑world impact.
EVIDENCE
She recounts the discussion in which the group settled on “100 diffusion pathways by 2030” as a target, framing adoption and diffusion, rather than invention alone, as the goal [53-54]; earlier she reflects on the gap between invention and impact, citing the book on AI diffusion [38-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The target of 100 AI diffusion pathways and the role of horizontal unlocks are mentioned in the collaborative AI network notes in [S4] and reinforced by the pathway framing in [S18].
MAJOR DISCUSSION POINT
Strategic diffusion pathways and adoption framework
AGREED WITH
Saurabh Garg, Speaker 3
Argument 2
Promoting multi‑model, open‑source approaches and sharing know‑how prevents concentration of Western LLMs and supports a competitive, diverse AI ecosystem (Speaker 2)
EXPLANATION
Speaker 2 warns against vendor lock‑in and the dominance of a few Western large language models, advocating for multi‑model, open‑source solutions that give users choice and foster a more diverse AI landscape.
EVIDENCE
She cites concerns about vendor lock-in, mentions the Amul AI initiative, and stresses the importance of multi-model capability, replaceability, and domain knowledge, noting that these principles constitute a diffusion pathway and referencing a co-authored paper with Kizum on the use-case adoption framework [231-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for open-source, multi-model solutions to avoid vendor lock-in are echoed in the DPI open-source discussion in [S12] and the collaborative AI network’s emphasis on open-source models in [S4].
MAJOR DISCUSSION POINT
Building local capacity and avoiding vendor lock‑in
DISAGREED WITH
Beatriz Vasconcellos, Saurabh Garg
Argument 3
Multilingual and voice AI act as equalisers, lowering language barriers and expanding AI benefits to underserved populations (Speaker 2)
EXPLANATION
Speaker 2 highlights that adding new languages to AI models is now relatively easy, making multilingual and voice AI powerful tools for bridging digital divides and reaching marginalized groups.
EVIDENCE
She explains that multilinguality can level the playing field because new languages can be added quickly using existing data, and that this capability can help bring AI to people who previously lacked access, positioning AI as an equaliser [264-267].
MAJOR DISCUSSION POINT
Inclusion through multilingual and voice AI
AGREED WITH
Speaker 3
Speaker 3
3 arguments, 143 words per minute, 1512 words, 631 seconds
Argument 1
The G7 AI Hub and multi‑stakeholder collaboration aim to unlock compute, data, and talent for the Global South, co‑architecting sector‑specific pathways (Speaker 3)
EXPLANATION
Speaker 3 describes the G7 AI Hub as a mechanism to address resource constraints in the Global South by unlocking compute, data, and talent, and by co‑designing sector‑specific diffusion pathways.
EVIDENCE
She notes that the G7 AI Hub was created to address constraints on foundational AI resources, unlocking additional resources from friendly G7 countries, and focusing on co-architecting pathways for Africa, Latin America, and Asia [61-66].
MAJOR DISCUSSION POINT
Strategic diffusion pathways and adoption framework
Argument 2
The AI use‑case adoption framework links sectoral impact (education, health, climate) with horizontal unlocks (data, compute, multilingual capability) to guide scaling (Speaker 3)
EXPLANATION
Speaker 3 outlines a framework that connects vertical sectoral needs with horizontal enablers, arguing that scaling AI from pilot to impact requires both contextual sector data and cross‑cutting resources like compute and multilingual data.
EVIDENCE
She references the development of the AI adoption framework with partners such as the Gates Foundation, describing how it maps sectoral impact (education, health, climate) to horizontal unlocks like language data, compute, and interoperable data, and stresses co-design of pathways [240-255].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The adoption framework that maps sectoral needs to horizontal resources such as data, compute and language is described in the collaborative AI network summary in [S4].
MAJOR DISCUSSION POINT
Strategic diffusion pathways and adoption framework
Argument 3
Voice‑enabled AI, built as a public rail, can deliver safe, context‑aware services in agriculture, health, and other sectors, enhancing inclusivity (Speaker 3)
EXPLANATION
Speaker 3 argues that voice AI should be integrated as an invisible public rail, enabling safe, context‑aware interactions for diverse users such as farmers or patients, thereby expanding AI’s inclusive reach.
EVIDENCE
She mentions that AI should be invisible and part of everyday life, cites Bhashini for Indic languages as an example of a public rail, and discusses the need for safety guardrails in voice interactions for agriculture and healthcare, positioning these as potential playbooks [154-156][168-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rail analogy for AI services, including voice-enabled solutions for agriculture and health, is presented in the AI for agriculture scaling discussion in [S19] and the shared-rail concept in [S18].
MAJOR DISCUSSION POINT
Inclusion through multilingual and voice AI
Agreements
Agreement Points
AI should be treated as a Digital Public Infrastructure (DPI) with trust, interoperability, and shared “rails” to enable cross‑border diffusion.
Speakers: Saurabh Garg, Speaker 1, Speaker 3, Janet Zhou
AI must become a trusted, interoperable, and shareable public infrastructure, similar to Aadhaar or UPI (Saurabh Garg)
Establishing “rails” for AI use cases across borders mirrors DPI principles and enables scalable diffusion (Speaker 1)
AI should be invisible, built on public rails that make services portable across countries (Speaker 3)
Open‑source ID platforms like MOSIP illustrate how standardised, production‑ready components and operational support accelerate AI adoption (Janet Zhou)
All four speakers concur that AI needs to be built on a common, trustworthy, interoperable infrastructure, akin to existing DPIs such as Aadhaar and UPI, so that services can be deployed seamlessly across jurisdictions. They cite the rail metaphor and MOSIP’s open-source model as concrete illustrations. [14-15][135-138][152-158][179-190]
POLICY CONTEXT (KNOWLEDGE BASE)
This view reflects the emerging DPI policy framework endorsed in WSIS Action Lines and recent AI-in-DPI panels, which stress trust, interoperability and shared standards for cross-border services [S38][S39][S40][S50].
Data must be AI‑ready: discoverable, trustworthy, interoperable, and usable through clear metadata, quality assessments, unique identifiers and standards.
Speakers: Saurabh Garg, Beatriz Vasconcellos, Janet Zhou
AI‑ready data must be discoverable, trustworthy, interoperable, and usable, with proper metadata, quality assessment, unique identifiers, and standards (Saurabh Garg)
Brazil is building thematic data ecosystems (e.g., early‑childhood, climate) with common standards to break silos and enable AI applications (Beatriz Vasconcellos)
Open‑source ID platforms like MOSIP illustrate how standardised, production‑ready components and operational support accelerate AI adoption (Janet Zhou)
The speakers agree on the necessity of establishing robust data governance (metadata, quality, identifiers and standards) to make datasets AI-ready and interoperable, as reflected in India’s four-pillar view, Brazil’s sectoral ecosystems, and MOSIP’s reference implementation. [15-16][99-110][179-190]
POLICY CONTEXT (KNOWLEDGE BASE)
Garg’s four-pillar model for AI-ready data (discoverability, trustworthiness, interoperability and usability) has been documented in multiple policy briefs and aligns with standardisation efforts highlighted by technical bodies [S35][S36][S37].
Overcoming “pilotitis” requires early government involvement and shared infrastructure to move AI from pilots to production at scale.
Speakers: Janet Zhou, Beatriz Vasconcellos, Speaker 1
Pilotitis is a long‑standing issue; successful scaling requires governments and markets to co‑design solutions from the outset and provide shared infrastructure (Janet Zhou)
Centralized procurement and shared‑service model streamlines AI deployment across ministries, reducing duplication and accelerating scale‑up (Beatriz Vasconcellos)
How does AI become production scale? (Speaker 1)
All three emphasize that AI projects remain stuck in pilot phases unless governments take a lead role from the design stage and provide common platforms or procurement mechanisms that lower barriers for ministries and innovators. [76-82][211-216][71-74]
POLICY CONTEXT (KNOWLEDGE BASE)
India’s DPI rollout, which couples early public sector coordination with shared services to scale pilots, exemplifies this approach and was cited as a best-practice at the AI Impact Summit [S50][S52][S55].
Multilingual and voice AI are key equalisers that broaden inclusion and reach underserved populations.
Speakers: Speaker 2, Speaker 3
Multilingual and voice AI act as equalisers, lowering language barriers and expanding AI benefits to underserved populations (Speaker 2)
Voice‑enabled AI, built as a public rail, can deliver safe, context‑aware services in agriculture, health and other sectors, enhancing inclusivity (Speaker 3)
Both speakers stress that adding language and voice capabilities makes AI more accessible, turning it into an inclusive tool for farmers, patients and other marginalised users. [264-267][154-156][168-170]
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions on AI in DPI emphasise multimodality and linguistic diversity as inclusion levers, and recent studies on language-specific glossaries underline the need for voice-enabled AI for underserved communities [S38][S42][S44][S57].
Avoiding vendor lock‑in through open‑source, multi‑model, modular platforms is essential for sustainable AI diffusion.
Speakers: Speaker 2, Beatriz Vasconcellos, Saurabh Garg
Promoting multi‑model, open‑source approaches and sharing know‑how prevents concentration of Western LLMs and supports a competitive AI ecosystem (Speaker 2)
Reliance on external vendors risks lock‑in and hampers domestic capability building; governments should nurture home‑grown AI talent and retain strategic control (Beatriz Vasconcellos)
A platform named METRI has been suggested to democratise AI resources on a voluntary, non‑commitment basis (Saurabh Garg)
The participants converge on the need for open, modular solutions (whether through multi-model strategies, centralized procurement reforms, or the METRI platform) to keep AI ecosystems open and locally controllable. [231-236][222-230][23-28]
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses warn against concentration of Western LLMs and promote open-source ecosystems to preserve sovereignty and competition, as reflected in UN and AU deliberations on open-source AI [S41][S47][S48][S49][S43].
Foundational AI resources (compute, data, talent, models) are horizontal enablers that must be addressed to realise the 100 diffusion pathways by 2030.
Speakers: Saurabh Garg, Speaker 2, Speaker 3
AI resources – compute, data sets, talent and models – are foundational for diffusion (Saurabh Garg)
The “100 AI diffusion pathways by 2030” concept stresses horizontal enablers (language, compute, talent) to move AI from invention to impact (Speaker 2)
The AI use‑case adoption framework links sectoral impact with horizontal unlocks (data, compute, multilingual capability) to guide scaling (Speaker 3)
All three highlight that scaling AI requires addressing the same set of cross-cutting resources (computing power, quality data, skilled people and efficient models) across sectors, forming the backbone of the 100-pathway agenda. [14-15][53-54][38-42][240-255]
POLICY CONTEXT (KNOWLEDGE BASE)
The notion of AI as critical infrastructure, requiring interoperable compute and data layers, is articulated in DPI literature and aligns with calls for capacity-building in emerging economies [S39][S51][S55].
Similar Viewpoints
Both see AI as a layer on top of existing digital public infrastructure, requiring seamless integration and trust. [14-15][152-158]
Speakers: Saurabh Garg, Speaker 3
AI should be invisible, built on public rails that make services portable across countries (Speaker 3)
AI must become a trusted, interoperable, and shareable public infrastructure, similar to Aadhaar or UPI (Saurabh Garg)
Both advocate for standardised, ready‑to‑use platforms backed by operational support to reduce duplication and speed up scaling. [211-216][179-190]
Speakers: Beatriz Vasconcellos, Janet Zhou
Brazil’s centralized procurement and shared‑service model streamlines AI deployment (Beatriz Vasconcellos)
MOSIP’s open‑source, production‑ready reference implementation with operational support accelerates adoption (Janet Zhou)
Both warn against dependence on single external vendors and call for capacity‑building and open solutions. [231-236][222-230]
Speakers: Speaker 2, Beatriz Vasconcellos
Vendor lock‑in must be avoided; promote multi‑model, open‑source solutions (Speaker 2)
Reliance on external vendors risks lock‑in; need to build domestic capability (Beatriz Vasconcellos)
Unexpected Consensus
Use of open‑source, production‑ready platforms (MOSIP in ID space and Brazil’s shared AI service model) as a means to accelerate AI diffusion across sectors.
Speakers: Janet Zhou, Beatriz Vasconcellos
Open‑source ID platforms like MOSIP illustrate how standardised, production‑ready components and operational support accelerate AI adoption (Janet Zhou)
Centralized procurement and shared‑service model streamlines AI deployment across ministries, reducing duplication and accelerating scale‑up (Beatriz Vasconcellos)
Although one example concerns digital identity and the other AI service procurement, both converge on the principle that open, ready-to-use, centrally managed platforms are critical for rapid, scalable diffusion, an alignment not explicitly anticipated given their different policy domains. [179-190][211-216]
Linking multilingual/voice AI (Speaker 2) with the notion of locally relevant, AI‑ready data (Saurabh Garg) as a joint pathway to inclusion.
Speakers: Speaker 2, Saurabh Garg
Multilingual and voice AI act as equalisers, lowering language barriers (Speaker 2)
Local relevance of AI depends on data that reflects linguistic and cultural contexts (Saurabh Garg)
The connection between language-focused inclusion and the technical requirement for locally relevant datasets was not overtly discussed, yet both speakers implicitly agree that language-specific data is essential for effective, inclusive AI diffusion. [264-267][17-19]
POLICY CONTEXT (KNOWLEDGE BASE)
Combining Garg’s AI-ready data framework with linguistic-diversity initiatives mirrors integrated policy recommendations that link data quality with multilingual service delivery [S35][S42].
Overall Assessment

The panel demonstrates strong convergence around four core themes: (1) framing AI as a Digital Public Infrastructure with shared, trustworthy rails; (2) establishing AI‑ready data through standards, metadata and interoperability; (3) moving beyond pilot projects via early government involvement and shared service models; (4) ensuring inclusivity through multilingual/voice capabilities while avoiding vendor lock‑in by promoting open‑source, modular solutions. Horizontal enablers—compute, talent and models—are repeatedly identified as prerequisites for the 100‑pathway ambition.

High consensus – most speakers echo each other’s positions, indicating a shared understanding that scaling AI responsibly requires DPI‑style governance, data standards, institutional coordination and open, inclusive technology stacks. This broad agreement suggests that future policy initiatives can build on these common foundations to design coordinated diffusion strategies.

Differences
Different Viewpoints
Approach to building AI as a public infrastructure – government‑led DPI with standardized, trusted, interoperable components versus a multi‑stakeholder, open‑source, multi‑model ecosystem to avoid concentration of Western LLMs.
Speakers: Saurabh Garg, Speaker 2, Janet Zhou
AI must become a trusted, interoperable, and shareable public infrastructure, similar to Aadhaar or UPI (Saurabh Garg)
Promoting multi‑model, open‑source approaches and sharing know‑how prevents concentration of Western LLMs and supports a competitive, diverse AI ecosystem (Speaker 2)
Open‑source ID platforms like MOSIP illustrate how standardised, production‑ready components and operational support accelerate AI adoption (Janet Zhou)
Saurabh Garg frames AI as a Digital Public Infrastructure that requires trust, interoperability and government-driven standards [14-15]. Speaker 2 argues for a decentralized, open-source, multi-model approach to keep AI from being dominated by a few Western providers [231-236]. Janet Zhou points to MOSIP as an open-source, vendor-free platform that provides standardized building blocks and operational support for adoption [179-190]. The three positions differ on who should lead and how the foundational AI infrastructure should be created and governed.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates in the AU open-source AI forum and EuroDIG highlight tension between state-centric DPI models and multi-stakeholder open-source ecosystems aimed at preventing Western model dominance [S41][S43][S47][S48][S49].
Method for scaling AI solutions – centralized, government‑run procurement and shared services versus modular, open‑source rails and voluntary, non‑committal development.
Speakers: Beatriz Vasconcellos, Speaker 2, Saurabh Garg
Brazil’s centralized procurement and shared‑service model streamlines AI deployment across ministries, reducing duplication and accelerating scale‑up (Beatriz Vasconcellos)
Promoting multi‑model, open‑source approaches and sharing know‑how prevents concentration of Western LLMs and supports a competitive, diverse AI ecosystem (Speaker 2)
AI‑ready data must be discoverable, trustworthy, interoperable, and usable, with proper metadata, quality assessment, unique identifiers, and standards (Saurabh Garg)
Beatriz describes a top-down, centralized procurement mechanism that lets ministries obtain AI services through a single, fast process [211-216]. Speaker 2 advocates for a bottom-up, open-source, multi-model strategy that avoids vendor lock-in and relies on shared rails [231-236]. Saurabh Garg focuses on data readiness and standards as the basis for diffusion [15-16]. The disagreement lies in whether scaling should be driven by centralized government procurement or by decentralized, open-source, modular development.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on AI scaling contrast top-down procurement strategies with modular open-source pathways, noting divergent views on effectiveness (see Smart Regulation analysis) [S43][S54].
Reliance on external resources (e.g., G7 AI Hub) versus building domestic capability and avoiding vendor lock‑in.
Speakers: Speaker 3, Beatriz Vasconcellos
The G7 AI Hub was created to address constraints on foundational AI resources in the Global South by unlocking additional resources from friendly G7 countries (Speaker 3)
Reliance on external vendors risks lock‑in and hampers domestic capability building; governments should nurture home‑grown AI talent and retain strategic control (Beatriz Vasconcellos)
Speaker 3 outlines the G7 AI Hub as a mechanism to bring in compute, data and talent from external partners to support AI pathways in Africa, Latin America and Asia [61-66]. Beatriz warns that over-reliance on large external vendors can prevent the development of national AI capabilities and lead to strategic lock-in, advocating for building internal expertise instead [222-230]. The tension is between leveraging external assistance versus prioritizing self-reliance.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of digital divides and regional AI strategies stress the importance of domestic capacity building over dependence on external hubs, especially for developing economies [S41][S57][S49].
Unexpected Differences
Perceived ease of creating diffusion pathways versus acknowledged operational and institutional challenges.
Speakers: Speaker 2, Beatriz Vasconcellos
It sounds easy: “go and build it” – diffusion pathways can be created without much friction (Speaker 2)
There are significant operational challenges, vendor lock‑in risks, and the need for coordinated procurement and capacity building (Beatriz Vasconcellos)
Speaker 2 suggests that building diffusion pathways is straightforward and mainly a matter of implementation [197-199], whereas Beatriz highlights concrete obstacles such as vendor lock-in, the need for shared services, and capacity constraints [204-207][222-230]. The contrast between an apparently simple rollout and the complex realities on the ground was not anticipated given the overall collaborative tone of the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent workshops on diffusion pathways acknowledge implementation hurdles and highlight the gap between aspirational roadmaps and on-the-ground institutional capacity [S54][S55].
Overall Assessment

The panel shows broad consensus on the need to scale AI from pilots to production and to avoid vendor lock‑in, but diverges on the preferred architecture and governance model—government‑led DPI with strict standards versus open‑source, multi‑stakeholder rails. Additional tension exists around the role of external assistance (G7 AI Hub) versus building domestic capacity. These disagreements reflect differing national experiences and strategic priorities, suggesting that a one‑size‑fits‑all roadmap may be difficult to achieve without flexible, context‑specific pathways.

Moderate – while all participants share the overarching goal of AI diffusion, they propose distinct routes (centralized government standards, open‑source modularity, external resource hubs). The implications are that policy coordination will need to accommodate multiple models and negotiate trade‑offs between standardisation, sovereignty, and openness.

Partial Agreements
All speakers agree that AI must move beyond pilots to production at scale, but differ on the primary lever: Janet stresses early government co‑design and shared infrastructure; Beatriz promotes centralized procurement; Saurabh focuses on data readiness and standards; Speaker 2 highlights horizontal enablers and a target‑driven pathway framework. The common goal is scaling AI, yet the routes proposed vary. [76-82][211-216][15-16][53-54]
Speakers: Janet Zhou, Beatriz Vasconcellos, Saurabh Garg, Speaker 2
Institutional involvement early in design, coupled with inclusive, trustworthy public infrastructure, is essential for moving pilots to production (Janet Zhou)
Brazil’s centralized procurement and shared‑service model streamlines AI deployment across ministries, reducing duplication and accelerating scale‑up (Beatriz Vasconcellos)
AI‑ready data must be discoverable, trustworthy, interoperable, and usable, with proper metadata, quality assessment, unique identifiers, and standards (Saurabh Garg)
The “100 AI diffusion pathways by 2030” concept stresses horizontal enablers (language, compute, talent) to move AI from invention to impact (Speaker 2)
Both agree that vendor lock‑in is a problem and that AI ecosystems should remain open and diverse. Beatriz proposes building domestic capacity and limiting external vendor dependence, while Speaker 2 calls for multi‑model, open‑source solutions and knowledge sharing to achieve the same end. [222-230][231-236]
Speakers: Beatriz Vasconcellos, Speaker 2
Reliance on external vendors risks lock‑in and hampers domestic capability building; governments should nurture home‑grown AI talent and retain strategic control (Beatriz Vasconcellos)
Promoting multi‑model, open‑source approaches and sharing know‑how prevents concentration of Western LLMs and supports a competitive, diverse AI ecosystem (Speaker 2)
Takeaways
Key takeaways
AI should be treated as Digital Public Infrastructure (DPI), requiring trust, interoperability, and shareability similar to Aadhaar and UPI.
Four foundational AI resources—compute, data sets, talent, and models—must be democratized; data readiness is critical and includes discoverability, trustworthiness, interoperability, and usability.
The “100 AI diffusion pathways by 2030” initiative emphasizes horizontal enablers (language, compute, talent) and sector‑specific use‑cases to move AI from invention to impact.
Early involvement of governments and institutions in design, along with shared public rails (identity, payments, data exchange), is essential to overcome “pilotitis” and achieve production‑scale deployment.
Open‑source, multi‑model approaches (e.g., METRI, MOSIP) are preferred to avoid vendor lock‑in and to build domestic AI capability.
Multilingual and voice AI are seen as key equalizers that can broaden inclusion for underserved populations.
Co‑architecting pathways through public‑private partnerships and multi‑stakeholder collaboration (G7 AI Hub, XTEP, Gates Foundation) is necessary for sustainable diffusion.
Resolutions and action items
Proposal to develop the METRI platform (Multi‑stakeholder AI for Resilient and Trustworthy Infrastructure) as a voluntary, modular framework for sharing compute, data, models, and talent.
Commitment to continue the G7 AI Hub effort to unlock compute, data, and talent resources for the Global South.
Adoption of the AI use‑case adoption framework (linking sectoral impact with horizontal unlocks) to guide scaling of pilots.
Brazil to proceed with centralized procurement and shared‑service model for AI applications across ministries.
Encourage nations to adopt open‑source ID platforms like MOSIP and to provide operational support and training for implementation.
Promote multi‑model, open‑source AI solutions to mitigate vendor lock‑in and foster local capability building.
Unresolved issues
Specific governance mechanisms and standards for ensuring AI data trustworthiness and privacy across jurisdictions.
Detailed financing and business models for building compute infrastructure (e.g., data centers) in the Global South.
Concrete timelines, metrics, and accountability structures for achieving the “100 AI diffusion pathways” target by 2030.
How to create and operationalize safety guardrails and playbooks for voice/health AI interactions.
Mechanisms for cross‑border sharing of AI use‑cases while respecting data sovereignty.
Extent of required regulatory reforms to support AI DPI and prevent vendor lock‑in.
Suggested compromises
Adopt a modular, voluntary approach for METRI rather than a mandatory, one‑size‑fits‑all solution.
Use centralized procurement and shared services to reduce duplication while allowing ministries to retain flexibility in implementation.
Combine public‑sector standards with private‑sector innovation, encouraging co‑design of pathways rather than imposing top‑down solutions.
Promote open‑source, multi‑model ecosystems to balance the need for advanced capabilities with the desire to avoid dependence on single vendors.
Thought Provoking Comments
AI is perhaps something like a solution in search of a problem… we need to ensure it becomes a trusted, interoperable and shareable Digital Public Infrastructure (DPI) like Aadhaar or UPI.
Frames AI not just as a technology but as a public infrastructure that must meet standards of trust, interoperability and scalability, shifting the conversation from isolated use‑cases to systemic foundations.
Set the agenda for the rest of the panel, prompting others to discuss foundational resources (data, compute, talent, models) and how to democratize them. It led directly to the detailed discussion of data‑readiness criteria and the METRI platform.
Speaker: Saurabh Garg
We read about AI diffusion – the invention happened in the West, but the impact must happen in the Global South. That’s why we aim for 100 diffusion pathways by 2030.
Introduces the central metaphor of ‘diffusion pathways’ and explicitly positions the Global South as the engine of impact, reframing the problem from technology creation to equitable adoption.
Triggered the round‑table on how different regions (Kenya, Italy, India) can contribute, leading to Kizom’s explanation of G7 AI Hub and the subsequent focus on cross‑border collaboration.
Speaker: Speaker 2 (Shalini)
Data is the raw material for AI models. For it to be AI‑ready it must be discoverable, trustworthy, interoperable and usable across systems while preserving privacy.
Provides a concrete, four‑point framework for data readiness, moving the discussion from abstract resource needs to actionable criteria.
Guided later speakers (Kizom, Janet, Beatriz) to reference data interoperability, data ecosystems, and standards as essential steps, deepening the technical layer of the conversation.
Speaker: Saurabh Garg
The problem of ‘pilotitis’ predates AI. Scaled impact comes when governments are at the design table from the start, making infrastructure trustworthy and inclusive.
Links the recurring issue of pilots stuck in limbo to a systemic solution—early government involvement and institutional trust—offering a clear remedy rather than just diagnosing the problem.
Shifted the tone from lamenting pilots to proposing concrete governance mechanisms, influencing Beatriz’s description of Brazil’s centralized procurement and Janet’s later MOSIP analogy.
Speaker: Janet Zhou
In Brazil we are building a ‘one government for each person’ vision: shared data platforms, thematic data ecosystems (early childhood, environment), and centralized chatbot services built on a common digital ID.
Illustrates a real‑world, nation‑scale implementation of the DPI concepts discussed earlier, showing how data interoperability and shared services can move from pilot to production.
Provided a concrete case study that other panelists referenced when talking about rails, modularity, and the need for centralized services, reinforcing the DPI narrative.
Speaker: Beatriz Vasconcellos
Digital public infrastructure should be invisible. AI should sit on existing rails (UPI, DigiLocker, identity) so that users never notice the AI layer, and we must create new rails (multilingual voice stacks) that converge globally.
Reframes the goal of AI diffusion as seamless integration rather than a visible add‑on, emphasizing the importance of modular, interoperable rails and multilingual accessibility.
Prompted a discussion on the emergence of new rails (voice, language) and the need for convergence, leading to the mention of Zindi’s data‑science network and the broader conversation about cross‑border standards.
Speaker: Speaker 3 (Kizom)
The MOSIP analogy: even after you have a road, you need agreed signs, side‑of‑the‑road rules, operational support, and financing to make it usable for everyone.
Uses a familiar infrastructure metaphor to explain the layers of standards, capacity‑building, and financing needed beyond the technical platform, making the abstract concept tangible.
Reinforced the earlier point about rails and added a practical roadmap for implementation, influencing the later discussion on vendor lock‑in and capacity building.
Speaker: Janet Zhou
Vendor lock‑in is a major risk. Governments must build their own AI muscles instead of repeatedly outsourcing to large vendors, otherwise strategic capabilities are lost.
Highlights a systemic challenge that could undermine the whole diffusion effort, shifting the conversation toward sustainable capability development and procurement reform.
Triggered a response from Speaker 2 about multi‑model approaches and the need for choice, and reinforced the panel’s consensus on building domestic capacity.
Speaker: Beatriz Vasconcellos
The AI adoption framework (use‑case adoption framework) shows that vertical impact (education, health, climate) depends on horizontal unlocks (language data, compute, interoperable data), and we must co‑design pathways that fuse both.
Synthesizes earlier points into a structured framework, providing a roadmap for moving from pilots to scale and linking sectoral needs with foundational resources.
Served as a concluding turning point, aligning all previous contributions into a unified strategy and giving the audience a concrete tool to think about diffusion pathways.
Speaker: Speaker 3 (Kizom)
Overall Assessment

The discussion evolved from a high‑level framing of AI as a nascent technology to a nuanced blueprint for turning AI into a trusted, interoperable public infrastructure. The most pivotal moments were Saurabh Garg’s articulation of AI as DPI and the data‑readiness framework, Janet Zhou’s ‘pilotitis’ diagnosis with a governance solution, and the concrete national examples (Brazil’s shared data ecosystem and the MOSIP analogy). These comments redirected the conversation from abstract possibilities to actionable standards, cross‑border collaboration, and capacity‑building, ultimately converging on a unified AI adoption framework that ties vertical sectoral impact to horizontal foundational resources. The panel’s flow was repeatedly reshaped by these insights, moving the tone from problem‑identification to solution‑design and setting a clear agenda for future diffusion pathways.

Follow-up Questions
How will the 100 AI diffusion pathways to 2030 pan out for the Kenya‑Italy‑India partnership and what does it mean for each partner?
Understanding the concrete implementation steps and expected outcomes for this tripartite collaboration is essential to gauge feasibility and scalability.
Speaker: Speaker 2 (Shalini)
How can AI move from pilot projects to production‑scale deployment across multiple geographies? Is funding the only barrier or are additional diffusion pathways needed?
Identifying the systemic factors beyond financing that prevent pilots from scaling is crucial for achieving widespread impact.
Speaker: Speaker 1
How can trust be established in AI advisory outputs versus the institutions delivering them? What governance models enable institutions to adopt and trust AI advice?
Trust is a prerequisite for adoption; clarifying the relationship between institutional credibility and AI outputs will inform policy and design.
Speaker: Speaker 1
What is the current state of AI adoption in Brazil, are pilots stuck in the pilot‑to‑production gap, and how can that gap be bridged?
Brazil’s experience can provide lessons for other regions; understanding barriers to scaling will help design effective interventions.
Speaker: Speaker 1
Are there ‘rails’ analogous to DPI that can guide cross‑border AI use‑cases? What playbooks or pathways can different countries adopt to leverage shared AI resources?
Defining reusable frameworks and standards can accelerate diffusion and ensure interoperability across nations.
Speaker: Speaker 1
What is the single hardest challenge in operationalising AI diffusion pathways, considering human factors and implementation realities?
Pinpointing the top obstacle helps prioritize interventions and allocate resources efficiently.
Speaker: Speaker 2
Can you describe the ‘use‑case adoption framework’ and how it can act as a friction‑remover for AI diffusion?
A clear articulation of this framework could provide a practical tool for stakeholders to move from pilot to scale.
Speaker: Speaker 2
What mechanisms are needed to make datasets ‘AI‑ready’ (discoverable, trustworthy, interoperable, usable) and how can standards be developed and adopted internationally?
Standardizing data readiness is foundational for trustworthy AI and requires coordinated research and policy work.
Speaker: Saurabh Garg
What is the detailed structure and governance model of the proposed METRI platform, and how will it operationalise multi‑stakeholder AI resources?
Clarifying METRI’s design is necessary to assess its potential as a modular, voluntary infrastructure for AI democratization.
Speaker: Saurabh Garg
What is the business case for building data‑center and GPU capacity on the African continent, and how can data silos be broken despite abundant local data?
Understanding economic incentives and technical solutions for local compute infrastructure is key to reducing reliance on external providers.
Speaker: Speaker 3 (Kizom)
How can public‑private partnerships and digital public goods be structured to ensure sustainable, inclusive AI services without vendor lock‑in?
Research into governance and procurement models can help nations retain strategic AI capabilities while leveraging external expertise.
Speaker: Beatriz Vasconcellos
What are effective strategies for building national AI talent and capability rather than over‑relying on external vendors?
Developing domestic expertise is critical for long‑term sovereignty and resilience of AI deployments.
Speaker: Beatriz Vasconcellos
How can multilingual voice AI be developed as a public good and integrated into existing DPI to promote inclusion and equity?
Voice interfaces can bridge language gaps; research is needed on scalable, open‑source voice models and their deployment.
Speaker: Speaker 3 (Kizom)
What metrics and evaluation frameworks should be used to monitor progress of the ‘100 AI diffusion pathways’ initiative toward 2030?
Measuring impact is essential to validate the approach, adjust strategies, and demonstrate value to stakeholders.
Speaker: Speaker 2 (Shalini)
What operational support, training, and financing mechanisms are required post‑rail construction to ensure effective AI adoption, similar to MOSIP’s model?
Even with technical standards, practical implementation support is needed; studying MOSIP’s experience can inform AI rollout.
Speaker: Janet Zhou

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.