The Foundation of AI: Democratizing Compute and Data Infrastructure
20 Feb 2026 17:00h - 18:00h
Summary
The panel opened by highlighting that AI democratization is hampered by limited access to compute and skewed data, with over 80% of global datasets concentrated in high-income countries and less than 2% in sub-Saharan Africa, creating a stark gap that must be addressed now [5-11].
Panelists identified different primary obstacles: the sheer breadth of undocumented African languages makes data collection a massive task [32-33]; a lack of open, usable models and of AI literacy is seen as more critical than raw infrastructure, since hardware can improve over time while model access remains essential [34-37][38-40]; and the concentration of digitized data in the developed world further entrenches inequities, a point reinforced by calls for open-weight, open-source models and federated learning that let regions contribute without relinquishing data ownership [41-44].
Several solutions were proposed. Digital public infrastructure (DPI) must be trusted, interoperable, reusable and give people agency, with a federated rather than centralized design to preserve data sovereignty while enabling shared AI development [101-108][117-122]. Community-driven initiatives such as Masakhane demonstrate that participatory data collection and gender-responsive projects build trust and ownership, while talent development and open-model ecosystems are deemed vital for sustainable innovation [158-166][173-180][300-304].
Regarding investment, Sanjay suggested directing funds toward building DPI systems that give citizens control of their data, and Sangbu emphasized creating concrete use cases in agriculture, health, education and government services to inspire low-income users and change mindsets [289-298]. Saurabh added that strengthening AI capability and developing domain-specific niche models can reduce compute demands [300-303]. Yann warned that today’s compute-heavy LLM training is a temporary phase and that the next AI revolution will focus on world-models that understand real-world sensory data, a shift that will require academic research support and new funding mechanisms [222-236][267-274].
Overall, the discussion concluded that democratizing AI will require coordinated investment in data sovereignty, open models, community participation, talent, and targeted use cases, with a clear signal that progress depends on both technical breakthroughs and inclusive governance structures.
Keypoints
Major discussion points
– Data and compute inequities hinder AI democratization.
The panel highlighted that most global datasets are concentrated in high-income countries, with Africa receiving less than 2% of the data, and that access to computing power and large-scale data remains a major bottleneck for low-income regions [5][7-10][38].
– Open-source models, federated learning, and new architectures can lower barriers.
Yann LeCun argued that releasing top-performing open-weight models and using federated learning to keep data local are essential steps, while also noting that the current compute-intensive LLM era is temporary and that research on smaller, smarter models is already underway [41-44][65-71][117-119].
– Digital public infrastructure (DPI) is key to trustworthy, sovereign AI ecosystems.
Saurabh Garg described DPI as needing trust, interoperability, and agency for users; Sanjay Jain explained how consent-based data layers and open-source ID platforms (e.g., MOSIP) enable countries to build their own AI-ready systems without creating new dependencies [101-108][128-138][205-214].
– Community-driven language initiatives illustrate a participatory path forward.
Chenai Chair emphasized the sheer number of African languages and the need to document them, citing Masakhane's grassroots, multilingual data collection, gender-responsive projects, and local ownership models as examples of building trusted data infrastructure [32-33][158-169][174-179].
– A shift from “knowledge-storage” LLMs to world-model, intelligence-focused AI will change compute demands.
Yann LeCun explained that today's massive LLMs are a temporary solution for storing facts, whereas future AI will learn from multimodal, real-world data (world models) and become more intelligent with potentially lower training compute, though inference may remain costly [65-69][222-236][244-252].
Overall purpose / goal
The discussion aimed to diagnose the structural barriers that prevent low- and middle-income countries from both consuming and building AI, and to explore concrete strategies, ranging from open models and federated learning to DPI, community-led data collection, and talent development, that could democratize AI compute, data, and expertise worldwide.
Overall tone
The conversation began with a concerned and problem-focused tone, emphasizing data skew and resource gaps. As participants offered solutions, the tone shifted to optimistic and collaborative, highlighting ongoing initiatives, open-source collaborations, and future technological breakthroughs. Toward the end, the tone became pragmatic and forward-looking, balancing enthusiasm for new paradigms with realistic acknowledgment of funding, policy, and implementation challenges.
Speakers
– Sanjay Jain – Leads the Digital Public Infrastructure team at the Gates Foundation; focuses on DPI, data empowerment, and digital identity systems.
– Arun Sharma – Works with the World Bank; asked the panel about the lag between virtual AI recommendations and physical delivery. [S3]
– Sangbu Kim – World Bank representative discussing democratizing AI, indicators of moving from AI consumption to building.
– Chenai Chair – Director of the Masakhane African Languages Hub, which grew out of Masakhane, a grassroots community for African-language NLP. [S6]
– Saurabh Garg – Secretary in the Ministry of Statistics and Programme Implementation, Government of India. [S8]
– Faith Waidaka – Panel moderator; builds electrical and mechanical infrastructure in African data centers and serves as Board Chair of the Africa Data Center Association. [S10]
– Yann LeCun – Executive Chairman of AMI Labs; former Chief AI Scientist at Meta; professor at New York University. [S12]
– Audience – General audience members; includes participants such as Daniel Dobos (particle physicist, CERN; research director, Swisscom), Yuv (individual from Senegal), Professor Charu (Indian Institute of Public Administration), and Dr. Nazar. [S15][S16][S17]
Additional speakers:
– Daniel Dobos – Particle physicist from CERN and research director for Swisscom; asked about federated learning coordination. [S15]
– Yuv – Audience member from Senegal (role not specified). [S15]
– Professor Charu – Audience member, professor at the Indian Institute of Public Administration. [S16]
– Dr. Nazar – Audience member, participant in collaborative session on cyber threats. [S17]
– Jan – A name appearing in the raw transcript ("Jan mentioned about training data sets"); almost certainly a mis-transcription of Yann LeCun rather than a separate speaker.
1. Opening & framing (Sangbu Kim) – Sangbu Kim opened the session by outlining five pillars for responsible AI – access to energy, compute power, data, talent, and a credible policy framework – and highlighted the most acute short-term constraints: limited compute capacity and a severe skew of datasets toward high-income nations, where over 80% of global data resides while sub-Saharan Africa holds less than 2% [5-11][38-40]. He framed the discussion as a timely effort to "democratize the computing power access" [11-13].
2. Panel introductions – Faith Waidaka introduced the panel: herself (infrastructure specialist); Yann LeCun, executive chairman of AMI Labs [14-27]; Sanjay Jain, lead for digital public infrastructure at the Gates Foundation; Saurabh Garg, secretary in the Ministry of Statistics and Programme Implementation, Government of India [34-37]; and Chenai Chair, director of the Masakhane African Languages Hub [32-33].
3. Identifying the biggest barrier to AI-compute democratisation
– Chenai Chair emphasized the breadth of African linguistic diversity (over 2,000 documented languages) and the massive effort required to document them [32-33].
– Saurabh Garg argued that open-access models and AI literacy are more critical than raw hardware, because infrastructure can be acquired over time but model availability is a prerequisite for impact [34-37].
– Sangbu Kim pointed to the concentration of digitised data in the developed world as a structural inequity [38-40].
– Sanjay Jain added that AI will only scale when “data for everyone is available” and personal data can be accessed securely for personalised services [39-40].
– Yann LeCun echoed these points, insisting that top-performing open-weight, open-source models are a necessary condition for equity and proposing federated learning as a way for regions to contribute data without surrendering ownership [41-48].
4. World Bank indicator of AI-building capacity – When asked how the World Bank measures a country’s shift from AI consumer to AI builder, Sangbu responded that the key indicator is the ability of a nation to “fully manage and harness the data set locally” – i.e., local data ownership and control – because demand for compute only materialises when clear, locally relevant applications exist [45-60][55-59].
5. Compute intensity: temporary vs. structural – Yann LeCun clarified that the current compute-intensive era of large language model (LLM) training is temporary. He described LLMs as "knowledge-storage systems" that require massive memory, but argued that the next AI revolution will involve smaller, smarter models that reason at inference time, shifting the compute burden from training to inference [65-71][72-78][80-88][89-92]. He noted industry efforts in model distillation, mixture-of-experts, and other efficiency techniques, while stressing that breakthroughs in hardware beyond incremental CMOS improvements remain years away [85-92]. He later introduced the concept of "world-model AI" – systems that learn from multimodal sensory data and perform reasoning rather than rote memorisation [220-225], and compared the data-size requirements of such models (≈10¹⁴ bytes) to the visual experience of a child (≈10⁹ bytes) [230-235].
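The training-to-inference shift described here can be made concrete with the widely used approximations train_FLOPs ≈ 6·N·D and inference_FLOPs ≈ 2·N per generated token. The model size and token counts below are assumed for illustration, not figures from the panel:

```python
# Back-of-envelope only: assumed figures, standard FLOP approximations.
N = 4e11        # parameters (assumed: a ~400B-parameter model)
D = 1.5e13      # training tokens (assumed)

train_flops = 6 * N * D            # rough cost of one full training run
infer_flops_per_token = 2 * N      # rough cost of generating one token

# A reasoning-centric model that "thinks" for 10,000 tokens per query:
tokens_per_query = 1e4
queries_to_match_training = train_flops / (infer_flops_per_token * tokens_per_query)

print(f"{train_flops:.1e}")                # 3.6e+25
print(f"{queries_to_match_training:.1e}")  # 4.5e+09
```

Under these assumptions, a few billion reasoning-heavy queries already rival the one-off training cost, which is why inference, not training, may dominate compute demand for such systems.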
6. Small-AI playbook (Sangbu Kim) – Sangbu outlined a user-centred approach for scaling small AI: develop concrete, high-impact use cases that inspire low-income users and change mind-sets, rather than merely supplying raw compute [190-197]. (Sector examples such as agriculture, health, education, and government services are discussed later in the funding allocation section.)
7. Digital public infrastructure (DPI) proposals
– Saurabh Garg described DPI as needing trust, interoperability, reusability, and citizen agency, and presented the METRI “Friendship” platform – a modular, multi-stakeholder architecture that can plug in compute, data, models and talent while preserving local governance [101-108][111-115][117-122].
– Sanjay Jain illustrated how consent-based DPI layers (e.g., MOSIP for digital ID, OpenG2P for payments) enable countries to build AI-ready systems without creating new dependencies, citing India’s Aadhaar and Ethiopia’s FIDA as examples [128-138][205-214].
8. Community-driven data infrastructure – Chenai Chair detailed the grassroots Masakhane network, which has documented African languages through participatory workshops, won a Wikimedia award [166-169], and is now launching "Project Echo", a gender-responsive initiative that couples language data with AI tools for women's economic empowerment and health [174-179][180-189]. She argued that trust is earned when communities own the data lifecycle and when local content creation is supported, echoing the broader call for federated, non-extractive architectures [160-169][173-176].
9. Funding allocation of a hypothetical $500 M – Panelists offered divergent priorities:
– Sanjay Jain advocated directing the money to global DPI deployment, giving citizens control over their digital records and thereby “empowering them” to participate in the AI revolution [289-292].
– Sangbu Kim suggested investing in sectoral pilots (agriculture, health, education, government services) that demonstrate value and inspire users [291-298].
– Saurabh Garg urged a focus on capability development and domain-specific niche models that reduce infrastructure demands [300-303].
– Chenai Chair called for funding open-model ecosystems and talent pipelines, citing the "Crane AI" offline-first stack that emerged from Masakhane [304-307].
– Yann LeCun emphasized the need to support academic research on non-LLM paradigms (e.g., world-model approaches) because industry is currently locked into a monoculture of LLM development [267-274], and highlighted practical examples such as smart-glasses for Indian farmers that use multilingual assistants [279-283].
10. Future outlook & AGI discussion – Yann LeCun later addressed audience questions about AGI, noting that the notion of a single "AGI event" is misleading and calling for incremental progress toward more capable, multimodal systems [345-352]. He reiterated that hardware breakthroughs (e.g., carbon-nanotube or photonic computing) are necessary but lack a clear horizon [85-92][89-92].
11. Audience questions & unanswered gaps – Arun Sharma asked about the lag between virtual AI recommendations and physical delivery of inputs (seeds, fertilizer); the panel did not provide a concrete answer. Additional gaps included: (a) lack of defined governance and technical standards for federated-learning collaborations across jurisdictions; (b) absence of metrics beyond “local data ownership” to signal a country’s transition to AI building; (c) no clear timeline for the required hardware breakthroughs.
12. Key take-aways & action items
– Open-weight, open-source models combined with federated learning provide a technical pathway to democratise AI without compromising data sovereignty.
– Trusted, interoperable, agency-granting DPI is a prerequisite for local AI ecosystems.
– The present compute-heavy LLM era is expected to give way to smaller, reasoning-centric models and world-model AI, shifting compute burden toward inference [65-71][220-225][230-235].
– A holistic investment strategy should simultaneously fund high-impact use cases, domain-specific niche models, DPI deployment, open-model development, and talent pipelines [291-298][300-307][111-115].
– Community-led, gender-responsive projects such as Masakhane’s initiatives are essential for building trust and avoiding extractive dynamics [166-169][174-179].
Proposed action items
1. Develop the METRI “Friendship” platform as a modular global AI infrastructure [101-108][111-115].
2. Scale open-source ID platforms (e.g., MOSIP) and other DPI tools worldwide [128-138][205-214].
3. Allocate funds to both sectoral pilots and open-model/talent ecosystems [291-298][304-307].
4. Establish international coordination bodies (UNESCO, AI Alliance, SEM) to manage federated-learning collaborations [117-122][345-352].
5. Adopt participatory, gender-responsive design principles for community data infrastructures [160-169][173-176].
In conclusion, the panel agreed that democratising AI will require coordinated investment in open models, federated DPI, community-owned data, and talent development, while recognising divergent views on compute priorities and funding allocations. The discussion moved from diagnosing entrenched inequities to proposing concrete, multi-layered solutions that blend technical innovation, policy frameworks and participatory governance, outlining a roadmap for inclusive AI advancement over the next one, five and ten years [308-315].
access and energy. Number two, computing power. Number three, data access. Number four, talent building. And number five, credible, responsible AI framework and policy. Among those five, everything is very important, but we are currently struggling with some lack of access to computing power and data sets. So that's why today's discussion is very important. Unfortunately, more than 80% of our data sets in the world are very heavily skewed to the developed world, high-income countries. Less than 2% in Africa, sub-Saharan Africa. If we just carve out South Africa, less than zero-something percent, only for the other sub-Saharan Africa. So we see the big gap in this space. So this is a pretty important time to talk about how we can really democratize the computing power access in this space.
So thank you for joining us, and then I look forward to really good discussion with all of our panels. Thank you.
Thank you, Sangbu, for that opening. So I will start by asking the panelists to introduce themselves in a very short way, and I’ll start with myself. I’m Faith Waidaka. I build the infrastructure that makes AI possible. So I build the electrical, mechanical infrastructure in data centers in Africa, and I’m also the board chair of the Africa Data Center Association. So we’ll go this way. Yann, please tell us who you are.
So I'm Yann LeCun. I'm the executive chairman of AMI Labs, Advanced Machine Intelligence Labs, which is a new company I'm building, to build a next-generation AI system. I'm also still a professor at New York University. And just a month ago, I left my position as chief AI scientist of Meta after 12 years at Meta.
I’m Sanjay Jain. I lead the digital public infrastructure team at the Gates Foundation.
I’m Saurabh Garg. I’m secretary in the Ministry of Statistics and Program Implementation in the Government of India.
And I am Chenai Chair, the director of the Masakhane African Languages Hub, which emerged from a grassroots community called Masakhane, focusing on African language NLP.
Good. So, Chenai, and coming back this way to all my panelists, what is the single biggest barrier? And I can imagine that we're all coming from different segments from the introductions we just did. But what do we feel is the single biggest barrier today to democratizing AI compute? Chenai?
Thanks, Faith. So there are over 2,000 documented languages on the African continent. So our single biggest barrier is the breadth of work we actually have to do to document these languages to ensure they're well represented, and also focus on the communities that actually speak them.
I would say access to models, open models, and AI literacy to be able to utilize those models. And the reason I say that is perhaps infrastructure is something which might get acquired over time. And hopefully the… the requirement of the size of that infrastructure may also change. And the focus, we probably need to focus much more on the models.
I would say too much concentration of digitized data only for developed world.
I should also go on the data point because we believe that AI will scale effectively only when data for everyone is available. So when I can get a personalized service because my personal data is accessible through some protected means to a model, so then that will allow AI to reach everyone.
I'll just echo some of the things that were said earlier. Certainly, the availability of top-performing open models, open-weight but also open-source, would be a way to remove the barrier, or at least, if not a sufficient condition, at least a necessary condition. And the problem is that today there is no such thing; the open models are behind. But there is a way to get them to surpass the proprietary systems, and it's through data. So the access to data was mentioned. If various regions of the world collect or digitize their cultural data, whatever it is, and then contribute to training a global model that would constitute eventually a repository of all human knowledge, then those models would be much better quality than all the proprietary systems, because the proprietary systems would not have access to that data. And this can be done technically in a way in which regions don't need to actually communicate that data; they can keep ownership of that data and then contribute to training a global model by exchanging parameter vectors. I don't want to get into the weeds of technicalities there, but it's a form of federated learning, and I think this is a way to open up access to AI. And it's absolutely crucial for the future, because we're going to need a wide diversity of AI assistants, for the reason that there's a wide diversity of linguistic and cultural differences, value systems, political opinions and philosophies. And if our AI assistants come from a handful of companies on the west coast of the US or China, we're in big trouble. So we absolutely need this.
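[Editor's note: the parameter-exchange scheme described above is, at its core, federated averaging. A minimal illustrative sketch follows; the toy model, data, and function names are invented for illustration and are not any production federated-learning system.]

```python
# Minimal federated-averaging sketch (illustrative only): each region
# trains a tiny model locally and shares ONLY its parameter; the server
# averages the parameters, weighted by each region's data size.

def local_update(w, data, lr=0.05, epochs=50):
    """Local least-squares SGD for y ~ w*x; raw data never leaves here."""
    w = float(w)
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x   # gradient of 0.5*(w*x - y)^2
    return w

def federated_round(w_global, regions):
    """Server step: weighted average of the regions' parameter updates."""
    total = sum(len(d) for d in regions)
    return sum(len(d) * local_update(w_global, d) for d in regions) / total

# Two regions holding private data from the same relation y = 2x.
region_a = [(1.0, 2.0), (2.0, 4.0)]
region_b = [(3.0, 6.0), (0.5, 1.0), (1.5, 3.0)]

w = 0.0
for _ in range(10):                     # communication rounds
    w = federated_round(w, [region_a, region_b])
print(round(w, 3))                      # converges to ~2.0
```

The key property, matching the point made in the discussion: only the parameter `w` crosses region boundaries; the `(x, y)` pairs stay local, so each region retains ownership of its data while still improving the shared model.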
Okay, so we've had the challenges, and there are a wide range of them, from inclusion to compute to data sets. What we're going to discuss today is how do we overcome those barriers from the different perspectives and the different angles that we have on this team. So coming to you, Sangbu, from a World Bank perspective, what does it mean to democratize AI? And would you please give us one indicator that signals that a country is moving from consuming AI to actually building it?
From the World Bank point of view, democratizing data computing is very important. But let’s think about this. So many people very easily talk about building data centers physically and securing more GPUs and servers from the beginning. I agree that the fundamental infrastructure is very crucial and very important. But the more important thing is how can we use that computing power for what? So we need to really think about… what would be the best way which can create demand for computing power. That is more crucial part. So without having very clear application and some solutions, nobody can really run their own computing data center business in Africa. So it is very crucial part. So I would like to say we need to think differently from even though computing power is very important, how can we really create the data demand.
So in this regard, the clear indicator is how we can really fully manage the data in the local context. So one good thing, one good news, is that anyhow local data, local context can be fully owned, controlled, and managed by the local country and local people. That is very good news. Even though we see a lot of inequality in the computing infrastructure and resources, what cannot change, even in this AI era, is that people and the local country and local community can strongly hold their context and then hold their data set. So it is a really important signal and opportunity. So I would say measuring the full utilizing and harnessing of the data set in the local context will be the key indicator for this.
Okay. Yann, you spoke about compute a few minutes ago, open compute. And I would really like to know: is the concentration of frontier compute a temporary scaling phase or a structural feature of AI? And where do you see the biggest technical opportunity to reduce compute intensity? It's something that Sangbu as well touched on.
Okay, so first of all, I think the computing requirements for training modern AI systems are temporary. It's temporary because the type of AI systems that we build at the moment, LLMs, essentially are knowledge storage systems, right? They accumulate factual knowledge, and therefore they need enormous amounts of memory. The reason why the models are so big in terms of number of parameters, we're talking hundreds of billions of parameters, which makes them really expensive to train and to run, is the fact that they just accumulate knowledge so that it can be easily retrieved. But there's another way to be useful in terms of AI: it's not accumulating knowledge but actually being smart, and you can replace knowledge by intelligence. So current systems are not particularly intelligent, but they store knowledge. There is another revolution of AI coming, which actually my new company is built around, which intends to build systems that are smarter even if they don't necessarily accumulate as much knowledge. So those models will be smaller. Now the bad news with this is that perhaps at inference time they will be more expensive, because they'll reason more than current systems. So we're going to see maybe a shift in the requirements for training, but the requirements for inference, which is really where most of the computation goes, are still going to be quite significant. Now, to answer your second question: the incentives are there for the industry to reduce the power consumption of AI systems.
A lot of engineers working on AI in industry these days, even in academia, are actually focusing on how can I make this model smaller? How can I distill it in a smaller model? How can I use a mixture of experts so I have sort of a ladder of models that are more and more complex? So that to answer simple questions, I can use a simple model, et cetera. All of it is to optimize power consumption. Why? Because that’s where the money goes. That’s where you spend all the money when you operate an AI system. It goes into power and maintaining your hardware. So the incentives are there. So that’s the good news. You don’t need to have laws or regulations or anything.
They are working on it because they need to. The bad news is that it's progressing. It's progressing as fast as it can, and it's not fast enough. But we're not going to be able to make it faster unless we find some technological breakthrough at the fabrication level or the architecture or technology. There's a lot of mileage to be had in those things still. The power efficiency is actually making progress really quickly, much faster than Moore's Law, but it's still too slow. So I'm not expecting some big revolution in hardware design until we start building something else than CMOS transistors and silicon. That's not happening for another 10 or 20 years. 10 or 20 years? Well, I mean, there's going to be progress in the meantime.
It's not what I mean. But if you want a real breakthrough, like some completely new way of building computing systems, there's nothing on the visible horizon. There's nothing on the horizon that really will allow this, whether it's carbon nanotubes, spintronics, or whatever it is.
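[Editor's note: the "ladder of models" efficiency idea mentioned in this answer can be sketched as a confidence-thresholded cascade. The models and the confidence heuristic below are invented toys, meant only to show the routing logic, not any real system.]

```python
# Hypothetical model-cascade sketch: route each query to the cheapest
# model whose confidence clears a threshold, escalating only when needed.

def small_model(q):
    # Cheap model: confident only on short/simple queries (toy heuristic).
    conf = 0.9 if len(q.split()) <= 4 else 0.3
    return f"small:{q}", conf

def large_model(q):
    # Expensive fallback: assumed always confident for this sketch.
    return f"large:{q}", 0.99

def cascade(query, threshold=0.8):
    """Try models cheapest-first; return the first answer above threshold."""
    for model in (small_model, large_model):
        answer, confidence = model(query)
        if confidence >= threshold:
            return answer
    return answer  # last resort: keep the biggest model's answer

print(cascade("what time is it"))                                 # small model suffices
print(cascade("explain federated learning trade-offs in depth"))  # escalates to large
```

The economics match the point made here: most queries stop at the cheap rung, so average power per answer drops even though the large model still exists for hard cases.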
Okay, that's very interesting to think that the training models will become smaller, yet the inference might be the one that will take up the compute. Yet we're also looking at bringing inference to devices as close as possible to the people using it. So there's a bit of a balance to be done in that 10-year period. I think 10 years is a lot of time, considering what AI has shown us over the past decade. And I think in terms of research, we might see it sooner. Yeah, so Saurabh, you led the Aadhaar digital ID, and now in statistics. How do you see digital public infrastructure enabling AI innovation? And how can countries expand access to shared AI infrastructure without creating new dependencies or compromising data sovereignty?
Thank you. So I think two characteristics of digital public infrastructure which are key are to ensure that not only is there access, but also agency of the people. Most people would not like to be just consumers, but also to be co-creators. And I think that's the real issue going forward. For any system to be a DPI, I think there are a few essential characteristics. It needs to be trusted. It needs to be interoperable and shareable. And obviously reusable is part of it, because that's what brings these characteristics onto this. And this is what will also ensure that innovators focus on solutions rather than trying to put the infrastructure together.
And in the democratizing AI working group, which was one of the seven working groups of this AI summit setup, which I had the privilege of chairing along with representatives from Kenya and Egypt, one of the outcomes was, of course, a charter on AI diffusion. But another of the outcomes is what we are suggesting building: initially it might be a digital public good, but modularly it will become an infrastructure as we move ahead. This is the METRI platform, which we've called Friendship, METRI standing for multi-stakeholder AI for a trusted and resilient infrastructure, and how we can, in a modular manner, add on the four components of AI, which I think my fellow panelists have also mentioned: compute, data, models, and talent.
These are the four aspects, and governance mechanisms would, of course, be there. So how can we ensure that different countries are able to contribute in whatever manner to build this, if I can call it a global platform, which is, in a way, owned by all and yet looks at the issues of real criticality. And I'm sure there's a major role not only for countries but for the private sector and philanthropies to be able to build this. So how we can build this structure together, which will meet the requirements of countries, the private sector and the philanthropies, because each of them has different motivations: the private sector would have a profit motive, and that has to be kept in view.
As far as the dependencies, that's the second part of the question that you asked me. I think one of the areas is that we need to ensure that we follow a federated structure rather than a centralized structure. I think that would be key, and that would also preserve the variety of languages and cultural contexts that the data sets carry, and it will also ensure that ownership remains with whoever contributed the data. And yet technology and open systems exist now to ensure that sharing can be done in a safe and trusted manner. So how are we able to ensure that this collaboration and cooperation is done based on trust, and what kind of mechanisms can we develop?
And they could be partly technological and partly policy-based or protocol-based. And a combination of this will ensure that we don't generate new dependencies. Thank you.
Sanjay, when I said DPI, you nodded your head. So in terms of digital public infrastructure, we've seen it scale because it was interoperable. How can we ensure that data and AI systems that we build now are interoperable and open by design, so that even small startups or governments, like we've just spoken about, can plug in and benefit? I actually
want to go off what Dr. Garg said. Broadly, DPI provides a way for data of all individuals, so their records, their ID, their transactions, are sort of a system of record on top of which DPI sits. So DPI provides a management layer on that and provides consented access. And that's something which we have seen around the world; particularly, for example, in India we see this a lot: now that you have access to all of this data, you can actually build on top of that, through consented access, lots of applications. And that's really where a lot of the value comes in. And I think Yann mentioned training data sets. That's, again, the same model that can be applied to allow either consented access or anonymized access, so that you can do federated learning so that the data never goes to the model, but the model comes to the data.
And India has been looking at this Data Empowerment and Protection Architecture, which is on those lines. And I think we are now starting to see the structural building blocks come together, which would allow for this underlying data layer to be built, but that requires strong DPI. And so we do think that there's a lot of reason for countries around the world to adopt DPI systems so that citizens' data can be managed in a very trusted way, accessed with consent. And then we have things like MCP coming up, which allow users' context to be taken, which then allows AI to be safe, as long, of course, as the rights on the data are quite clear, that the data is not going to be stored.
So overall, I think we are moving towards this world where we are seeing the underlying pieces come together. They have to come together at a global scale. I think that's the point that Dr. Garg was making. And so from that perspective, I think we are in a fairly good place. But then to make sure this happens, we have to, I think, act in a unified manner. I mean, for example, we have to work together to fund efforts at the grassroots. So, for example, what you're seeing with Masakhane, where you're working with countries, with communities, so that their languages can be represented, so that that context becomes very important, because finally we are going to have to serve users in their languages.
So I do think, you know, I’m very positive that we’re moving in the right direction. I just think there’s still some way to go, and there are other barriers as well. But on this aspect, DPI provides a way for us to get past the data hurdle, as long as, of course, it is implemented in each country in a responsible manner and in the right way. Thank you. Chenai, you’ve
cautioned against technology becoming extractive. How should we build data infrastructure that is trusted by communities? And would you please give us an example: what principles would make an AI project in a village or a community, in some rural place in Africa, for example, feel empowering rather than extractive? Thank you. Thank you so much,
Faith, for that question. And I think I have the pleasure of sitting here as a representation of what it means when community is involved in building something. Masakhane, loosely translated from isiZulu, basically means “we build together.” It was the creation of a participatory approach to knowledge building, born out of being excluded from these spaces. So if we’re going to build data infrastructure that communities trust, it has to respond to the realities they live in and it has to be participatory. And just to prove how important it is for something to be participatory: in 2019-2020 there were not many data sets around African languages. I think one of the few sources of data was JW300, the Jehovah’s Witness Bible corpus.
And they had translated the languages for their own purposes. So the Masakhane community came together and brought in everyone, linguists, NLP people, machine learning people, anyone who spoke the languages, to actually develop the scripts and do the machine translation work on top of that. And this community, which was unfunded and doing everything by its bootstraps, actually won a Wikimedia award in 2021 for its participatory action work. And I think that is crucial: it shows that if you’re going to build trust, people have to see what the end value is and also be recognized. So this paper actually has about 20 people on it, a lot of people, some of whom could never otherwise have been authors, but they contributed to it and they’ve got a paper published, and that’s significant.
And then secondly, it’s really about meeting communities where they are, regardless of their location, and recognizing the inequity we live with. One of the projects we will be doing at Masakhane is called Project Echo. It’s designed to be a gender-responsive project, because gender-transformative is the North Star we’re hoping to reach one day. It starts from the realities of gendered inequality on the African continent, regardless of any technological innovation. What we’re doing, in partnership with the Gates Foundation and also working with IDRC, who are approaching this as a gendered intervention as well, is to work with tech entrepreneurs developing gender-responsive use cases focused on women’s economic empowerment and health, and then to think about how adding African languages on top creates an impactful tool that results in better economic outcomes for women, or better information when it comes to health.
So again, it is about designing with communities and meeting their needs where they are. And then lastly, and we love to say this on our team, what we’re doing is not new. The technology may be new, but there are practices we can borrow from other spaces to ensure this is done well. I would like to reference the community network model. Last-mile connectivity is a significant issue across the continent. We’ve had universal service access funds as an incentive for mobile network operators to address it, but some communities are still not served well enough. So there have been interventions to create localized internet connectivity, developed by the communities themselves.
They’re in charge of building the masts for their community networks. They’re in charge of creating the content people are going to need and figuring out what power is necessary. Do you, say, put a booster in one person’s home, so that people go and charge their phones there? It’s the whole life cycle of this. So if we’re going to build infrastructure that people trust, we have to borrow from what’s already been done and ensure that people are part of the whole life cycle, so that they feel ownership. That also allows for sustainability, because they say: that’s my resource, and I’m not going to wait for anyone else to support it; I’m going to be in charge of making sure it continues to exist.
Interesting.
I like that. Community ownership. And I don’t think we can do that if we don’t build small AI. So Sangbu, you’ve written a lot on small AI. What would be your playbook for scaling small AI responsibly?
Users face restrictions: they cannot fully utilize a technology without being trained. Twenty or thirty years ago, we talked a lot about digital literacy and basic digital skills, how to use Windows and Explorer, et cetera. That meant the technology was not very user-centric, because users had to do a lot of things themselves. But now AI is moving towards very user-centric services. Users don’t need to do that much; they can simply ask, verbally, about what they are curious about and what they need, and it can be provided to them automatically. That is the philosophical concept of AI in my mind. So in that sense, our focus is how to bring a more user-centric mindset to this field, along with our clients, because, compared to the developed world, we have a pretty big context base on the ground, local data, and so many user interests. That’s our approach: how to fully harness and utilize that for this area.
Thank you for that. Now that we’re speaking about communities and users: Sanjay, you’ve spoken about moving from the digital age to digital empowerment in the context of AI. What would digital empowerment look like, and what should development partners like the Gates Foundation and the World Bank, sitting in this forum, prioritize so that countries are not just consumers of AI but co-creators?
So the thread I’m going to pick back up is the DPI thread. Broadly, what we have done in that space is to look at how, instead of building systems for countries, we have open-source systems that countries can adopt to build systems adapted to their needs. Aadhaar in India is one thing, but for the rest of the world we’re looking at MOSIP. MOSIP is a modular open-source ID platform we have supported, which countries are taking and building on with their own policy layers and their own application versions. In Ethiopia you have Fayda, which is based on MOSIP and is very much customized to what they need.
So the idea is that you build these pieces of technology that countries can then adopt and build out in a way that suits their needs and is governed by them, with local laws applying, so that all of that institutional and legal infrastructure sits on top of the technology layer. Similarly, we have supported other open-source efforts like OpenG2P for government payments and DIGIT for healthcare campaigns. The whole idea is: build open source, and let countries and communities take it and adapt it. With Masakhane, again, the same idea applies: local communities can come together to collect data and then make it available for global needs.
We have funded those kinds of efforts in India and in Africa as well, so that local communities are empowered to make sure AI systems can understand and speak their language, and that again is a form of empowerment. So broadly, the way we think about it is: how do we build open standards and open-source products that countries and communities can use, contribute back to, and essentially co-create their own versions of, systems that then work in a unified way across the world? That is really empowering them to be part of the community, and that is what we would love to see happen more.
Thank you for that. Now, Yann, I can’t help but come back to these world models. In my mind, I was thinking they would increase the compute power necessary, so the infrastructure would be bigger. But from your explanation, it looks like being more intelligent means less compute, and the power moves not to the grid side for training models, but to the inference side, to the devices. So what does that actually mean for the government people, the AI ecosystem, and the startups in this room? What should their focus be over the next one, five, ten years, if these changes are to happen? And I do believe they will happen.
Wonderful question. Thank you. So there’s going to be another AI revolution, right? We’ve seen in recent years the deep learning revolution and the LLM revolution. And unfortunately, the type of AI systems we have access to at the moment manipulate language very well, and that fools a lot of people into thinking that we have it made, that we have systems as intelligent as humans, because we think of language abilities as distinctly human. But it’s a mistake that generation after generation of computer scientists, and the people around them, have made in AI for the last 70 years: discovering a new paradigm for AI and assuming this paradigm will lead us to systems with human-level intelligence.
And it’s just false, and it’s false today as well. Our current technology is limited. It’s useful, there’s no question it’s useful; it should be deployed and developed, and it’s going to help the people who use it all the time. But it’s limited, like previous generations of computer technologies and AI systems. So what is the next revolution? It’s the revolution of AI systems that understand the real world. And I think there are a lot of applications of that throughout the world, for all kinds of domains and market segments if we’re talking about commercial systems, or just helping people in their daily lives. Now, it turns out, and we’ve known this for a long time, that understanding the real world is much, much more complicated than understanding and manipulating language.
That’s because language is a sequence of discrete symbols, and it turns out that makes it easy for computers to handle. But the real world is messy. It’s high-dimensional, it’s continuous, it’s noisy, and it’s just much more complicated. So I’ve been making this joke for many years to try to explain this to everyone: your house cat is smarter than the biggest LLMs. And in many ways that’s true; certainly in understanding the physical world, your cat is way smarter than the biggest LLMs. That doesn’t mean LLMs cannot accumulate knowledge about the real world, but they don’t really understand its underlying nature. So the next revolution is systems that really understand how the world works and learn how the world works, a little bit like children who open their eyes.
And let me give you an interesting number. LLMs today are pre-trained on basically all the text publicly available on the internet, which is mostly English or languages spoken in developed countries, which of course, as this panel has pointed out, is an issue. But it represents roughly 10 to the 14 bytes. Okay, a one with 14 zeros. That seems like a lot of data, and it is, because it would take any of us about half a million years to read through it. But then compare this with the amount of data that gets to the visual cortex of a young child. In four years, a young child has been awake a total of 16,000 hours. And if we put a number on how much data gets to the visual cortex, it’s about 2 megabytes per second.
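As an editorial aside, the two figures just quoted, 16,000 waking hours and roughly 2 MB/s reaching the visual cortex, can be multiplied out directly. Both inputs are the speaker’s estimates, not measurements:

```python
# Multiplying out the speaker's estimates: 16,000 waking hours for a
# four-year-old, at roughly 2 MB/s of visual input.
hours_awake = 16_000
bytes_per_second = 2_000_000  # ~2 MB/s, the speaker's estimate

visual_bytes = hours_awake * 3600 * bytes_per_second
print(f"{visual_bytes:.2e}")  # prints 1.15e+14, i.e. on the order of 10**14 bytes
```

So four years of visual input and the entire public text of the internet land at the same order of magnitude, which is exactly the comparison the speaker draws next.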
Do the arithmetic: that’s about 10 to the 14 bytes in four years, instead of half a million years. So it tells you we’re never going to get to human-level intelligence, or anything like it, just by training on human-produced text. We’re going to have to have systems that understand the real world and are trained to understand it through sensory input; it can be video, it can be all kinds of stuff. And by the way, 16,000 hours of video is not a lot of video; it’s about 30 minutes of YouTube uploads. If you take a day of YouTube uploads, it’s about a million hours, and that’s about 100 years of video. And we have video systems, trained with that kind of data, that understand a lot more about the real world than any LLM: they can tell you if something impossible happens in the video they watch, so they’ve acquired a little bit of common sense. So my guess is that this is going to make a lot of progress in the future. And from those kinds of techniques, we can build world models. What is a world model? Given an idea or representation of the state of the world at time t, and an action or intervention that you imagine taking, a world model predicts the state of the world at time t plus one resulting from that action or intervention.
And this is how you can build an intelligent system, because it would be able to predict the consequences of its actions before taking them, and it would be able to plan and reason, because reasoning is like planning. Everybody in the industry is talking about agentic systems, but the way agentic systems are built today is not this way. Agentic systems today are not able to predict the consequences of their actions, and that is a terrible way of planning actions. So I think, again, we’re going to see a revolution over the next few years based on world models, based on systems that can learn from real-world, messy data. And I’m not very popular in Silicon Valley when I say this, but those are not generative models.
They’re a different kind of model. And so, yeah, my colleagues who work on LLMs and generative AI don’t like me very much. As for me, I’m really liking this.
So I’m going to ask you a number question. What would it take? What kind of money would it take to make this faster?
Okay, so there are a number of different things that need to happen. The first is that there’s a lot of research to be done, academic research. In fact, what’s interesting as a phenomenon is that this idea of world models and this non-generative architecture, which I call JEPA, though there are various incarnations of it, is mostly worked on by academic groups interested in applying AI to science, and mostly ignored by industry. Industry, particularly Silicon Valley with its dominant players, is entirely focused on LLMs, and everybody is working on the same thing. Everybody is stealing each other’s engineers and working on the same thing, because nobody can afford to do something slightly different and run the risk of falling behind.
And that creates a kind of monoculture that makes the industry a little blind. So right now this is in the hands of academia, and propping up this kind of research in academia, and preventing LLMs from sucking the oxygen out of every room you get into, is the first step. The second step is, of course, that there is a role for governments and industry to play in pushing those models once they work. And that’s what I’m working on; that’s why I left Meta and created this company, because I think the time is right to try to make this real. And then, obviously, there are going to be a lot of applications of this everywhere in the world.
There was an experiment run a couple of years ago by some of my colleagues at Meta, where they gave smart glasses to farmers in rural India. You could talk to the assistant in Indic languages, asking it: what’s this disease on my crop? Should I harvest now or wait a little bit? What’s the weather tomorrow? There are a lot of things like this that could be useful if the price could be brought down, with systems that really understand the world better than current ones do. And in the future, all of us will be walking around with an AI assistant that will essentially amplify our own intelligence.
It’s as if each of us will be the leader, the manager, of a staff of virtual people who are smarter than we are. Which is a great thing, by the way; I’m very familiar with the concept of working with people who are smarter than you, and it’s the greatest thing that can happen to you, so we shouldn’t feel threatened by it. It’s going to allow people to become more knowledgeable and more educated and to make more rational choices. But we need systems that approach or surpass human intelligence in certain domains and understand the real world.
Thank you, Yann. So we know where Yann is putting his money. Coming back to all my panelists, and not just your own money: if I had 500 million dollars to give, and I’m not asking you for a P&L, I’m not asking you to give me a profit, I’m just asking you to help me democratize AI and make it accessible for everyone, where would you each put the money? Let’s start with Sanjay.
Incidentally, 500 million is roughly the amount of capital we’re looking at raising to get DPI everywhere in the world, because we think that getting those underlying systems of record in place, and getting people access to their data in digital form, can empower them so much that they can then participate in the AI revolution in the right way, with the right controls and structures in place. So you’ve kind of just made my case: we would want to take that money, deploy it, and bring everyone up to the same level in terms of digital infrastructure, getting the data, the ledgers, the health records, all of those digitized, so that people can then take advantage of AI for their needs. That’s what we would want to do.
Okay. Again, I would say I’ll spend that big money to develop more use cases. We have identified agriculture, education, and healthcare, and government services can be a really promising field for use cases as well. So developing more practical and profitable use cases that add real value will be critical. On top of that, while we develop the use cases, the more important thing is to change user mindsets and inspire users, because one typical problem we face is that our low-income users and clients do not really know what they don’t know.
They don’t clearly understand what they can do. So inspiring them, showing that they really can do this with higher productivity and at low cost, would be a very important thing to remind them of. Thank you.
Given the volume of funds available, I would focus a lot more on capability development, on people’s ability to use AI to improve productivity. And maybe, if I can add to that, just to stress again the need for small, domain-specific, niche models. Small may not be the right word to use, but domain-specific and niche models will use a lot less power and a lot less infrastructure, and avoid the problems of large language models.
So I’m assuming each one of us is getting 500 million? Yes. So I co-sign on everything. In addition, I would say that for us, given the point I mentioned about the breadth of work that needs to be done, what is critical is having open models and investing in talent. Open models allow people to innovate on top of them. An example of this is Crane AI, which developed an offline-first AI stack focusing on health, education, and agricultural services, and which emerged from the Masakhane community. That’s what happens when we can fund a lot of people to think about this and build on top of open models. And then lastly, talent. Talent is very important across the whole value chain: talent to build the models, to drive uptake, to make the business cases that allow for sustainability, and also talent to build the capacity of end users to understand, so that we create an ecosystem where people are excited about these new technological innovations instead of afraid.
And that has sort of been the biggest narrative: you’re either very excited or you’re very afraid. And coming from a South African context, everyone is afraid of losing their job to AI. So how do we ensure that we’re creating an ecosystem that’s favorable for innovation?
So as we come to the end of our panel, with everything that’s been said, even with all the money on the table, free money, we see that it’s not one-size-fits-all. We simply can’t focus on one area and leave the rest. We need the talent, we need the compute, we need the data centers, we need the regulatory frameworks, we need the reforms; we need everything to come together to make this possible. And with that, I’m done with my questions, and I have five minutes left. So would someone help me with a mic? I’ll take three questions, hopefully from three different people among you.
And then since I see no one, I’m quite good. Thank you. Let’s start here.
Thanks, Faith, and thank you all for such a brilliant session. My name is Arun Sharma; I work with the World Bank. My question is to anyone, but Yann specifically: what is the lag between the physical and the virtual world? The physical side is dominated by machinery. You gave the example of a farmer wearing glasses, but the seeds or the fertilizer, anything he orders, still run on archaic systems. So obviously there is a lag between the hardware and the software; the software is evolving much faster. Where do you see that going? And I ask this specifically because in the Indian system, where we have not been able to deploy our resources is the education space and the healthcare space, where we still lag. Thanks.
Let me take the three questions first, and I would prefer that you throw the next question to someone else. I’ll take a question from the back there.
Thanks a lot. Daniel Dobos, particle physicist from CERN originally, and now a research director at Swisscom. You mentioned federated learning. Technologically this is easy; the architecture of collaboration might be difficult. So do you have ideas about which kind of organization could coordinate this kind of collaboration? Thank you.
Okay, and one last question, let me get from him. The guy with the red flag.
Hi, thank you. Thank you, sir. My question is for you. You said that we have about 10 to the power of 14 bytes of text, and that a child takes in the same amount of data by four or five years of age. So do you think data is the only bottleneck, besides compute and architecture, to getting to AGI, or to artificial superintelligence? And the next question: when we achieve AGI, what will the benchmark be? How do we benchmark AGI so that it is definitely smarter than humans, and how will humans evaluate that? That’s it.
Quick answers; I’ll go in reverse order. There’s no such thing as AGI. There is human-level AI, perhaps, but human intelligence is extremely specialized, so calling it general intelligence is complete nonsense. We will build systems that are as intelligent as humans in all the domains where humans are intelligent; it’s just not going to be next year, unlike what some colleagues in the industry are claiming. It’s going to take a lot longer. And it’s not going to be an event; it’s not like we’re going to discover one secret that just unlocks intelligence. It’s going to be progress, and it’s going to be much more difficult than we think. It has always been more difficult than we thought in the past, and that’s still the case. So: no event for AGI, and no AGI; human-level AI, perhaps; super-intelligent AI, yes, and we should call it ASI, artificial superintelligence. Well, it depends. So that’s the first thing, and you had a second part to your question that I can’t remember, so I’m going to answer the other one. There are a number of organizations that could coordinate this. First of all, this federated learning idea for an open-source model should be bottom-up. It should be people actually putting up a GitHub repository and collaborating on building the infrastructure for it. Of course we can get help from governments and organizations, and that’s required too, but ultimately people need to write code. There are a number of groups that have already built their own LLMs of pretty good quality: there’s a group in Switzerland centered on EPFL and ETH, so you probably know it; there’s a group in the UAE centered on MBZUAI; and there are similar models in Korea and various other countries. They should all get together, join forces, and then bring in other countries as well. I think CERN can play a role, I think UNESCO can play a role, and I think Switzerland
should play a role; they have all those organizations in Geneva, and the next summit is going to be there, so maybe that’s the right place. You need both bottom-up and top-down. One big organization that can play a role is the AI Alliance, a group that promotes open-source AI.
Yann, let me cut you short; we’ve run out of time. We would like to thank you all for coming. Yes, thank you so much to all the speakers. We just have a small memento from the government side to make this a memorable event. Thank you.
Event“Yann Le Carin is the executive chairman of AMI Labs.”
The transcript of the session identifies Yann LeCun as the executive chairman of AMI Labs, which is confirmed by the speaker introduction in the knowledge base [S3].
“Sanjay Jain leads the digital public infrastructure team at the Gates Foundation.”
Sanjay Jain’s role as the lead for digital public infrastructure is corroborated by the knowledge-base entry that states he heads the digital public infrastructure team at the Gates Foundation [S3].
“Dr. Saurabh Garg outlined India’s approach to equitable compute access as part of a collaborative framework.”
The knowledge base describes Dr. Saurabh Garg presenting India’s “Maitri” platform and its six foundational pillars for shared compute, data and AI models, confirming his role and focus [S33].
“Over 2,000 languages have been documented on the African continent.”
The statement matches the figure given in the knowledge base, which notes that more than 2,000 African languages have been documented [S6].
“Digitised data is heavily concentrated in the developed world, creating a structural inequity.”
The knowledge base highlights a global data divide, where a few entities control most data and developing countries act mainly as data providers, confirming the reported inequity [S101].
“AI will only scale when ‘data for everyone is available’ and secure personal data flows enable personalised services.”
The need for universal data flow to support services for all is explicitly stated in the knowledge base discussion on operationalising data free-flow with trust [S104].
“Top‑performing open‑weight, open‑source models are essential for equity, and open‑weight models differ from merely open‑source models.”
The distinction between open-source and open-weight models, and the importance of open-weight models for reproducibility and equity, is detailed in the knowledge base [S65].
The panel shows strong convergence on three core themes: (1) the need to break the concentration of data and compute by promoting open‑source models; (2) the importance of federated, community‑owned digital public infrastructure (DPI) to preserve sovereignty and build trust; (3) the allocation of funds toward high‑impact, locally relevant use cases together with talent and capacity development.
High consensus across technical, policy and development perspectives, suggesting that future initiatives can be jointly designed around open models, federated DPI and targeted use‑case funding, thereby increasing the likelihood of coordinated action on AI democratization.
The panel shows considerable convergence on the need for open models, digital public infrastructure, and community participation, but diverges sharply on where limited resources should be directed—whether toward building compute demand via sectoral pilots, investing in open‑source model and talent ecosystems, scaling DPI worldwide, or supporting academic research on new AI paradigms. The most pronounced disagreements revolve around the nature of the compute barrier and the optimal funding strategy.
Moderate to high disagreement; while participants share common goals, the lack of consensus on priority actions could impede coordinated policy and investment decisions, leading to fragmented efforts in AI democratization.
The discussion began with a broad framing of compute and data scarcity, but pivotal comments—especially from Yann LeCun, Saurabh Garg, and Chenai Chair—reoriented the conversation toward governance, federated architectures, community ownership, and a shift from brute‑force compute to smarter, more efficient models. These insights introduced new frameworks (the Maitri platform, federated learning), highlighted the importance of trust and participation, and challenged the prevailing narrative that hardware alone will democratize AI. As a result, the panel moved from identifying problems to proposing concrete, multi‑layered solutions that blend technical, policy, and social dimensions, ultimately shaping a more nuanced and actionable roadmap for AI democratization.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.