The Foundation of AI: Democratizing Compute and Data Infrastructure

20 Feb 2026 17:00h - 18:00h


Session at a glance

Summary

This discussion focused on democratizing AI access and compute power, particularly for developing countries and underrepresented communities. The panel, moderated by Faith Waidaka and featuring experts from various organizations including the World Bank, Gates Foundation, and academic institutions, identified five key barriers to AI democratization: energy access, computing power, data access, talent building, and responsible AI frameworks.


The panelists highlighted significant data inequality, with over 80% of global datasets skewed toward developed countries and less than 2% representing sub-Saharan Africa. Yann LeCun emphasized the importance of open-source AI models and proposed federated learning approaches that would allow regions to contribute training data while maintaining ownership. He also discussed the future evolution from current large language models to more intelligent “world models” that understand the physical world rather than just accumulating knowledge.


Saurabh Garg and Sanjay Jain advocated for digital public infrastructure (DPI) as a foundation for AI democratization, emphasizing the need for trusted, interoperable systems that give users agency rather than just access. They proposed building modular platforms that countries can adapt to their specific needs while maintaining data sovereignty.


Chenai Chair stressed the importance of community participation and ownership, drawing from the Masakhane project’s success in African language processing through grassroots collaboration. The discussion emphasized that democratizing AI requires simultaneous investment in multiple areas: talent development, use case creation, open models, computing infrastructure, and community empowerment. The panelists agreed that sustainable AI democratization must be participatory, meeting communities where they are and addressing their specific needs rather than imposing top-down solutions.


Key points

Major Discussion Points

Barriers to AI Democratization: The panel identified key obstacles including lack of access to computing power, heavily skewed datasets (80% from developed countries, less than 2% from sub-Saharan Africa), limited access to open models, and insufficient AI literacy in underserved regions.


The Critical Role of Data Sovereignty and Local Context: Speakers emphasized that while computing infrastructure may be unequally distributed, local communities can maintain ownership and control of their cultural data and context, which represents a significant opportunity for creating more inclusive AI systems.


Digital Public Infrastructure (DPI) as an Enabler: Discussion focused on how DPI can provide trusted, interoperable systems that allow countries to be co-creators rather than just consumers of AI, with examples like India’s Aadhaar system and open-source platforms like MOSIP.


Community-Centered Approaches: Using examples like the Masakhane African Languages Hub, panelists highlighted the importance of participatory, community-driven AI development that meets people where they are and addresses their specific needs rather than imposing external solutions.


The Future of AI Architecture: Yann LeCun presented a vision of the next AI revolution moving beyond large language models to “world models” that understand the physical world, potentially requiring less training compute but more inference power, fundamentally changing the infrastructure requirements.


Overall Purpose

The discussion aimed to explore practical strategies for democratizing AI access globally, particularly for underserved regions and communities. The panel sought to move beyond theoretical concepts to identify concrete approaches for ensuring that AI development is inclusive, community-driven, and empowering rather than extractive.


Overall Tone

The tone was collaborative and solution-oriented throughout, with speakers building on each other’s ideas rather than debating. There was a sense of urgency about addressing current inequalities in AI access, but also optimism about emerging opportunities. The conversation maintained a balance between technical depth and accessibility, with speakers drawing from diverse perspectives (academic, governmental, community-based, and industry) to create a comprehensive view of the challenges and potential solutions.


Speakers

Speakers from the provided list:


Sangbu Kim: World Bank representative, focuses on AI democratization and computing power access


Faith Waidaka: Panel moderator, builds electrical and mechanical infrastructure in data centers in Africa, Board Chair of the Africa Data Center Association


Yann LeCun: Executive Chairman of AMI Labs (Advanced Machine Intelligence Labs), Professor at New York University, former Chief AI Scientist at Meta (12 years)


Sanjay Jain: Leads the digital public infrastructure team at the Gates Foundation


Saurabh Garg: Secretary in the Ministry of Statistics and Program Implementation in the Government of India


Chenai Chair: Director of Masakhane African Languages Hub, focuses on African language NLP


Arun Sharma: Works with the World Bank


Audience: Various unidentified audience members who asked questions


Additional speakers:


Daniel Dobos: Particle physicist from CERN, Research Director for Swisscom


Full session report

This comprehensive discussion on democratising AI access and computing power brought together diverse stakeholders to address one of the most pressing challenges in global technology development. The panel was moderated by Faith Waidaka, board chair of the Africa Data Center Association, and featured Yann LeCun (who left Meta just a month ago after 12 years as chief AI scientist and now leads Advanced Machine Intelligence Labs), Sangbu Kim from the World Bank, Saurabh Garg from India’s Ministry of Statistics and Program Implementation, Sanjay Jain from the Gates Foundation, and Chenai Chair from the Masakhane African Languages Hub.


The Scale of the Challenge

Sangbu Kim from the World Bank opened by outlining five critical barriers to AI democratisation: access to energy, computing power, data access, talent building, and credible responsible AI framework and policy. He presented stark statistics on global data inequality, noting that over 80% of the world’s datasets are concentrated in developed, high-income countries, while less than 2% represents sub-Saharan Africa. When South Africa is excluded, the representation drops to virtually zero for the rest of sub-Saharan Africa.


The panellists identified different primary barriers reflecting their diverse perspectives. Chenai Chair emphasised linguistic diversity, noting over 2,000 documented languages on the African continent alone. Saurabh Garg focused on access to open models and AI literacy as fundamental barriers, arguing that while infrastructure might be acquired over time, the focus should be on models and capabilities. Sanjay Jain stressed the importance of personal data accessibility through protected means, while Yann LeCun highlighted the concentration of high-quality data in proprietary systems.


Rethinking AI Architecture and Compute Requirements

Yann LeCun provided a fundamental critique of current AI approaches, arguing that today’s large language models are essentially “knowledge storage systems” that accumulate factual information, requiring enormous computational resources because they store rather than process information intelligently. As he memorably put it, “your house cat is smarter than the biggest LLMs” in terms of understanding the physical world.


LeCun outlined his vision for the next AI revolution centered on “world models” – systems that understand the real world through sensory input rather than just manipulating text. These systems would predict consequences of actions before taking them, enabling genuine planning and reasoning. He explained that such systems could be smaller in training requirements while potentially being more computationally intensive during inference, shifting demands from centralized training facilities to distributed inference systems.


He provided a compelling comparison: LLMs are trained on approximately 10^14 bytes of text data, representing roughly half a million years of human reading. In contrast, a four-year-old child receives the same amount of visual data in just 16,000 hours of being awake through their visual cortex processing 2 megabytes per second, yet develops superior understanding of the physical world. LeCun gave the example of smart glasses for farmers in rural India that could identify crop diseases – something requiring real-world understanding rather than text manipulation.
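The arithmetic behind this comparison checks out in a few lines. The sketch below recomputes both sides of LeCun's figure; the reading-speed assumptions (words per minute, bytes per word, hours of reading per day) are our own rough estimates, not numbers from the session.

```python
# Back-of-the-envelope check of LeCun's data comparison.
# Assumptions (ours, not from the session): a reader averages
# ~250 words/min at ~6 bytes/word, reading 8 hours a day.

HOURS_AWAKE = 16_000          # a four-year-old's waking hours (from the session)
VISUAL_RATE = 2e6             # bytes/second through the visual system (from the session)

visual_bytes = HOURS_AWAKE * 3600 * VISUAL_RATE
print(f"visual input: {visual_bytes:.2e} bytes")   # ~1.15e14, i.e. ~10^14

LLM_BYTES = 1e14              # rough training-corpus size cited in the session
read_rate = 250 * 6 / 60      # bytes/second while reading (~25 B/s)
seconds_per_year = 8 * 3600 * 365  # seconds of reading per year at 8 h/day
years = LLM_BYTES / (read_rate * seconds_per_year)
print(f"equivalent reading time: ~{years:,.0f} years")  # a few hundred thousand years
```

Under these assumptions the reading-time estimate lands in the same ballpark as the "half a million years" figure quoted in the session.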


Community-Driven Development Models

Chenai Chair presented the Masakhane model, demonstrating how community-driven approaches can succeed without formal funding. Masakhane, meaning “we build together” in isiZulu, emerged as a grassroots community focused on African language natural language processing. This volunteer-driven initiative won a Wikimedia Award in 2021, showing that community ownership can produce significant results.


Chair explained that Masakhane started in 2019-2020 using data from Jehovah’s Witness Bible translations, one of the few available sources for African languages at the time. The approach emphasizes participatory design responding to community realities, ensures contributors are recognized as co-authors on research papers, and meets communities where they are.


She outlined Project Echo as an example of gender-responsive AI development, created in partnership with the Gates Foundation and IDRC. This initiative focuses on AI tools addressing women’s economic empowerment and health needs while incorporating African languages, explicitly acknowledging gendered inequalities and designing interventions to improve outcomes.


Chair also described community network models where residents build their own internet infrastructure, including transmission masts, local content creation, and power management systems, with community members establishing charging stations in their homes.


Digital Public Infrastructure as an Enabler

Sanjay Jain and Saurabh Garg presented digital public infrastructure (DPI) as a foundational layer for AI democratisation. Garg described chairing the "democratizing AI working group" of the summit and introduced the METRI platform (Multi-stakeholder AI for Trusted and Resilient Infrastructure), designed as a digital public good supporting the four key AI components: compute, data, models, and talent.


Their approach emphasizes creating systems that provide agency, enabling people to be co-creators rather than consumers. The Indian experience with systems like Aadhaar provides one model, but the global approach focuses on open-source platforms like MOSIP (Modular Open Source Identity Platform) that countries can adapt. Ethiopia’s FIDA system, based on MOSIP but customized locally, demonstrates this modular approach. The Gates Foundation has supported OpenG2P for government payments and Digit for healthcare campaigns, following the principle of building open-source tools for community adoption.


Jain described systems where “the data never goes to the model, but the model comes to the data” through federated learning approaches, combined with India’s proposed Data Empowerment and Protection Architecture to maintain individual and community control over personal information.


Economic Realities and Infrastructure Challenges

Faith Waidaka, despite her role building physical data center infrastructure across Africa, acknowledged that democratising AI requires addressing multiple interconnected elements simultaneously. Sangbu Kim provided an economic perspective, arguing that creating demand for computing power through practical applications is more important than simply building physical infrastructure. Without clear use cases adding value, he argued, nobody can sustainably operate data center businesses in Africa.


Yann LeCun offered a realistic assessment of efficiency improvements, noting that while industry has strong incentives to reduce power consumption, progress is happening “as fast as it can, and it’s not fast enough.” He suggested dramatic breakthroughs won’t occur until moving beyond CMOS transistors and silicon-based computing, which he estimated won’t happen for 10-20 years. However, Waidaka challenged this timeline, suggesting ten years represents significant time given AI’s rapid recent evolution.


Investment Priorities: The $500 Million Question

When Waidaka asked how they would deploy $500 million for AI democratisation, panellists revealed different priorities. Sanjay Jain noted this represents approximately what’s needed to deploy DPI globally, focusing on digitizing health records, identity systems, and foundational data infrastructure enabling people to participate in AI with appropriate protections.


Sangbu Kim emphasized developing practical use cases and changing user mindsets, particularly among populations who “don’t know what they don’t know” about AI capabilities. Saurabh Garg prioritized capability development and domain-specific models requiring less power than large language models. Chenai Chair advocated for investment across the entire value chain: open models, talent development from technical builders to end-user capacity building, and creating ecosystems where people are excited rather than fearful about technological innovation.


Technical Pathways and Collaboration

The discussion explored federated learning as a pathway for regions to contribute cultural and linguistic data to global AI models while maintaining ownership. LeCun explained this involves exchanging parameter vectors rather than raw data, allowing regions to contribute to model training without sharing sensitive information.
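As a toy illustration of the parameter-vector exchange described above, the sketch below runs federated averaging on a trivial one-weight linear model: each "region" takes a gradient step on its own private data, and only the resulting parameters are averaged. All names and data here are invented for illustration; production systems (FedAvg over deep networks, secure aggregation) are far more involved.

```python
# Minimal federated-averaging sketch: regions share parameter vectors,
# never raw data. The one-weight model y = w*x is purely illustrative.

def local_update(w, data, lr=0.1):
    """One gradient step of squared-error loss on a region's local data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, regional_datasets):
    """Each region updates a copy of the global weight on its own data;
    only the resulting parameters are averaged back into the global model."""
    local_ws = [local_update(global_w, d) for d in regional_datasets]
    return sum(local_ws) / len(local_ws)

# Three "regions", each holding private samples of the same y = 2x relation.
regions = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
    [(0.5, 1.0), (1.5, 3.0)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, regions)
print(f"learned weight: {w:.3f}")  # converges toward 2.0
```

The point of the exercise is that the global model recovers the shared structure even though no region ever reveals its samples, which is the property LeCun argues could let globally pooled training surpass proprietary systems.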


Several existing country-specific AI initiatives could form collaboration foundations. Groups in Switzerland (EPFL and ETH), the UAE (MBZUAI), Korea, and other countries have developed language models and could potentially join forces for more comprehensive global systems. However, organizational challenges remain significant, as highlighted by an audience member from CERN who noted that while federated learning is technically feasible, collaboration architecture between countries requires careful design.


Addressing Fears and Building Trust

Chair noted that in South African contexts, dominant AI narratives focus on job displacement, creating fear rather than excitement. This highlights the importance of designing AI systems that demonstrably improve lives rather than threatening livelihoods. The community-driven approach offers a model for building trust through participation and ownership, while gender-responsive design explicitly acknowledges existing inequalities and designs interventions to improve outcomes for marginalized groups.


Audience Engagement and Future Directions

The Q&A session revealed ongoing challenges around coordination mechanisms for global-scale federated learning, particularly balancing different countries’ interests while maintaining technical coherence. The transition from current LLM-based systems to world model architectures will require significant research investment, much currently happening in academic rather than industry settings.


LeCun joked about being “very unpopular in Silicon Valley” for his critiques of current approaches, but emphasized that his proposed federated learning model could create AI systems superior to current proprietary models by accessing diverse global data no single company could obtain.


Conclusion

This discussion moved beyond simple questions of access to more sophisticated considerations of agency, ownership, and sustainable development. The convergence of perspectives from infrastructure builders, community organizers, government officials, researchers, and international development practitioners created a comprehensive framework for understanding both challenges and opportunities in democratising AI.


The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI development need not follow centralized patterns. However, realizing this potential requires sustained effort across technical research, policy development, community organizing, and international cooperation. The path forward requires simultaneous investment in talent development, practical use cases, open models and federated infrastructure, and trust-building through participatory design processes.


Session transcript

Sangbu Kim

access and energy. Number two, computing power. Number three, data access. Number four, talent building. And number five, credible, responsible AI framework and policy. Among those five, everything is very important, but we are currently struggling with a lack of access to computing power and data sets. So that's why today's discussion is very important. Unfortunately, more than 80% of the data sets in the world are very heavily skewed to the developed world, high-income countries. Less than 2% is in Africa, sub-Saharan Africa. If we just carve out South Africa, it is less than zero-point-something percent for the rest of sub-Saharan Africa. So we see the big gap in this space. So this is a pretty important time to talk about how we can really democratize computing power access in this space.

So thank you for joining us, and I look forward to a really good discussion with all of our panelists. Thank you.

Faith Waidaka

Thank you, Sangbu, for that opening. So I will start by asking the panelists to introduce themselves in a very short way, and I’ll start with myself. I’m Faith Waidaka. I build the infrastructure that makes AI possible. So I build the electrical, mechanical infrastructure in data centers in Africa, and I’m also the board chair of the Africa Data Center Association. So we’ll go this way. Yann, please tell us who you are.

Yann LeCun

So I’m Yann LeCun. I’m the executive chairman of AMI Labs, Advanced Machine Intelligence Labs, which is a new company I’m building to build next-generation AI systems. I’m also still a professor at New York University. And just a month ago, I left my position as chief AI scientist of Meta after 12 years at Meta.

Sanjay Jain

I’m Sanjay Jain. I lead the digital public infrastructure team at the Gates Foundation.

Saurabh Garg

I’m Saurabh Garg. I’m secretary in the Ministry of Statistics and Program Implementation in the Government of India.

Chenai Chair

And I am Chenai Chair, the director of Masakhane African Languages Hub, which emerged from a grassroots community called Masakhane, focusing on African language NLP.

Faith Waidaka

Good. So, Chenai, and coming back this way to all my panelists: what is the single biggest barrier? And I can imagine that we’re all coming from different segments, from the introductions we just did. But what do we feel is the single biggest barrier today to democratizing AI compute? Chenai?

Chenai Chair

Thanks, Faith. So there are over 2,000 documented languages on the African continent. So our single biggest barrier is the breadth of work we actually have to do to document these languages to ensure they’re well represented, and also to focus on the communities that actually speak them.

Saurabh Garg

I would say access to models, open models, and AI literacy to be able to utilize those models. And the reason I say that is perhaps infrastructure is something which might get acquired over time, and hopefully the requirement of the size of that infrastructure may also change. And the focus, we probably need to focus much more on the models.

Sangbu Kim

I would say too much concentration of digitized data only in the developed world.

Sanjay Jain

I should also go on the data point, because we believe that AI will scale effectively only when data for everyone is available. So when I can get a personalized service because my personal data is accessible through some protected means to a model, then that will allow AI to reach everyone.

Yann LeCun

I’ll just echo some of the things that were said earlier. Certainly, the availability of top-performing open models, open-weight but also open-source, would be a way to remove the barrier, or at least a necessary condition if not a sufficient one. And the problem is that today there is no such thing; the open models are behind. But there is a way to get them to surpass the proprietary systems, and it’s through data. So the access to data was mentioned. If various regions of the world collect or digitize their cultural data, whatever it is, and then contribute to training a global model that would eventually constitute a repository of all human knowledge, then those models would be much better quality than all the proprietary systems, because the proprietary systems would not have access to that data. And this can be done technically in a way in which regions don’t need to actually communicate that data; they can keep ownership of that data and then contribute to training a global model by exchanging parameter vectors. I don’t want to get into the weeds of technicalities there, but it’s a form of federated learning, and I think this is a way to open up access to AI. And it’s absolutely crucial for the future, because we’re going to need a wide diversity of AI assistants, for the reason that there’s a wide diversity of linguistic and cultural differences, value systems, political opinions and philosophies. And if our AI assistants come from a handful of companies on the west coast of the US or China, we’re in big trouble. So we absolutely need this.

Faith Waidaka

Okay, so we’ve had the challenges, and there are a wide range of them, from inclusion to compute to data sets. What we’re going to discuss today is how we overcome those barriers from the different perspectives and the different angles that we have on this team. So coming to you, Sangbu, from a World Bank perspective, what does it mean to democratize AI? And would you please give us one indicator that signals that a country is moving from consuming AI to actually building it?

Sangbu Kim

From the World Bank point of view, democratizing data and computing is very important. But let’s think about this. Many people very easily talk about building data centers physically and securing more GPUs and servers from the beginning. I agree that the fundamental infrastructure is very crucial and very important. But the more important thing is: how can we use that computing power, and for what? So we need to really think about what would be the best way to create demand for computing power. That is the more crucial part. Without very clear applications and solutions, nobody can really run their own computing data center business in Africa. So it is a very crucial part. So I would like to say we need to think differently: even though computing power is very important, how can we really create the data demand?

So in this regard, the clear indicator is how we can really fully manage the data locally. One good piece of news is that local data and local context can be fully owned, controlled, and managed by the local country and local people. That is very good news. Even though we see a lot of inequality in computing infrastructure and resources, what cannot change, even in this AI era, is that the people, the local country, and the local community can strongly hold their context and their data sets. So it is a really important signal and opportunity. So I would say measuring how fully the data sets are utilized and harnessed locally will be the key indicator for this.

Faith Waidaka

Okay. Yann, you spoke about compute a few minutes ago, open compute. And I would really like to know: is the concentration of frontier compute a temporary scaling phase or a structural feature of AI? And where do you see the biggest technical opportunity to reduce compute intensity? It’s something that Sangbu as well touched on.

Yann LeCun

Okay, so first of all, I think the computing requirements for training modern AI systems are temporary. It’s temporary because the type of AI systems that we build at the moment, LLMs, essentially are knowledge storage systems, right? They accumulate factual knowledge, and therefore they need enormous amounts of memory. The reason why the models are so big in terms of number of parameters, we’re talking hundreds of billions of parameters, which makes them really expensive to train and to run, is the fact that they just accumulate knowledge so that it can be easily retrieved. But there’s another way to be useful in terms of AI: it’s not accumulating knowledge but actually being smart, and you can replace knowledge by intelligence. So current systems are not particularly intelligent, but they store knowledge. There is another revolution of AI coming, which actually my new company is built around, which intends to build systems that are smarter even if they don’t necessarily accumulate as much knowledge. So those models will be smaller. Now, the bad news with this is that perhaps at inference time they will be more expensive, because they’ll reason more than current systems. So we’re going to see maybe a shift in the requirements for training, but the requirements for inference, which is really where most of the computation goes, are still going to be quite significant. Now, to answer your second question: the incentives are there for the industry to reduce the power consumption of AI systems.

A lot of engineers working on AI in industry these days, even in academia, are actually focusing on how can I make this model smaller? How can I distill it in a smaller model? How can I use a mixture of experts so I have sort of a ladder of models that are more and more complex? So that to answer simple questions, I can use a simple model, et cetera. All of it is to optimize power consumption. Why? Because that’s where the money goes. That’s where you spend all the money when you operate an AI system. It goes into power and maintaining your hardware. So the incentives are there. So that’s the good news. You don’t need to have laws or regulations or anything.

They are working on it because they need to. The bad news is that it’s progressing. It’s progressing as fast as it can, and it’s not fast enough. But we’re not going to be able to make it faster unless we find some technological breakthrough at the fabrication level, or in the architecture or technology. There’s a lot of mileage to be had in those things still. The power efficiency is actually making progress really quickly, much faster than Moore’s law, but it’s still too slow. So I’m not expecting some big revolution in hardware design until we start building something other than CMOS transistors and silicon. That’s not happening for another 10 or 20 years. 10 or 20 years? Well, I mean, there’s going to be progress in the meantime.

That’s not what I mean. But if you want a real breakthrough, like some completely new way of building computing systems, there’s nothing on the visible horizon. There’s nothing on the horizon that will really allow this, whether it’s carbon nanotubes, spintronics, or whatever it is.

Faith Waidaka

Okay, that’s very interesting to think that the training models will become smaller, yet the inference might be the one that will take up the compute. Yet we’re also looking at bringing inference to devices as close as possible to the people using it. So there’s a bit of a balance to be done in that 10-year period. I think 10 years is a lot of time, considering what AI has shown us over the past decade. And I think in terms of research, we might see it sooner. Yeah, so Saurabh, you led Aadhaar, the digital ID, and are now in statistics. How do you see digital public infrastructure enabling AI innovation? And how can countries expand access to shared AI infrastructure without creating new dependencies or compromising data sovereignty?

Saurabh Garg

Thank you. So I think two characteristics of digital public infrastructure which are key are to ensure that there is not only access but also agency of the people. So most people would not like to be just consumers, but also be co-creators. And I think that’s the real issue going forward. For any system to be a DPI, I think there are a few essential characteristics. It needs to be trusted. It needs to be interoperable and shareable. And obviously reusable is part of it, because that is what brings these characteristics onto this. And this is what will also ensure that innovators focus on solutions rather than trying to put the infrastructure together.

And in the democratizing AI working group, which was one of the seven working groups of this AI summit setup, which I had the privilege of chairing along with representatives from Kenya and Egypt, one of the outcomes, of course, was a charter on AI diffusion. But another outcome is what we are suggesting building: initially it might be a digital public good, but modularly it will become an infrastructure as we move ahead. This is the METRI platform, which we’ve called friendship, METRI standing for multi-stakeholder AI for a trusted and resilient infrastructure, and how we can, in a modular manner, add on the four components of AI, which I think my fellow panelists have also mentioned: compute, data, models, and talent.

These are the four aspects, and of course governance mechanisms would be there. So how we can ensure that different countries are able to contribute in whatever manner to build this, if I can call it a global platform, which is, in a way, owned by all and yet looks at the issues of real criticality. And I’m sure there’s a major role not only for countries, but for the private sector and philanthropies to be able to build. So how we can build this structure together, which will meet the requirements of countries, the private sector and the philanthropies, because each of them has different motivations; the private sector would have a profit motive, and that has to be kept in view.

As far as the dependencies, that’s the second part of the question that you asked me. I think one of the areas is that we need to ensure that we follow a federated structure rather than a centralized structure. I think that would be key, and that would also preserve the variety of languages and cultural contexts that the data sets carry, and ensure that ownership remains with whoever contributed the data. And yet technology and open systems exist now to be able to ensure that sharing can be done in a safe and trusted manner. So how we are able to ensure that this collaboration and cooperation is done based on trust, and what kind of mechanisms we can develop.

And they could be partly technological and partly policy-based or protocol-based. And a combination of this will ensure that we don’t generate new dependencies. Thank you.

Faith Waidaka

Sanjay, when I said DPI, you nodded your head. So in terms of digital public infrastructure, we’ve seen it scale because it was interoperable. How can we ensure that the data and AI systems that we build now are interoperable and open by design, so that even small startups or governments, like we’ve just spoken about, can plug in and benefit?

Sanjay Jain

I actually want to go off what Dr. Garg said. Broadly, the data of all individuals, so their records, their ID, their transactions, are sort of a system of record on top of which DPI sits. So DPI provides a management layer on that and provides consented access. And so that’s something which we have seen around the world; particularly, for example, in India we see this a lot. Now that you have access to all of this data, you can actually build lots of applications on top of that through consented access. And that’s really where a lot of the value comes in. And I think Yann mentioned training data sets. Again, the same model can be applied to allow either consented access or anonymized access, so that you can do federated learning, so that the data never goes to the model, but the model comes to the data.

India has been looking at the Data Empowerment and Protection Architecture, which is along those lines, and I think we are now starting to see the structural building blocks come together that would allow this underlying data layer to be built. But that requires strong DPI. So we do think there is a lot of reason for countries around the world to adopt DPI systems, so that citizens' data can be managed in a very trusted way and accessed with consent. And then we have things like MCP (the Model Context Protocol) coming up, which allow users' context to be taken into account and AI to be made safe, as long as the rights on the data are quite clear, including that it is not going to be stored.
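The consent-gated pattern Jain describes, a system of record with a management layer that only releases data under user-granted, revocable consent, can be illustrated with a toy sketch. This is not any real DPI API; the class names and fields are my own assumptions, and only the access pattern is the point:

```python
# Toy sketch of a DPI-style consent layer: records live in a system of record,
# and applications can read them only through grants the user controls.

from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    # (user_id, app_id) -> set of record types the user has consented to share
    grants: dict = field(default_factory=dict)

    def grant(self, user_id, app_id, record_type):
        self.grants.setdefault((user_id, app_id), set()).add(record_type)

    def revoke(self, user_id, app_id, record_type):
        self.grants.get((user_id, app_id), set()).discard(record_type)

    def allowed(self, user_id, app_id, record_type):
        return record_type in self.grants.get((user_id, app_id), set())

class SystemOfRecord:
    def __init__(self, ledger):
        self.ledger = ledger
        self.records = {}  # (user_id, record_type) -> data

    def store(self, user_id, record_type, data):
        self.records[(user_id, record_type)] = data

    def read(self, user_id, app_id, record_type):
        # The management layer: no consent, no access.
        if not self.ledger.allowed(user_id, app_id, record_type):
            raise PermissionError(f"{app_id} has no consent for {record_type}")
        return self.records[(user_id, record_type)]

ledger = ConsentLedger()
store = SystemOfRecord(ledger)
store.store("alice", "health", {"bp": "120/80"})

ledger.grant("alice", "clinic-app", "health")
print(store.read("alice", "clinic-app", "health"))   # allowed while consented

ledger.revoke("alice", "clinic-app", "health")
try:
    store.read("alice", "clinic-app", "health")
except PermissionError as e:
    print("denied:", e)                              # revocation cuts off access
```

The key design choice, and the one the panel keeps returning to, is that the application never holds the data outright; every read is mediated by a consent artifact the citizen can revoke.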

So overall, I think we are moving towards a world where the underlying pieces are coming together. They have to come together at a global scale; I think that's the point Dr. Garg was making. From that perspective, we are in a fairly good place, but to make sure this happens we have to act in a unified manner. For example, we have to work together to fund efforts at the grassroots, like what you're seeing with Masakhane, working with countries and communities so that their languages can be represented. That context becomes very important, because ultimately we are going to have to serve users in their languages.

So I do think, you know, I'm very positive that we're moving in the right direction. There is still some way to go, and there are other barriers as well. But on this aspect, DPI provides a way for us to get past the data hurdle, as long as it is implemented responsibly in each country and in the right way. Thank you.

Faith Waidaka

Chenai, you've cautioned against technology becoming extractive. How should we build data infrastructure that is trusted by communities? And would you give us an example of the principles that would make an AI project in a village or rural community, in Africa for example, feel empowering rather than extractive?

Chenai Chair

Thank you so much, Faith, for that question. I have the pleasure of sitting here as a representation of what it means when a community is involved in building something. Masakhane, loosely translated from isiZulu, means "we build together". It was the creation of a participatory approach to knowledge building, born out of being excluded from these spaces. So if we're going to build data infrastructure that communities trust, it has to respond to the realities they live in and be participatory. That's the first example. And just to show how important participation is: in 2019-2020 there were not many datasets around African languages. I think one source of data was JW300, a corpus drawn from the Jehovah's Witnesses' Bible translations.

They had translated the languages for their own purposes. So the Masakhane community came together and brought in everyone, linguists, NLP people, machine learning people, anyone who spoke the language, to develop the scripts and do the machine translation work on top of that. And this community, unfunded and doing everything by its bootstraps, actually won a Wikimedia award in 2021 for its participatory action work. That is crucial: if you're going to build trust, people have to see what the end value is and also be recognized. The resulting paper has a lot of people on it, I think about twenty, some of whom could never otherwise have been authors; they contributed, they got a paper published, and that's significant.

Secondly, it's really about meeting communities where they are, regardless of their location, and recognizing the inequity we live with. One of the projects we will be doing at Masakhane is called Project Echo. It's designed to be a gender-responsive project, with gender-transformative work as the North Star we hope to reach one day. It starts from the realities of gendered inequality on the African continent, regardless of any technological innovation. In partnership with the Gates Foundation, and working with IDRC, who are approaching this as a gendered intervention as well, we will work with tech entrepreneurs to develop gender-responsive use cases focused on women's economic empowerment and health, thinking about how adding African languages on top creates an impactful tool that delivers better economic outcomes or better health information for women.

So again, it is about designing with communities and meeting their needs where they are. And then lastly, as we love to say on our team, what we're doing is not new. The technology may be new, but there are practices we can borrow from other spaces to ensure this is done well. I would reference the community network model. Last-mile connectivity is a significant issue across the continent. We've had universal service access funds as an incentive for mobile network operators, but some communities are still not served well enough, and so there have been interventions to build localized internet connectivity, developed by the communities themselves.

They're in charge of building the masts for their community networks, creating the content people are going to need, and figuring out the necessary power: do you put a power booster in one person's home so that people go and charge their phones there? It's the whole life cycle. So if we're going to build infrastructure that people trust, we have to borrow from what's already been done and ensure that people are part of the whole life cycle, so that they feel ownership. That also allows for sustainability, because they say: that's my resource, and I'm not going to wait for anyone else to support it; I'm going to be in charge of making sure it continues to exist.

Faith Waidaka

I like that: community ownership. And I don't think we can get there without building small AI. So Sangbu, you've written a lot on small AI. What would be your playbook for scaling small AI responsibly?

Sangbu Kim

In the past, there were restrictions: users could not fully utilize a technology without getting trained. Twenty or thirty years ago, we talked a lot about digital literacy and basic digital skills, how to use Windows, Explorer, and so on. That was not very user-centric, because the user had to do a lot. But now AI is moving towards very user-centric services. Users don't need to do that much; they can simply ask, verbally, about what they are curious about and what they need, and it can be provided to them automatically. That is the philosophical concept of AI in my mind. So in that sense, our focus is on bringing more of a user-centric mindset to this field, together with our clients, because compared to the developed world we have a pretty big contextual base, local data, and so many user interests. That's our approach: how to fully harness and utilize them in this area.

Faith Waidaka

Thank you for that. Now that we're speaking about communities and users: Sanjay, you've spoken about moving from digital access to digital empowerment. In the context of AI, what would digital empowerment look like, and what should development partners like the Gates Foundation and the World Bank, sitting in this forum, prioritize so that countries are not just consumers of AI but co-creators?

Sanjay Jain

So the thread I'm going to pick back up is the DPI thread. Broadly, what we have done in that space is, instead of building systems for countries, to support open-source systems that countries can adopt and adapt to their own needs. Aadhaar in India is one thing, but for the rest of the world we're looking at MOSIP, the Modular Open Source Identity Platform we have supported, which countries are taking and building on with their own policy layers and their own versions of the applications. In Ethiopia you have Fayda, which is based on MOSIP and very much customized to what they need.

So the idea is that you build these pieces of technology which countries can then adopt and build out in a way that suits their needs and is governed by them, with local laws applying, so all of that institutional and legal infrastructure sits on top of the technology layer. Similarly, we have supported other open-source efforts like OpenG2P for government payments and DIGIT for healthcare campaigns. The whole idea is: build open source, and let countries and communities take it and adapt it. With Masakhane, again, the same idea applies: local communities can come together to collect data and then make it available for global needs.

We have funded those kinds of efforts in India and in Africa as well, so that local communities are empowered to make sure AI systems can understand and speak their language; that is again a form of empowerment. So broadly, the way we think about it is: how do we build open standards and open-source products that countries and communities can use, contribute back to, and essentially co-create their own versions of, versions that then work in a unified way across the world? That is what really empowers them to be part of the community, and that is what we would love to see happen more.

Faith Waidaka

Thank you for that. Now, Yann, I can't help but come back to these world models. In my mind, I was thinking they would increase the compute power necessary, so the infrastructure would be bigger. But from your explanation, it looks like being more intelligent means less compute, and the demand moves from the grid side, for training the models, to the inference side, on the devices. So what does that actually mean for the government people, the AI ecosystem, and the startups in this room, and what should their focus be over the next one, five, ten years if these changes are to happen? And I do believe they will happen.

Yann LeCun

Wonderful question. Thank you. So there's going to be another AI revolution. We've seen in recent years the deep learning revolution and the LLM revolution. Unfortunately, the type of AI systems we have access to at the moment manipulate language very well, and that fools a lot of people into thinking we have it made, that we have systems as intelligent as humans, because we think of language abilities as distinctly human. But it's a mistake that generation after generation of computer scientists has made in AI for the last 70 years: discovering a new paradigm and assuming that this paradigm will lead us to systems with human-level intelligence.

And it's just false, and it's false today as well. Our current technology is limited. It's useful, no question; it should be deployed and developed, and it's going to help the people who use it all the time. But it's limited, like previous generations of computer technologies and AI systems. So what is the next revolution? It's the revolution of AI systems that understand the real world. And there are a lot of applications of that throughout the world, across all kinds of domains and market segments if we're talking about commercial systems, or just helping people in their daily lives. Now, it turns out, and we've known this for a long time, that understanding the real world is much, much more complicated than understanding and manipulating language.

Language is a sequence of discrete symbols, and it turns out that makes it easy for computers to handle. But the real world is messy: it's high-dimensional, it's continuous, it's noisy, and it's just much more complicated. I've been making a joke for many years to try to explain this: your house cat is smarter than the biggest LLMs. In many ways that's true; certainly in understanding the physical world, your cat is way smarter than the biggest LLMs. That doesn't mean LLMs cannot accumulate knowledge about the real world, but they don't really understand its underlying nature. So the next revolution is systems that really understand how the world works and learn how the world works, a little bit like children who open their eyes.

And let me give you an interesting number. LLMs today are pre-trained on basically all the text publicly available on the internet, which is mostly English or languages spoken in developed countries, which of course, as this panel has pointed out, is an issue. That represents roughly 10 to the 14 bytes: a one with 14 zeros. It seems like a lot of data, and it is, because it would take any of us about half a million years to read through it. But then compare this with the amount of data that gets to the visual cortex of a young child. In four years, a young child has been awake a total of about 16,000 hours.

And if we put a number on how much data gets to the visual cortex, it's about 2 megabytes per second. Do the arithmetic: that's about 10 to the 14 bytes in four years, instead of half a million years. So it tells you we're never going to get to human-level intelligence, or anything like it, just by training on human-produced text. We're going to have to have systems that are trained to understand the real world through sensory input: video and all kinds of other signals. And by the way, 16,000 hours of video is not a lot of video; it's about 30 minutes of YouTube uploads. A full day of YouTube uploads is about a million hours, which is about 100 years of video. We have video systems trained with that kind of data, and they understand a lot more about the real world than any LLM; they can tell you if something impossible happens in a video they watch, so they've acquired a little bit of common sense. My guess is that this is going to make a lot of progress in the future, and from those kinds of techniques we can build world models. What is a world model? Given a representation of the state of the world at time t, and an action or intervention that you imagine taking, a world model predicts the state of the world at time t+1 resulting from that action or intervention.
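As a quick aside, LeCun's data arithmetic checks out. The reading-speed and bytes-per-word figures below are my own assumptions; the 2 MB/s visual-bandwidth and 16,000 waking hours are the figures quoted in the talk:

```python
# Rough check of the numbers in the talk.

corpus_bytes = 1e14          # ~all public internet text used for LLM pretraining

# Time for one person to read it (assumed: 250 words/min, 5 bytes/word, 8 h/day)
words = corpus_bytes / 5
minutes = words / 250
years_to_read = minutes / 60 / 8 / 365
print(f"{years_to_read:,.0f} years of reading")   # on the order of half a million

# Visual input to a child over four years: 2 MB/s for 16,000 waking hours
child_bytes = 2e6 * 16_000 * 3600
print(f"{child_bytes:.2e} bytes")                 # ~1.15e14: same order as the corpus
```

So under these assumed reading rates, a four-year-old's visual input really is on the same order as the entire text corpus, which is the point of the comparison.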

And this is how you can build intelligent systems: they would be able to predict the consequences of their actions before taking them, and so they would be able to plan and to reason, because reasoning is like planning. Everybody in the industry is talking about agentic systems, but the way agentic systems are built today is not this way; today's agentic systems are not able to predict the consequences of their actions, and that is a terrible way of planning actions. So again, I think we're going to see a revolution over the next few years based on world models, on systems that can learn from real-world, messy data. I'm not very popular in Silicon Valley when I say this, but those are not generative models.
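The definition LeCun gives, predict the state at time t+1 from the state at time t and a candidate action, and the planning it enables, can be shown in a toy sketch. The one-dimensional world and all the function names here are my own illustrative assumptions, nothing from the talk:

```python
# Minimal sketch of planning with a world model: the agent simulates the
# consequences of candidate actions *before* acting, and picks the best one.

def world_model(state, action):
    """Predict the state at time t+1 from the state at time t and an action."""
    moves = {"left": -1, "right": +1, "stay": 0}
    return {"position": state["position"] + moves[action]}

def plan(state, goal, horizon=10):
    """Greedy planning: at each step, imagine every action with the world
    model and choose the one whose predicted outcome is closest to the goal."""
    actions = []
    for _ in range(horizon):
        if state["position"] == goal:
            break
        best = min(("left", "right", "stay"),
                   key=lambda a: abs(world_model(state, a)["position"] - goal))
        actions.append(best)
        state = world_model(state, best)   # imagined rollout, no real action taken
    return actions

print(plan({"position": 0}, goal=3))  # -> ['right', 'right', 'right']
```

The contrast with today's agentic systems, as described in the talk, is that here every action is evaluated through the model's predicted consequences rather than chosen without any forward simulation.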

They're a different type of architecture. And so, yeah, my colleagues who work on LLMs and generative AI don't like me very much. But me, I really like this direction.

Faith Waidaka

So I'm going to ask you a numbers question. What would it take? What kind of money would it take to make this happen faster?

Yann LeCun

Okay, so there are a number of different things that need to happen. The first is that there's a lot of research to be done, academic research. In fact, an interesting phenomenon is that this idea of world models and this non-generative architecture, which I call JEPA, though there are various incarnations of it, is mostly worked on by academic groups interested in applying AI to science, and mostly ignored by industry. Industry, particularly Silicon Valley with its dominant players, is entirely focused on LLMs, and everybody is working on the same thing; everybody is stealing each other's engineers, because nobody can afford to do something slightly different and run the risk of falling behind.

And that creates a kind of monoculture that makes the industry a little blind. So right now it's in the hands of academia, and propping up this kind of research in academia, preventing LLMs from sucking the oxygen out of every room, is, I think, the first step. The second step is that there is, of course, a role for governments and industry to play in pushing those models once they work. And that's what I'm working on; that's why I left Meta and created this company, because I think the time is right to try to make it real. And then, obviously, there are going to be a lot of applications of this everywhere in the world.

There was an experiment run a couple of years ago by some of my colleagues at Meta, where they gave smart glasses to farmers in rural India. You could talk to the assistant in Indic languages, asking things like: what's this disease on my crop? Should I harvest now or wait a little bit? What's the weather tomorrow? There are a lot of things like this that could be useful if the price could be brought down, with systems that really understand the world better than current ones do. And in the future, all of us will be walking around with an AI assistant that will essentially amplify our own intelligence.

It's like each of us will be the leader or manager of a staff of virtual people who are smarter than we are. Which is a great thing, by the way; I'm very familiar with the concept of working with people who are smarter than you, and it's the greatest thing that can happen to you, so we shouldn't feel threatened by it. It's going to allow people to be more knowledgeable, more educated, and to make more rational choices. But we need systems that approach or surpass human intelligence in certain domains and understand the real world.

Faith Waidaka

Thank you, Yann. So we know where Yann is putting his money. Coming back to all my panelists, and not just your own money: if I had 500 million dollars to give, and I'm not asking you for a P&L or a profit, just to help me democratize AI and make it accessible for everyone, where would you each put the money? Let's start with Sanjay.

Sanjay Jain

Incidentally, 500 million is the amount we are looking at raising to get DPI everywhere in the world. We think that getting those underlying systems of record in place, and giving people access to their data in digital form, can empower them so much that they can participate in the AI revolution in the right way, with the right controls and structures in place. So you've kind of just made my case. We would want to take that money, deploy it, and bring everyone up to the same level in terms of digital infrastructure: getting the data, the ledgers, the health records all digitized, so that people can then take advantage of AI for their needs. That's what we would want to do.

Sangbu Kim

Okay. Again, I would say I'll spend that big money developing more use cases. We are identifying agriculture, education, and healthcare, among others, and government services can be a really promising use case field as well. Developing more practical, profitable use cases that add real value will be critical. On top of that, beyond developing the use cases, the more important thing is to change user mindsets and inspire users. One typical problem we face is that our low-income users, clients, and people do not really know what they don't know: even though they could do something with this type of technology, they don't clearly understand what they can do.

So inspiring them, showing that they can really do this with higher productivity and at low cost, would be a very important thing to remind them of. Thank you.

Saurabh Garg

Given the volume of funds available, I would focus a lot more on developing people's capability to use AI to improve productivity. And if I can add to it, I would again stress the need for small, domain-specific, niche models. Small may not be the right word, but domain-specific and niche models will use a lot less power and a lot less infrastructure, and will not have the problems of large language models.

Chenai Chair

So I'm assuming each one of us is getting 500 million? Yes? I co-sign on everything said. In addition, given the point I made about the breadth of work that needs to be done, what is critical for us is having open models and investing in talent. Open models allow people to innovate on top of them. An example of this is Crane AI, which developed an offline-first AI stack focused on health, education, and agricultural services, and which emerged from the Masakhane community. So what happens when we can fund a lot of people to think about this and build on top of open models? And lastly, talent. Talent is very important across the whole value chain: talent to build the models, talent for the uptake and the business cases that motivate people and allow for sustainability, and also talent to build the capacity of end users to understand, so that we create an ecosystem where people are excited about these new technological innovations instead of afraid.

That's been the biggest narrative: you're either very excited or you're very afraid. And coming from a South African context, everyone is afraid of losing their job to AI. So how do we ensure that we're creating an ecosystem that is favorable to innovation?

Faith Waidaka

So as we come to the end of our panel, with everything that's been said, even with all the money on the table, free money, we see that it's not one-size-fits-all. We simply can't focus on one area and leave the rest. We need the talent, we need the compute, we need the data centers, we need the regulatory frameworks and the reforms; we need everything to come together to make this possible. And with that, I'm done with my questions, and I have five minutes. So would someone help me with a mic? I'll take three questions, hopefully from three different people among you.

And then since I see no one, I’m quite good. Thank you. Let’s start here.

Arun Sharma

Thanks, Faith, and thank you all for such a brilliant session. My name is Arun Sharma; I work with the World Bank. My question is to anyone, but Yann specifically: what is the lag between the physical and the virtual world? The physical side is dominated by machinery. You gave the example of a farmer wearing glasses, but the seeds or fertilizer that he orders still run on archaic systems, so obviously there is a lag between the hardware and the software; the software is evolving much faster. Where do you see that going? I ask specifically because, in the Indian system, where we have not been able to deploy our resources is the education space and the healthcare space, where we still lag. Thanks.

Faith Waidaka

Let me take the three questions. I would prefer that you direct the next question to someone else. I'll take a question from the back there.

Audience

Thanks a lot. Daniel Dobos, a particle physicist originally from CERN and now a research director at Swisscom. You mentioned federated learning. Technologically this is easy, but the architecture of the collaboration might be difficult. So do you have ideas about which kind of organization could coordinate this kind of collaboration? Thank you.

Faith Waidaka

Okay, and one last question, let me get from him. The guy with the red flag.

Audience

Hi, thank you. Thank you, sir. My question is to you. You said that we have about 10 to the power of 14 bytes of data, the same amount a child takes in by four or five years of age. So do you think data is the only bottleneck, besides compute and architecture, to getting to AGI, or to artificial superintelligence? And second: when we achieve AGI, what will the benchmark be? How will we benchmark that it is definitely smarter than humans, and how will humans evaluate that? So, yeah, that's it.

Yann LeCun

Quick answers; I'll go in reverse order. There's no such thing as AGI. There is human-level AI, perhaps, but human intelligence is extremely specialized, so calling it general intelligence is complete nonsense. We will build systems that are as intelligent as humans in all the domains where humans are intelligent; it's just not going to be next year, unlike what some colleagues in the industry are claiming. It's going to take a lot longer, and it's not going to be an event. We're not going to discover one secret that just unlocks intelligence; it's going to be progress, and it's going to be much more difficult than we think. It has always been more difficult than we thought in the past, and that's still the case. So no event for AGI, and no AGI as such: human-level AI, yes, and super-intelligent AI, yes, which we should call ASI, artificial superintelligence. You had a second part to your question, but I can't remember it, so I'm going to answer the other one.

There are a number of organizations that could coordinate this. First of all, this federated learning idea for an open-source model should be bottom-up: people actually putting up a GitHub repository and collaborating on building the infrastructure for it. Of course we can get help from governments and organizations, and that's required too, but ultimately people need to write code. There are a number of groups that have already built their own LLMs of pretty good quality: there's a group in Switzerland centered at EPFL and ETH, which you probably know; there's a group in the UAE centered on MBZUAI; and there are similar models in Korea and various other countries. They should all get together, join forces, and then bring in other countries as well. I think CERN can play a role, I think UNESCO can play a role, and I think Switzerland should play a role; they have all those organizations in Geneva, and the next summit is going to be there, so maybe that's the right place, combining bottom-up and top-down. One big organization that can play a role is the AI Alliance, a group that promotes open-source AI.

Faith Waidaka

Yann, let me cut you short; we've run out of time. We would like to thank you all for coming, and thank you so much to all the speakers. We have a small memento from the government side to make this a memorable event. Thank you.


Sangbu Kim

Speech speed

113 words per minute

Speech length

793 words

Speech time

419 seconds

Barrier to AI democratization – data access

Explanation

Kim highlights that limited access to data is a core obstacle for democratizing AI, emphasizing the need for open data and responsible frameworks to enable broader participation.


Evidence

“Number three, data access.” [4]. “From the World Bank point of view, democratizing data computing is very important.” [14].


Major discussion point

Barrier to AI democratization (compute, data, language coverage)


Topics

Artificial intelligence | Closing all digital divides


Indicator of moving from AI consumer to builder

Explanation

Kim argues that full local ownership, control, and utilization of data sets signals a transition from merely consuming AI to building it locally.


Evidence

“Number three, data access.” [4]. “From the World Bank point of view, democratizing data computing is very important.” [14].


Major discussion point

Indicator of moving from AI consumer to builder


Topics

Capacity development | Artificial intelligence


Funding allocation – develop high‑impact use cases

Explanation

Kim proposes allocating funds to create practical, high‑impact AI use cases in sectors such as agriculture, health, and education to inspire users and improve productivity.


Evidence

“So we are identifying agriculture, education, healthcare, and some more.” [83]. “So inspire them that they can really do this with higher productivity, with low cost.” [79]. “On top of that, maybe why we are developing the use cases, more important thing is that some change user mindset and inspire users.” [81].


Major discussion point

Funding allocation / playbook for scaling small AI responsibly


Topics

Financial mechanisms | Capacity development | Artificial intelligence



Faith Waidaka

Speech speed

94 words per minute

Speech length

1085 words

Speech time

691 seconds

Compute as the biggest barrier to AI democratization

Explanation

Waidaka asks the panel to identify the single biggest barrier today, pointing directly to compute as a critical limitation for AI democratization.


Evidence

“But what do we feel is the single biggest barrier today to democratizing AI compute?” [1]. “We need the compute.” [13].


Major discussion point

Barrier to AI democratization (compute, data, language coverage)


Topics

Artificial intelligence | The enabling environment for digital development


Digital public infrastructure enables AI innovation

Explanation

Waidaka queries how DPI can support AI innovation, prompting discussion on trusted, consented data layers that empower AI use while preserving sovereignty.


Evidence

“How do you see digital public infrastructure enabling AI innovation?” [40].


Major discussion point

Digital Public Infrastructure (DPI) as foundation for AI innovation and data sovereignty


Topics

Information and communication technologies for development | Data governance



Yann LeCun

Speech speed

153 words per minute

Speech length

2772 words

Speech time

1083 seconds

Open‑weight, open‑source models remove barriers

Explanation

LeCun stresses that making top‑performing open models publicly available is essential to lower barriers for AI adoption worldwide.


Evidence

“Certainly, the availability of top-performing open models, open-weight but also open-source, would be a way to remove the barrier.” [7].


Major discussion point

Barrier to AI democratization (compute, data, language coverage)


Topics

Artificial intelligence | Closing all digital divides


Federated learning to share data without compromising sovereignty

Explanation

LeCun proposes federated learning as a technical solution that lets regions keep ownership of their data while contributing to global model training, expanding AI diversity and quality.


Evidence

“…regions don’t need to actually communicate that data; they can keep ownership of that data and then contribute to training a global model by exchanging parameter vectors… it’s a form of federated learning and I think this is a way to open up access to AI and it’s absolutely crucial for the future…” [6].


Major discussion point

Barrier to AI democratization (compute, data, language coverage)


Topics

Artificial intelligence | Data governance
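The parameter-exchange idea LeCun describes is the core of federated averaging (FedAvg). The sketch below is illustrative only: the linear model, synthetic regional data, and update rule are assumptions for demonstration, not the system discussed on the panel. What it shows is the key property LeCun points to: each region trains on data that never leaves it, and only parameter vectors are pooled into a global model.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(params, data, lr=0.1):
    """One local training step: a least-squares gradient step on a
    region's private (X, y) data. The raw data is never transmitted."""
    X, y = data
    grad = X.T @ (X @ params - y) / len(y)
    return params - lr * grad

# Three "regions", each holding private data drawn from the same true model.
true_w = np.array([2.0, -1.0])
regions = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    regions.append((X, X @ true_w))

global_params = np.zeros(2)
for _ in range(100):
    # Each region refines the current global parameters on its own data...
    local_params = [local_update(global_params.copy(), d) for d in regions]
    # ...and only these parameter vectors are averaged centrally.
    global_params = np.mean(local_params, axis=0)

print(np.round(global_params, 2))  # converges toward true_w
```

In a real deployment the averaging server sees only model weights, which is what lets regions contribute to a shared model while retaining ownership of the underlying data.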


Hardware limits – CMOS will not see breakthrough for 10‑20 years

Explanation

LeCun notes that current CMOS‑based hardware is a bottleneck and that a major new hardware paradigm is unlikely for another decade or two.


Evidence

“So I’m not expecting some big revolution in hardware design until we start building something else than CMOS transistors and silicon.” [16]. “That’s not happening for another 10 or 20 years.” [17]. “Our current technology is limited.” [18].


Major discussion point

Compute intensity, model scaling and hardware outlook


Topics

Artificial intelligence | Environmental impacts


Future AI paradigm shift toward world models

Explanation

LeCun argues that the next AI revolution will focus on systems that build world models from sensory data, moving beyond purely knowledge‑store LLMs toward reasoning‑centric intelligence.


Evidence

“…there’s a number of organizations that could … build a next generation AI system.” [92].


Major discussion point

Future AI paradigm shift toward world models and AGI considerations


Topics

Artificial intelligence


S

Sanjay Jain

Speech speed

182 words per minute

Speech length

1081 words

Speech time

355 seconds

DPI provides trusted, consented data layers

Explanation

Jain explains that DPI creates a management layer that gives citizens control over their data, enabling AI applications while preserving data sovereignty.


Evidence

“Broadly, DPI provides a way for data of all individuals, so their records, their ID, their transactions.” [41]. “So DPI provides a management layer on that and provides consented access.” [47]. “And so we do think that there’s a lot of reason for countries around the world to adopt DPI systems so that citizens’ data can be managed in a very trusted way, access with consent.” [48].


Major discussion point

Digital Public Infrastructure (DPI) as foundation for AI innovation and data sovereignty


Topics

Information and communication technologies for development | Data governance


Invest in digitizing records to broaden AI access

Explanation

Jain advocates using DPI to digitize health, education, and other records, thereby creating the data foundation needed for AI services in low‑income contexts.


Evidence

“…getting those underlying systems of record getting people access to their data in a digital form can actually empower them… digitized so that then they can take benefit of ai for their needs…” [42].


Major discussion point

Funding allocation / playbook for scaling small AI responsibly


Topics

Financial mechanisms | Capacity development


Open‑source ID platforms empower sovereign digital systems

Explanation

Jain highlights MOSIP as an open‑source ID platform that countries can adopt and customize, supporting sovereign digital identity and broader AI ecosystems.


Evidence

“And MOSIP is a modular open source ID platform that we have supported, which countries are taking and building with their own policy layers, building their own application versions of it.” [53].


Major discussion point

Digital Public Infrastructure (DPI) as foundation for AI innovation and data sovereignty


Topics

Information and communication technologies for development | Data governance


S

Saurabh Garg

Speech speed

130 words per minute

Speech length

700 words

Speech time

321 seconds

Access to open models and AI literacy are needed

Explanation

Garg stresses that without open models and sufficient AI literacy, many stakeholders cannot effectively use AI technologies.


Evidence

“I would say access to models, open models, and AI literacy to be able to utilize those models.” [8].


Major discussion point

Barrier to AI democratization (compute, data, language coverage)


Topics

Artificial intelligence | Capacity development


DPI must be interoperable, shareable, and ensure agency

Explanation

Garg outlines key characteristics for DPI: interoperability, shareability, and giving people agency over their data, to avoid new dependencies.


Evidence

“It needs to be interoperable and shareable.” [37]. “So I think two characteristics of digital public infrastructure, which are key, are to ensure that not only there is access, but also agency of the people.” [44]. “For any system to be a DPI, I think there are a few essential characteristics.” [45].


Major discussion point

Digital Public Infrastructure (DPI) as foundation for AI innovation and data sovereignty


Topics

Information and communication technologies for development | Data governance


Prioritize capability development to reduce compute needs

Explanation

Garg recommends focusing funds on building people’s capability to develop niche, domain‑specific models, which can lower overall compute requirements.


Evidence

“Given the volume of funds available, I would focus a lot more on capability development of people to be able, their ability to use AI for improving productivity.” [80].


Major discussion point

Funding allocation / playbook for scaling small AI responsibly


Topics

Financial mechanisms | Capacity development


C

Chenai Chair

Speech speed

169 words per minute

Speech length

1023 words

Speech time

361 seconds

Participatory, gender‑responsive data infrastructure builds trust

Explanation

Chair emphasizes that data infrastructure must be designed in a participatory, gender-responsive way to earn community trust and relevance.


Evidence

“So if we’re going to build data infrastructure that community trusts is to respond to the realities that they live in and to be participatory.” [65]. “It’s designed to be a gender-responsive project because gender transformative is also the North Star that we’re hoping to get to one day.” [66].


Major discussion point

Community‑driven, participatory data infrastructure and trust


Topics

Capacity development | Closing all digital divides | Social and economic development


Open models and talent investment accelerate community‑led AI

Explanation

Chair argues that investing in open models, talent, and community projects like Crane AI enables grassroots AI solutions in health, education, and agriculture.


Evidence

“…open models do allow for people to innovate on top of them and an example of this is Crane AI which actually developed a offline first AI stack focusing on health education and agricultural services and they emerged from the Masakhane community… talent is very important across the whole value chain…” [69].


Major discussion point

Funding allocation / playbook for scaling small AI responsibly


Topics

Financial mechanisms | Capacity development | Artificial intelligence


African language NLP hub addresses representation gap

Explanation

Chair highlights the Masakhane African Languages Hub, a grassroots effort to develop NLP resources for under-documented African languages, reducing the language representation gap.


Evidence

“And I am Chenai Chair, the director of Masakhane African Languages Hub, which emerged from a grassroots community called Masakhane, focusing on African language NLP.” [70].


Major discussion point

Barrier to AI democratization (compute, data, language coverage)


Topics

Closing all digital divides | Artificial intelligence


A

Arun Sharma

Speech speed

157 words per minute

Speech length

140 words

Speech time

53 seconds

Physical‑software lag hampers AI impact

Explanation

Sharma points out that while software evolves quickly, hardware and physical supply chains lag, creating a mismatch that limits the real‑world impact of AI tools such as farmer smart glasses.


Evidence

“So obviously there is a lag between the hardware and the software.” [19]. “My question to anyone, Jan specifically, what is the lag that we have in the physical and the virtual world?” [31]. “The software is evolving much faster.” [32].


Major discussion point

Compute intensity, model scaling and hardware outlook


Topics

Artificial intelligence | Information and communication technologies for development


A

Audience

Speech speed

146 words per minute

Speech length

166 words

Speech time

67 seconds

Question on data as bottleneck for AGI

Explanation

An audience member asks whether data is the sole bottleneck compared to compute and architecture in achieving AGI, highlighting concerns about data availability.


Evidence

“So do you think that data is the only bottleneck, despite of compute and the architecture, to get the AGI, or maybe the humans, the superintelligence, artificial superintelligence?” [5].


Major discussion point

Barrier to AI democratization (compute, data, language coverage)


Topics

Artificial intelligence | Data governance


Agreements

Agreement points

Data access and ownership are fundamental barriers to AI democratization

Speakers

– Sangbu Kim
– Sanjay Jain
– Yann LeCun

Arguments

Concentration of digitized data heavily skewed toward developed world


Personal data accessibility through protected means is essential for AI to reach everyone


Federated learning allows regions to contribute to global models while maintaining data ownership


Summary

All three speakers identify data access as a critical barrier, with Kim highlighting the severe inequality in global data distribution, Jain emphasizing the need for protected personal data access, and LeCun proposing federated learning as a solution that maintains data ownership while enabling global AI development


Topics

Data governance | Artificial intelligence | Closing all digital divides


Open models are essential for democratizing AI access

Speakers

– Saurabh Garg
– Yann LeCun
– Chenai Chair

Arguments

Access to open models and AI literacy are primary barriers


Availability of top-performing open models is necessary but insufficient condition


Open models, talent development, and capacity building across the entire value chain are essential


Summary

All three speakers agree that open models are crucial for AI democratization, with Garg identifying access to open models as a primary barrier, LeCun noting they are necessary but not sufficient, and Chair emphasizing their importance for enabling innovation


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Community participation and local ownership are critical for sustainable AI development

Speakers

– Chenai Chair
– Sangbu Kim
– Sanjay Jain

Arguments

Community participation and meeting people where they are builds trust in data infrastructure


Local data ownership and context control remains with communities despite infrastructure inequality


DPI provides consented access to individual records and transactions through federated learning approaches


Summary

These speakers converge on the importance of community involvement and local control, with Chair emphasizing participatory approaches, Kim highlighting local data ownership opportunities, and Jain describing DPI mechanisms that enable community control over their data


Topics

Data governance | Human rights and the ethical dimensions of the information society | Closing all digital divides


Talent development and capacity building are essential across the AI value chain

Speakers

– Saurabh Garg
– Chenai Chair
– Sangbu Kim

Arguments

Access to open models and AI literacy are primary barriers


Open models, talent development, and capacity building across the entire value chain are essential


User-centric AI services reduce training requirements and allow verbal interaction without technical skills


Summary

All three speakers emphasize the critical importance of developing human capacity, with Garg focusing on AI literacy, Chair advocating for comprehensive talent development across the value chain, and Kim emphasizing user-centric design that reduces technical skill requirements


Topics

Capacity development | Artificial intelligence | Closing all digital divides


Similar viewpoints

Both speakers recognize that current AI systems are inefficient and advocate for approaches that reduce computational requirements – LeCun through market incentives driving efficiency improvements, and Garg through domain-specific models that require less infrastructure

Speakers

– Yann LeCun
– Saurabh Garg

Arguments

Industry incentives naturally drive power consumption optimization because operational costs focus on power and hardware


Domain-specific models using less power and infrastructure are preferable to large language models


Topics

Artificial intelligence | Environmental impacts | The enabling environment for digital development


Both speakers strongly advocate for digital public infrastructure as a foundation for AI democratization, with Jain providing specific funding requirements and Garg emphasizing the importance of ensuring people have agency as co-creators rather than just consumers

Speakers

– Sanjay Jain
– Saurabh Garg

Arguments

Digital public infrastructure deployment requires $500 million to bring everyone to same digital level


Digital public infrastructure must ensure access and agency for people to be co-creators, not just consumers


Topics

Information and communication technologies for development | Data governance | Financial mechanisms


Both speakers advocate for bottom-up, community-driven approaches to AI development, with Chair demonstrating this through Masakhane’s success and LeCun proposing similar collaborative models for federated learning

Speakers

– Chenai Chair
– Yann LeCun

Arguments

Participatory approaches like Masakhane demonstrate community ownership and sustainability


Bottom-up collaboration through code development combined with government and organizational support


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Unexpected consensus

Current AI systems are fundamentally limited and a new revolution is needed

Speakers

– Yann LeCun
– Sangbu Kim

Arguments

Current LLMs are knowledge storage systems requiring enormous memory, but smarter systems could replace knowledge with intelligence


User-centric AI services reduce training requirements and allow verbal interaction without technical skills


Explanation

It’s unexpected to see both a leading AI researcher (LeCun) and a World Bank representative (Kim) agree that current AI approaches are inadequate. LeCun argues for a complete architectural shift toward world models, while Kim emphasizes the need for more user-centric approaches, both suggesting fundamental changes are needed rather than incremental improvements


Topics

Artificial intelligence | The enabling environment for digital development


Physical infrastructure alone is insufficient without demand creation and use cases

Speakers

– Faith Waidaka
– Sangbu Kim

Arguments

AI democratization requires a comprehensive approach addressing multiple interconnected elements simultaneously


Creating demand for computing power through clear applications is more crucial than just building infrastructure


Explanation

It’s surprising that Waidaka, who builds physical data center infrastructure, agrees with Kim that infrastructure alone is not the solution. Despite her role in building the physical foundation for AI, she acknowledges that a holistic approach including talent, regulatory frameworks, and use cases is necessary


Topics

Information and communication technologies for development | The enabling environment for digital development | Social and economic development


Overall assessment

Summary

The speakers demonstrate remarkable consensus on key principles for AI democratization: the critical importance of data governance and community ownership, the necessity of open models and federated approaches, the centrality of capacity building and talent development, and the need for holistic rather than infrastructure-only solutions. There is also agreement that current AI approaches have fundamental limitations requiring new paradigms.


Consensus level

High level of consensus across diverse stakeholders (academic, government, civil society, private sector, international organizations) suggests these principles represent a solid foundation for policy and implementation. The convergence is particularly significant given the speakers’ different backgrounds and roles, indicating these viewpoints transcend sectoral interests and could form the basis for coordinated global action on AI democratization.


Differences

Different viewpoints

Primary barrier to AI democratization

Speakers

– Chenai Chair
– Saurabh Garg
– Sangbu Kim
– Sanjay Jain

Arguments

Language diversity creates enormous scope of work with over 2,000 documented African languages


Access to open models and AI literacy are primary barriers


Concentration of digitized data heavily skewed toward developed world


Personal data accessibility through protected means is essential for AI to reach everyone


Summary

Speakers identified different primary barriers: Chenai focused on linguistic diversity and community representation, Saurabh emphasized model access and literacy, Sangbu highlighted data inequality, and Sanjay stressed personal data accessibility through protected systems


Topics

Artificial intelligence | Closing all digital divides | Data governance


Infrastructure vs. application focus for AI development

Speakers

– Faith Waidaka
– Sangbu Kim

Arguments

Building electrical and mechanical infrastructure for data centers in Africa is essential for making AI possible


Creating demand for computing power through clear applications is more crucial than just building infrastructure


Summary

Faith emphasized the foundational importance of physical infrastructure for data centers, while Sangbu argued that developing use cases and applications to create demand is more important than just building physical infrastructure


Topics

Information and communication technologies for development | The enabling environment for digital development


Future of AI compute requirements

Speakers

– Yann LeCun
– Faith Waidaka

Arguments

Current LLMs are knowledge storage systems requiring enormous memory, but smarter systems could replace knowledge with intelligence


Moving inference closer to devices creates tension with compute requirements in the next decade


Summary

Yann predicted that training models will become smaller as AI becomes more intelligent rather than knowledge-storing, while Faith observed the tension between this prediction and the practical need to bring inference closer to end-user devices


Topics

Artificial intelligence | Environmental impacts


Timeline and approach for AI breakthroughs

Speakers

– Yann LeCun
– Faith Waidaka

Arguments

Real breakthrough in hardware design won’t happen for 10-20 years beyond CMOS transistors


Ten years provides significant time for research breakthroughs given AI’s rapid evolution


Summary

Yann was more conservative about hardware breakthroughs, suggesting 10-20 years for major advances beyond current silicon technology, while Faith was more optimistic about the potential for breakthroughs within a decade given AI’s rapid progress


Topics

Artificial intelligence | Environmental impacts | The enabling environment for digital development


Unexpected differences

Role of physical vs. digital infrastructure prioritization

Speakers

– Faith Waidaka
– Sangbu Kim

Arguments

Building electrical and mechanical infrastructure for data centers in Africa is essential for making AI possible


Creating demand for computing power through clear applications is more crucial than just building infrastructure


Explanation

This disagreement was unexpected because both speakers represent infrastructure-focused organizations (Africa Data Center Association and World Bank), yet they had fundamentally different views on whether to prioritize physical infrastructure building or application development first


Topics

Information and communication technologies for development | The enabling environment for digital development


Academic vs. industry AI research direction

Speakers

– Yann LeCun

Arguments

Academic research on non-generative architectures is being overlooked by industry’s LLM focus


Explanation

LeCun’s criticism of Silicon Valley’s LLM monoculture was unexpected given his recent departure from Meta and his position as a leading industry figure, suggesting significant internal tensions about AI development directions within the tech industry


Topics

Artificial intelligence | Capacity development | The enabling environment for digital development


Overall assessment

Summary

The discussion revealed moderate disagreements primarily around prioritization and sequencing rather than fundamental goals. Key areas of disagreement included: what constitutes the primary barrier to AI democratization, whether to prioritize physical infrastructure or applications first, and timelines for technological breakthroughs. Most speakers agreed on core principles like the importance of open models, community ownership of data, and inclusive AI development.


Disagreement level

The disagreement level was moderate and constructive, with speakers offering complementary rather than contradictory perspectives. The disagreements reflect different professional backgrounds and regional contexts rather than fundamental philosophical differences. This suggests that a comprehensive approach incorporating multiple viewpoints would be most effective for AI democratization, rather than choosing a single approach. The consensus on core principles provides a strong foundation for collaborative action despite tactical differences.


Partial agreements

Partial agreements

All agreed on the importance of open models for AI democratization, but disagreed on implementation approach – Yann focused on federated learning and data contribution, Saurabh emphasized literacy and domain-specific models, while Chenai stressed community participation and comprehensive talent development

Speakers

– Yann LeCun
– Saurabh Garg
– Chenai Chair

Arguments

Availability of top-performing open models is necessary but insufficient condition


Access to open models and AI literacy are primary barriers


Open models, talent development, and capacity building across the entire value chain are essential


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


All agreed on the importance of local data ownership and community control, but differed on mechanisms – Sangbu focused on leveraging existing local data advantages, Chenai emphasized participatory community-driven approaches, while Sanjay advocated for formal DPI systems with consented access

Speakers

– Sangbu Kim
– Chenai Chair
– Sanjay Jain

Arguments

Local data ownership and context control remains with communities despite infrastructure inequality


Community participation and meeting people where they are builds trust in data infrastructure


DPI provides consented access to individual records and transactions through federated learning approaches


Topics

Data governance | Human rights and the ethical dimensions of the information society | Information and communication technologies for development


Both agreed on the need for efficient, scalable infrastructure solutions, but Sanjay focused on comprehensive DPI deployment while Saurabh emphasized smaller, specialized models as the solution to infrastructure constraints

Speakers

– Sanjay Jain
– Saurabh Garg

Arguments

Digital public infrastructure deployment requires $500 million to bring everyone to same digital level


Domain-specific models using less power and infrastructure are preferable to large language models


Topics

Information and communication technologies for development | Financial mechanisms | Artificial intelligence




Takeaways

Key takeaways

Democratizing AI requires a holistic approach addressing five key areas: energy access, computing power, data access, talent building, and responsible AI frameworks and policy


Data inequality is stark – over 80% of global datasets are from developed countries, with less than 2% from sub-Saharan Africa


Community participation and ownership are essential for building trusted AI infrastructure, as demonstrated by the Masakhane African Languages Hub’s participatory approach


Current LLMs are primarily knowledge storage systems requiring enormous computational resources, but future AI systems focused on intelligence rather than knowledge accumulation could be more efficient


Digital Public Infrastructure (DPI) can enable AI democratization by providing trusted, interoperable systems that give people agency as co-creators rather than just consumers


Creating demand for computing power through practical use cases (agriculture, education, healthcare, government services) is more important than just building physical infrastructure


The next AI revolution will focus on world models that understand the real world through sensory input, enabling better planning and reasoning capabilities


Open source models and federated learning approaches can help regions maintain data sovereignty while contributing to global AI development


Resolutions and action items

Development of the METRI platform (Multi-stakeholder AI for Trusted and Resilient Infrastructure) as a modular digital public good


Implementation of Project Echo by Masakhane as a gender-responsive AI project focusing on women’s economic empowerment and health


Continued funding and support for grassroots efforts like Masakhane for African language representation in AI


Bottom-up collaboration through code development for federated learning, potentially coordinated through organizations like CERN, UNESCO, and the AI Alliance


Investment in open source DPI systems that countries can adopt and customize (like MOSIP for digital ID, OpenG2P for government payments)


Focus on developing domain-specific, smaller models that require less computational power and infrastructure


Unresolved issues

How to effectively coordinate international collaboration for federated learning at scale


The timeline and practical implementation of transitioning from current LLMs to world model-based AI systems


Specific mechanisms for ensuring data sovereignty while enabling global AI model training


How to bridge the gap between rapidly evolving AI software and slower-moving physical infrastructure and systems


Standardization and interoperability challenges across different countries’ DPI implementations


Sustainable funding models for community-driven AI development initiatives


Balancing profit motives of private sector with democratization goals


Addressing the fear of job displacement from AI, particularly in developing countries


Suggested compromises

Federated learning approach that allows data contribution to global models while maintaining local ownership and control


Modular DPI systems that can be customized by individual countries while maintaining interoperability


Combination of technological and policy-based mechanisms to prevent new dependencies while enabling collaboration


Focus on user-centric AI design that reduces technical barriers while building local capacity


Investment strategy that balances infrastructure development with use case creation and talent building


Open source approach combined with government and organizational support for sustainable development


Thought provoking comments

Current systems are not particularly intelligent but they store knowledge. There is another revolution of AI coming, which actually my new company is built around, which intends to build systems that are smarter even if they don’t necessarily accumulate as much knowledge, so those models will be smaller… your house cat is smarter than the biggest LLMs

Speaker

Yann LeCun


Reason

This fundamentally challenges the prevailing narrative about current AI capabilities and reframes the entire discussion from scaling existing models to developing fundamentally different approaches. The cat analogy is particularly powerful in illustrating the gap between language manipulation and true world understanding.


Impact

This shifted the conversation from focusing on democratizing access to current AI systems toward considering what the next generation of AI might look like. It influenced Faith’s follow-up questions about compute requirements and forced other panelists to think beyond current LLM paradigms when discussing infrastructure needs.


We need to think differently… even though computing power is very important, how can we really create the data demand. So without having very clear application and some solutions, nobody can really run their own computing data center business in Africa.

Speaker

Sangbu Kim


Reason

This inverts the typical infrastructure-first approach to AI democratization, arguing that demand creation through practical applications should drive infrastructure development rather than the reverse. It’s a crucial economic insight often overlooked in technical discussions.


Impact

This comment redirected the panel’s focus from supply-side solutions (more compute, more data centers) to demand-side considerations (use cases, applications, user inspiration). It influenced subsequent discussions about practical applications and user-centric design, with multiple panelists later emphasizing the importance of meeting communities where they are.


If various regions of the world collect or digitize their cultural data… and then contribute to training a global model, that would constitute eventually a repository of all human knowledge, then those models would be much better quality than all the proprietary systems, because the proprietary systems would not have access to that data… this can be done technically through federated learning

Speaker

Yann LeCun


Reason

This presents a concrete technical pathway for developing countries to gain leverage in AI development by contributing unique cultural data while maintaining sovereignty. It transforms the narrative from ‘catching up’ to ‘leading through unique contributions.’


Impact

This concept became a recurring theme throughout the discussion, with Saurabh Garg building on it in his METRI platform proposal and influencing questions about federated learning coordination. It provided a technical foundation for several panelists’ arguments about data sovereignty and collaborative development.


Masakhane basically means ‘we build together’, loosely translated from isiZulu. And that was then a creation of a participatory approach in knowledge building as a result of being excluded in spaces… if you’re going to build trust, people have to see what the end value is and also be recognized.

Speaker

Chenai Chair


Reason

This provides a concrete, successful example of community-driven AI development that challenges top-down approaches. The emphasis on recognition and participatory design offers a practical model for inclusive AI development that goes beyond consultation to genuine co-creation.


Impact

This grounded the theoretical discussions in real-world success, influencing how other panelists framed their responses about community engagement. It reinforced the importance of bottom-up approaches and was referenced by Sanjay Jain as an example of the kind of grassroots efforts that should be funded.


AI will scale effectively only when data for everyone is available. So when I can get a personalized service because my personal data is accessible through some protected means to a model… DPI provides a way for data of all individuals… to be managed in a very trusted way, access with consent.

Speaker

Sanjay Jain


Reason

This connects AI democratization to broader digital infrastructure development, suggesting that individual data empowerment through DPI is a prerequisite to meaningful AI access. It reframes the challenge from collective to individual data sovereignty.


Impact

This introduced the DPI framework as a foundational layer for AI democratization, influencing subsequent discussions about data sovereignty and individual empowerment. It provided a concrete policy pathway that other panelists could build upon, particularly regarding federated approaches and community ownership.
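The consent-gated data access Jain describes can be illustrated with a small sketch. All names, the token format, and the signing scheme below are hypothetical, chosen only to show the pattern of a consent manager issuing a time-limited, purpose-bound authorization that a data holder verifies before releasing anything; they do not depict any specific DPI standard.

```python
# Hypothetical sketch: an AI service obtains an individual's data only via
# a consent manager that issues a signed, time-limited, purpose-bound token.
import hmac, hashlib, json, time

SECRET = b"consent-manager-signing-key"  # stand-in for real key management

def grant_consent(user_id, purpose, ttl_seconds=3600):
    """Consent manager issues a signed consent token for one purpose."""
    payload = json.dumps({"user": user_id, "purpose": purpose,
                          "expires": time.time() + ttl_seconds})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def fetch_personal_data(store, token, purpose):
    """Data holder releases a record only for a valid, unexpired token
    whose declared purpose matches the request."""
    expected = hmac.new(SECRET, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    claims = json.loads(token["payload"])
    if (hmac.compare_digest(expected, token["sig"])
            and claims["purpose"] == purpose
            and claims["expires"] > time.time()):
        return store.get(claims["user"])
    raise PermissionError("no valid consent for this access")

store = {"alice": {"lang": "sw", "region": "KE"}}  # illustrative record
token = grant_consent("alice", purpose="personalization")
print(fetch_personal_data(store, token, "personalization"))
```

The point of the design is that access is mediated by consent rather than by possession: the same record is refused if the requesting purpose differs from the one the individual granted.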


The incentives are there for the industry to reduce the power consumption of AI systems… because that’s where the money goes. That’s where you spend all the money when you operate an AI system. It goes into power and maintaining your hardware… The bad news is that it’s progressing as fast as it can, and it’s not fast enough.

Speaker

Yann LeCun


Reason

This provides a realistic economic analysis of AI efficiency improvements, tempering optimistic expectations about rapid cost reductions while explaining why progress is happening. It’s particularly insightful because it aligns economic incentives with democratization goals.


Impact

This helped ground the discussion in economic realities and influenced how other panelists approached infrastructure planning. It suggested that waiting for dramatic efficiency improvements isn’t viable, pushing the conversation toward alternative approaches like smaller, domain-specific models and different architectural approaches.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional approaches to AI democratization. Rather than focusing solely on replicating Western AI infrastructure in developing countries, the conversation evolved toward more nuanced strategies: leveraging unique cultural data as competitive advantage, building from community needs upward, and preparing for fundamentally different AI architectures. The interplay between Yann LeCun’s technical insights about AI limitations and future directions, combined with Chenai Chair’s community-driven examples and the policy frameworks from Sanjay Jain and Saurabh Garg, created a multi-dimensional approach that moved beyond simple resource transfer to genuine co-creation and innovation. The discussion ultimately reframed AI democratization from a catch-up game to an opportunity for leapfrogging through alternative approaches.


Follow-up questions

How can we create demand for computing power in developing regions, particularly Africa?

Speaker

Sangbu Kim


Explanation

Kim emphasized that while physical infrastructure is important, the more crucial question is how to create applications and solutions that generate demand for computing power, without which data center businesses cannot be sustainable in Africa.


What technological breakthroughs are needed to significantly reduce AI power consumption beyond current optimization efforts?

Speaker

Yann LeCun


Explanation

LeCun noted that while industry incentives exist to reduce power consumption, progress isn’t fast enough, and real breakthroughs may require moving beyond CMOS transistors and silicon, which won’t happen for 10-20 years.


How can federated learning be implemented technically to allow regions to contribute to global AI models while maintaining data ownership?

Speaker

Yann LeCun


Explanation

LeCun mentioned this as a technical solution for democratizing AI but acknowledged he didn’t want to get into the technical weeds, leaving the implementation details as an area for further exploration.
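For readers exploring those implementation details, the canonical scheme is federated averaging (FedAvg): each region trains on its own data and shares only model weights, which a coordinator averages, weighted by dataset size. The toy data, learning rate, and linear model below are illustrative assumptions, not anything described in the session.

```python
# Minimal federated averaging sketch: three hypothetical "regions" each hold
# private (x, y) data from roughly y = 2x and share only model weights.

def local_update(w, data, lr=0.1, epochs=5):
    """One region's training pass: gradient descent for a 1-D linear
    model y = w * x. The raw data never leaves the region."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Coordinator step: average weights, weighted by dataset size."""
    return sum(w * n for w, n in zip(local_weights, sizes)) / sum(sizes)

regions = [                       # private datasets, one per region
    [(1.0, 2.1), (2.0, 3.9)],
    [(0.5, 1.0), (1.5, 3.1), (3.0, 6.2)],
    [(2.5, 5.0)],
]

global_w = 0.0
for _ in range(20):               # communication rounds
    updates = [local_update(global_w, data) for data in regions]
    global_w = federated_average(updates, [len(d) for d in regions])

print(global_w)                   # converges near the shared slope ~2.0
```

The open coordination question raised later in the session (who runs the averaging step, and under what governance) sits outside the algorithm itself: FedAvg only guarantees that raw data stays local, not that the aggregator is neutral.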


What organizational structure could coordinate federated learning collaboration between different countries and regions?

Speaker

Daniel Dobos (audience member)


Explanation

While federated learning is technically feasible, the architecture of collaboration between different entities remains a challenge that needs to be addressed.


How can the METRI platform be developed and scaled as a modular digital public infrastructure for AI?

Speaker

Saurabh Garg


Explanation

Garg introduced the concept of METRI (Multi-stakeholder AI for Trusted and Resilient Infrastructure), but its specific implementation details and scaling mechanisms need further development.


How can world models be developed and what research funding is needed to accelerate this next AI revolution?

Speaker

Yann LeCun


Explanation

LeCun emphasized that world models represent the next AI revolution but noted that most work is happening in academia while industry focuses on LLMs, suggesting a need for more research support.


How can the lag between rapidly evolving AI software and slower-changing physical infrastructure be addressed?

Speaker

Arun Sharma (audience member)


Explanation

Sharma pointed out the disconnect between fast software evolution and archaic physical systems, particularly in sectors like agriculture, education, and healthcare.


What are the specific benchmarks and evaluation methods for determining when AI systems reach human-level intelligence?

Speaker

Audience member


Explanation

The question of how to benchmark and evaluate AGI or human-level AI remains unresolved, particularly regarding how humans would evaluate systems that might be smarter than them.


How can community network models be adapted and scaled for AI infrastructure development?

Speaker

Chenai Chair


Explanation

Chair referenced community network models for last-mile connectivity as a template for community-owned AI infrastructure, but the specific adaptation mechanisms need further research.


What specific mechanisms are needed to ensure federated AI systems don’t create new dependencies while maintaining data sovereignty?

Speaker

Saurabh Garg


Explanation

While Garg mentioned the need for technological and policy-based protocols, the specific mechanisms to achieve this balance require further development.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.