Building Public Interest AI Catalytic Funding for Equitable Compute Access
20 Feb 2026 11:00h - 12:00h
Session at a glance
Summary
This discussion focused on democratizing AI resources, particularly compute infrastructure, to ensure equitable access to artificial intelligence capabilities across the Global South. The session was organized around India’s ambitious AI mission, which aims to deploy over 38,000 GPUs as public infrastructure, demonstrating how developing nations can build sovereign AI capabilities rather than simply consuming technology from established powers.
Dr. Saurabh Garg outlined India’s approach through the proposed “Maitri” platform, a collaborative framework designed to provide shared access to compute, data, and AI models as digital public goods. He emphasized six foundational pillars: compute, capability, collaboration, connectivity, compliance, and context, arguing that democratization requires moving beyond hardware to include skills development, governance frameworks, and cultural adaptation.
The panelists challenged the narrow focus on compute ownership, with Martin Tisné warning against potential “white elephant” data centers that remain underutilized. He advocated for greater investment in open-source ecosystems and data governance solutions that enable privacy-preserving data sharing. Vilas Dhar criticized the concept of “AI diffusion” as resembling trickle-down economics, arguing that true democratization requires active institutional intervention rather than passive distribution of technology.
Dr. Shikoh Gitau presented concrete data from Africa’s perspective, revealing that the continent needs 2.5 million GPU hours annually but currently has access to only 5% of that capacity. She introduced a “compute demand index” to quantify actual needs rather than relying on abstract requests for resources. Shaun Seow highlighted practical limitations of cross-border compute sharing, including latency issues and data sovereignty requirements, while noting the importance of aggregating demand to negotiate better pricing.
The discussion concluded with calls for new institutional frameworks that prioritize interdependence over independence, focusing on building collaborative partnerships that create mutual value rather than perpetuating dependency relationships between developed and developing nations.
Keypoints
Major Discussion Points:
– Democratizing AI through compute infrastructure as public utility: India’s AI mission with 38,000 GPUs represents a shift toward treating compute capacity as public infrastructure, similar to water and electricity, with intelligent prioritization rather than rationing for public interest applications.
– Moving beyond hardware to holistic AI ecosystems: Panelists emphasized that compute alone is insufficient – successful AI democratization requires addressing the full stack including data governance, open-source models, talent development, institutional capacity, and contextual localization for diverse languages and cultures.
– Quantifying actual compute needs vs. abstract demands: Dr. Shikoh Gitau presented concrete research showing Africa needs 2.5 million GPU hours annually, highlighting the importance of moving from vague requests for “more GPUs” to specific, measurable demands backed by investment readiness assessments.
– Redefining sovereignty from territorial control to relational agency: The discussion challenged traditional notions of AI sovereignty, moving away from simply owning physical infrastructure toward building collaborative, interdependent systems that provide genuine agency and participation in AI development.
– South-South collaboration and new institutional frameworks: Emphasis on creating new models of cooperation between Global South nations, supported by philanthropic organizations and innovative intermediaries, rather than replicating traditional North-South dependency patterns.
Overall Purpose:
The discussion aimed to move beyond theoretical concepts of AI democratization toward concrete, actionable strategies for expanding equitable access to AI resources globally. The session focused on operationalizing the “Maitri” platform concept and establishing practical frameworks for South-South collaboration in AI infrastructure development.
Overall Tone:
The discussion maintained a consistently pragmatic and solution-oriented tone throughout. While acknowledging significant challenges, speakers demonstrated cautious optimism about the potential for meaningful change. The conversation was collaborative rather than competitive, with panelists building on each other’s ideas. There was a notable shift from abstract policy discussions toward concrete metrics and implementation strategies, particularly evident in Dr. Gitau’s presentation of specific GPU hour requirements and investment readiness frameworks.
Speakers
Speakers from the provided list:
– Deepali Khanna – Works at the Rockefeller Foundation, appears to be in a leadership role focused on AI democratization initiatives
– Sushant Kumar – Associated with Kalpa Impact, involved in research on computational resources for AI futures
– Dr. Saurabh Garg – Secretary of the Ministry of Statistics and Program Implementation for the Government of India, Chair of the Democratizing AI Resources Working Group, instrumental in shaping India’s AI governance and previously led the technology stack for the transformative Aadhaar initiative
– Andrew Sweet – VP at the Rockefeller Foundation, served as moderator for the panel discussion
– Martin Tisné – Founder of Current AI and public interest envoy for France’s AI Action Summit, has 15 years of experience building multi-stakeholder initiatives like the Open Government Partnership
– Vilas Dhar – President of the Patrick J. McGovern Foundation, serves on the UN Secretary General’s High-Level Advisory Board on AI, leads philanthropic movements for AI for public purpose
– Dr. Shikoh Gitau – Founder and CEO of Kala AI, established Safaricom Alpha, leading voice in digital transformation in Africa focusing on education, healthcare and agriculture
– Shaun Seow – CEO of Philanthropy Asia Alliance, working to catalyze collaborative philanthropy across Asia, has expertise from time at Temasek and as CEO of Mediacorp
Additional speakers:
– Shri Abhishek Singh – Mentioned as a leader who was pulled into another meeting but noted for his leadership and partnership in the AI initiative (not present during the discussion)
Full session report
This discussion on democratising AI resources brought together leading voices from government, philanthropy, and civil society to address equitable access to artificial intelligence capabilities, with a particular focus on expanding AI participation across the Global South. The session was organised around India’s AI mission and featured the release of a working report on computational resources for AI futures.
Opening: The Compute Divide Challenge
Deepali Khanna from the Rockefeller Foundation opened by framing the central challenge: the digital divide is evolving into a compute divide that will determine who shapes the future of artificial intelligence. She acknowledged various leaders including Abhishek Singh, Dr. Garg, Andrew Sweet, Charu, and others for their contributions to the working group process. Khanna emphasised that AI today is constrained not by imagination but by infrastructure, creating a situation where technological capability is concentrated in a handful of geographies while vast populations remain excluded from AI development and benefits.
India’s AI Mission and the Maitri Platform
Dr. Saurabh Garg, Secretary of the Ministry of Statistics and Programme Implementation for the Government of India, provided detailed insights into India’s approach through the proposed Maitri platform—Multi-Stakeholder AI for Trusted and Resilient Infrastructure. India’s AI mission targets deploying over 38,000 GPUs as public infrastructure, treating compute capacity as a public utility rather than a private commodity.
The Maitri framework is built around six foundational pillars: compute, capability, collaboration, connectivity, compliance, and context. Dr. Garg described this as a non-binding, voluntary, modular approach that countries can adopt and customise according to their specific needs. He chairs one of seven working groups focused on democratising AI resources, alongside representatives from Kenya and Egypt.
Dr. Garg also cited Vishal Sikka’s observation about the contrast between current AI systems requiring gigawatts of power and human intelligence operating on roughly 100 watts, raising questions about the efficiency of current AI development approaches. He outlined three guiding principles or “sutras”: people, planet, and progress.
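The arithmetic behind that contrast is easy to verify. In the sketch below, the 2,000-kilocalorie daily intake is the figure quoted in the session; the one-gigawatt data-centre draw is an illustrative assumption used only to size the ratio:

```python
# Back-of-the-envelope comparison of human vs. data-centre power draw.
# The 2,000 kcal/day figure comes from the session; the 1 GW data-centre
# figure is an illustrative assumption, not a number quoted by the panel.

KCAL_TO_JOULES = 4184            # 1 kcal = 4,184 J
SECONDS_PER_DAY = 24 * 60 * 60

# Average power of a human running on 2,000 kcal per day
human_watts = 2000 * KCAL_TO_JOULES / SECONDS_PER_DAY
print(f"Human metabolism: ~{human_watts:.0f} W")  # ~97 W, about a 100 W bulb

# A hypothetical gigawatt-scale AI facility, as discussed in "gigawatt" terms
datacenter_watts = 1e9
print(f"1 GW facility draws ~{datacenter_watts / human_watts:,.0f}x a human")
```

Even under these rough assumptions, a single gigawatt-scale facility draws as much power as roughly ten million human metabolisms, which is the efficiency gap the panel pointed to.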
Panel Discussion: Practical Challenges and Solutions
Andrew Sweet moderated a panel discussion featuring Martin Tisné (founder of Current AI and public interest envoy for France’s AI Action Summit), Dr. Shikoh Gitau (founder and CEO of Kala AI), Vilas Dhar (president of the Patrick J. McGovern Foundation), and Shaun Seow (CEO of Philanthropy Asia Alliance).
Quantifying Africa’s Compute Needs
Dr. Shikoh Gitau presented concrete data on Africa’s compute requirements, noting that many in the audience had likely never seen a GPU in person, illustrating the access gap. Her research revealed that Africa requires 2.5 million GPU hours annually, scaling to 7.5 million hours over the next three years. Currently, the continent has access to only about 5% of this capacity.
Dr. Gitau’s compute demand index provides specific, quantifiable targets rather than abstract requests for “more GPUs.” She described negotiations with UNDP Italy for 1.5 million GPU hours as an example of how concrete numbers enable productive partnership discussions. Her AI Investment Readiness Index assesses whether countries have the foundational capacity to effectively utilise compute infrastructure, evaluating factors like power availability, talent pools, and governance structures.
She illustrated challenges through the “Nigerian paradox”—Nigeria ranks highest on compute demand due to its large internet user base and strong e-commerce performance, but faces investment readiness challenges related to power infrastructure and institutional capacity.
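The gap behind these figures can be made explicit with a short calculation. The demand numbers (2.5 million GPU hours annually, roughly 5% current access) come from the session; the readiness weights and the example scores below are hypothetical illustrations of how such an index might combine factors, not Dr. Gitau’s actual methodology:

```python
# Sketch of the supply gap behind the compute demand index.
# Demand figures (2.5M GPU hours/year, ~5% current access) are from the
# session; the readiness weights and example scores are hypothetical.

annual_demand_gpu_hours = 2_500_000
current_access_share = 0.05

supplied = annual_demand_gpu_hours * current_access_share
gap = annual_demand_gpu_hours - supplied
print(f"Supplied: {supplied:,.0f} GPU hours; gap: {gap:,.0f} GPU hours")
# Supplied: 125,000 GPU hours; gap: 2,375,000 GPU hours

# A toy investment-readiness score in the spirit of the index: a weighted
# average of normalised 0-1 factors (the weights here are assumptions).
def readiness(power, talent, governance, w=(0.4, 0.3, 0.3)):
    return w[0] * power + w[1] * talent + w[2] * governance

# Illustrative only -- not real scores for any country. A "Nigerian
# paradox" profile: strong demand signals but weak power infrastructure.
score = readiness(power=0.3, talent=0.7, governance=0.5)
print(f"Example readiness score: {score:.2f}")
```

The point of separating the two measures, as in the session, is that high demand (large user base, strong e-commerce) and high readiness (power, talent, governance) do not necessarily coincide.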
Ecosystem Requirements Beyond Infrastructure
Martin Tisné warned against creating “white elephant” data centres—compute facilities that remain underutilised due to lack of contextual data, appropriate models, or local capacity. He highlighted critical gaps in funding for foundational dependencies that underpin open source AI development, noting that while top-tier open source software receives corporate funding, critical dependencies often rely on volunteer efforts.
Tisné described the lack of innovation in privacy-preserving data sharing as “a complete tragedy,” emphasising that data governance challenges are as critical as compute infrastructure. He also announced an afternoon product launch related to these issues.
Philanthropic Coordination and Regional Approaches
Shaun Seow discussed how philanthropic networks can coordinate resources across Asia, referencing Jensen Huang’s AI stack framework. He noted that while physical sharing of compute resources faces constraints from latency issues—making it impractical to share compute between distant countries like India and Indonesia due to 50-100 millisecond delays—there are opportunities for demand aggregation and coordinated investment strategies.
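The latency constraint is straightforward physics. A minimal sketch, assuming light travels through fibre at roughly two-thirds the speed of light and taking an India-Indonesia distance of about 5,000 km (both assumptions of this sketch, not figures from the session):

```python
# Why tens of milliseconds of latency rule out tightly-coupled compute
# sharing between distant countries. The fibre speed, distance, and
# per-step synchronisation model are all simplifying assumptions.

SPEED_IN_FIBER_KM_S = 200_000      # light in fibre: roughly 2/3 of c
distance_km = 5000                 # assumed India <-> Indonesia distance

one_way_ms = distance_km / SPEED_IN_FIBER_KM_S * 1000
rtt_ms = 2 * one_way_ms
print(f"Best-case round trip over {distance_km} km: {rtt_ms:.0f} ms")

# Synchronous training exchanges gradients every step; if a step takes
# ~100 ms of GPU work, a 50 ms round trip alone wastes a third of the time.
step_compute_ms = 100
overhead = rtt_ms / (step_compute_ms + rtt_ms)
print(f"Time lost to the network per step: {overhead:.0%}")
```

Real routes are longer than straight-line distance and add switching delays, so the 50-100 millisecond figure cited in the session sits above this physical lower bound, and the overhead only worsens.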
Seow emphasised aggregating demand to negotiate better pricing with cloud providers and subsidising compute costs for impact organisations, recognising that access models can be improved through coordinated philanthropic intervention even where ownership models have limitations.
Reframing Sovereignty and Development Priorities
Vilas Dhar challenged conventional approaches to AI sovereignty, describing “AI diffusion”—the idea that concentrating technological capacity in certain locations will naturally benefit broader populations—as resembling trickle-down economics. He questioned whether developing countries should prioritise compute infrastructure over basic development needs like energy access for their populations.
The discussion shifted toward concepts of agency and meaningful participation in AI development rather than territorial control over infrastructure. This approach emphasises building collaborative networks that enable genuine participation while maintaining local agency over outcomes.
Implementation Challenges
Several significant constraints emerged from the discussion:
Energy Requirements: Current AI systems’ massive energy needs raise questions about development priorities and opportunity costs of AI infrastructure investments.
Technical Limitations: Latency issues between distant countries limit practical implementation of certain compute-sharing models, while data sovereignty requirements add complexity as many countries require data processing within their borders.
Funding Gaps: The open source AI ecosystem faces sustainability challenges, with foundational components relying heavily on volunteer efforts rather than systematic funding.
Governance Complexity: Developing trust-building frameworks that accommodate diverse cultural and political contexts while remaining flexible for local variations remains challenging.
Next Steps and Commitments
The session included the release of a working report on computational resources for AI futures, with a feedback period for stakeholder input. The Maitri platform concept will undergo further development as a practical framework for international collaboration.
Dr. Gitau’s compute demand index and AI Investment Readiness Index provide practical tools that other regions can adapt. The emphasis on building institutional intermediaries that can operate at the intersection of technology, policy, and community needs represents a significant outcome requiring continued development.
The discussion demonstrated that AI democratisation requires coordinated action across multiple domains—not merely technical infrastructure but comprehensive institutional and social frameworks. Success will depend on developing the full ecosystem of capabilities, governance structures, and collaborative frameworks necessary to ensure AI development serves public interest and enables shared prosperity across the Global South and beyond.
Session transcript
to be with us, so thank you. We are here because we believe in AI’s transformative potential, and I’m certain you’ve heard a great deal about it over the past few days. Today, this session is about something deeper. The digital divide is rapidly becoming a compute divide. AI today is not constrained by imagination. It is constrained by infrastructure, by who has access to GPUs, to cloud capacity, to scalable compute. And that divide will determine who shapes the future of AI. Democratization in this context is not about catching up. It is about expanding who gets to lead. It is about ensuring that the next generation of AI breakthroughs are not concentrated in a handful of geographies, but are shaped by diverse talent, languages, and lived realities across the world.
And here, India is not waiting for permission. India is showing that it can be done differently. Through the India AI mission and through the compute capacity plan, mobilizing more than 38,000 GPUs as public infrastructure, India is building one of the most ambitious public interest compute ecosystems anywhere in the world. This is not incremental reform. This is infrastructure at scale. This is sovereign capability combined with openness. India is demonstrating that public interest AI infrastructure can be built in the Global South, by the Global South and for the Global South. And this leadership matters because equitable access to compute is not just about hardware. It sits alongside access to data, open source models, talent, and institutional capacity.
India is proving that you can design AI ecosystems that are both globally competitive and locally grounded. At the Rockefeller Foundation, we believe this moment requires moving from diagnosis to action. Philanthropy’s role is to be catalytic: to reduce risk, unlock capital, and convene unlikely partnerships that accelerate progress. Over the next hour, our discussion will unfold in three acts. First, what exactly are we democratizing? That’s an important question. Second, how do South-South partnerships and catalytic financing accelerate progress? And third, what concrete commitments can we land this year? If India’s example shows us anything, it is this: democratization is not theoretical. It is operational. It is scalable. And it is already underway. The question now is how we accelerate it together.
Before we begin, let me take a moment to acknowledge a few leaders in the room. Shri Abhishek Singh, who unfortunately has been pulled into another meeting, but his leadership has been amazing; his steadfast partnership and support has been something that I am extremely grateful for, and his vision in guiding this important work with clarity has been just spectacular. Dr. Saurabh Garg, we are honored by your presence; you have been in sessions since this morning, and thank you for your leadership. It’s truly a privilege to have you with us today. My colleague Andrew Sweet, who has joined us from across the world, one of the sharpest minds at the Rockefeller Foundation and truly a force for good: thank you for being with us today and supporting this conversation. And of course, I want to also thank Charu, who has been working endlessly and very hard to get us to this place. Thank you, Charu, for your leadership. Martin, Vilas, Shaun and Dr. Shikoh, thank you for lending your voice and expertise to today’s discussion. Your perspectives will help ground this dialogue in both ambition and action, and I know all of you are action-oriented folks, so we’re going to have something really cool come out from here.
And last but certainly not least, our partners at Kalpa Impact, Sushant, Anish, Jennifer, thank you for being extraordinary collaborators and for helping shape today’s session. It is now my pleasure to hand it over to Dr. Garg. Please, over to you, sir. Or maybe I’ll hand it over. Okay.
Thank you. Thank you, Deepali. When I mentioned the report, I fumbled the name, so I’ll go again. Opening up computational resources for new AI futures, new AI world is possible. And this is something that the team has worked really hard over the last few months. And today is an opportunity when we release a working version of that report and invite inputs, feedback, comments, and suggestions, which we will work through over the next few months. This research helped us think through and work with the Democratizing AI Resources Working Group under the leadership of Dr. Saurabh Garg. And he’s here. So it’s a pleasure and a privilege for us to invite him. And the other panelists to release this report.
Thank you. Opening up computational resources, or in fact all resources that are necessary for the development of AI in public interest and for real-world impact: I could think of no better person than Dr. Saurabh Garg, under whose leadership I think we have come a long way, not just in the intellectual thinking but, as he will tell you, in operationalizing how we can bring this to life for billions in the Global South and also the other countries in the world. Dr. Saurabh Garg, please, for your keynote.
Thank you, and colleagues, panelists, great to be here, and great to see the large attendance that we have seen over the past few days at the AI summit. There were seven working groups set up under the AI Summit umbrella, and one of them was on democratizing AI resources. I had the privilege to chair that group along with Kenya and Egypt. So I’ll obviously talk a bit on that. But before that, just to say that I think all of us are of the opinion that AI will definitely transform the world. I think the question is whether that transformation would be equitable, would be inclusive and aligned with public interest. And I think that’s really the issue which concerns a lot of people.
The AI Summit itself was built around three guiding sutras: people, planet and progress. And therefore, the concept being that AI ultimately must serve human welfare, advance sustainable development and enable shared prosperity. I think these would be key background in the way these sutras were developed. And obviously, democratizing many of these resources would be key to that. During our working group discussions, we had the opportunity to talk to a large number of countries, people from academia, civil society, and other international organizations. And I think one consistent message was that most countries are not really seeking only access to AI, but also seeking agency in AI. And I think that’s key. And how the AI systems need to reflect each country’s own development priorities, languages, and social contexts.
From these discussions, there were six foundational pillars that we had to address, which we thought need to form the backbone of the collective roadmap for the future: compute, capability, collaboration, connectivity, compliance and context. And I’ll just briefly speak on each one of these a bit. Compute, no doubt, is today’s defining barrier. The access to GPUs, accelerators, high-performance clusters is a major issue for all AI ecosystems. But the issue is how it can be made distributable, affordable and reliable, and not concentrated in a few geographies. And this would no doubt require us to look at whether compute can become a shared infrastructure in future which supports public interest innovation, and, to the extent that we are focusing on innovation, how that part can be a public interest infrastructure. Secondly, infrastructure would not be sufficient; there is a widening skills gap.
So how we can consider capability diffusion focusing on joint research, shared standards, open platforms and mutual learning. What needs to be done for this responsible deployment is so that we can link innovators to compute resources and citizens to trustworthy AI enabled services. Equally important would be governance. The governance framework needs to be robust enough to build trust, yet flexible enough to adapt to diverse social and cultural contexts. Open source and maybe modular AI stacks would help in enabling localization without creating dependency. So looking at some of these issues, on what mechanisms can be done to facilitate accessible and affordable computing resources by improving utilization rates and reducing transaction costs and also to lower barriers for access regardless of geography.
The working group looked at how this can be taken forward through a collaborative platform designed to expand shared access to compute and data in partnerships. The platform has been termed Maitri, which is friendship in Hindi. Maitri, M-A-I-T-R-I, standing for Multi-Stakeholder AI for Trusted and Resilient Infrastructure, to be developed as a digital public good that countries can adopt, customize, and build upon. And obviously, it is a non-binding, voluntary, modular approach: depending on the context of each country, what kind of compute and what kind of methods can be used to make it accessible, at least for innovators and researchers; looking at data sets that can be put out, which take care of the national laws and national protocols in place; and looking at models.
So, models which are open source and which can be placed there. This, we envisage, would help to at least ensure that portions of AI are a global public good, because we are focusing on innovation and research out here. And this would go beyond just a focus on hardware and platforms, to skills, institutions and governance capacity. I would just like to mention one other area. The development of the technology is a very important part, but how it might proceed in future also matters: while infrastructure or compute seems to be the biggest constraint as of now, that is perhaps also based on the present models requiring large amounts of compute capacity and energy.
Going forward, would models retain this system of algorithms that they have, or would there obviously be small, domain-specific niche models? I think yesterday there was a very nice remark made by Vishal Sikka, who mentioned that when we talk of compute infrastructure, we are talking in terms of gigawatts, nothing less than that. But when you talk of a human being, you talk in terms of only 2,000 calories a day required to sustain a human being, which is not more than a 100-watt bulb. So are we missing something out here? I think that’s a very important point that he made yesterday, and that’s why I think we need to have much more focus on the models, and that itself might solve a lot of the areas we are concerned with. When we’re talking of democratizing AI, perhaps that’s the path forward.
So I’ll stop here and thank you all. Thank you for this opportunity.
We now transition to the panel discussion, and may I request Andrew Sweet, VP at the Rockefeller Foundation, who is the moderator for the panel, to please join us here on the stage. May I request the other panelists, Dr. Shikoh Gitau, Martin Tisné and Vilas, to join us on stage. Yes, and Shaun, sorry. Andrew, over to you.
Thank you, Dr. Garg, for those inspiring remarks and for the framing, insight and perspective that you bring to this conversation, and to all of the many conversations that you’ve had throughout the course of the week. So we’re excited to continue and deepen the conversation today, and very excited that we have five of the world’s brightest minds to discuss this topic. These are all people that have been in the AI arena for decades, this is not new to them, and all people that have deep regional expertise and global perspectives, so I’m very excited for this conversation today. We don’t have a lot of time, about 25 or 30 minutes for the conversation, so we’re going to dig in. We’re not going to have a number of speeches; Dr. Garg’s speech will be the only speech that you’ve heard today, but we’ll have a short series of provocations with actionable ideas for how we can move this agenda forward. And so hopefully this conversation can be, you know, informal, back-and-forth banter.
I think we’ll have one round of questions, but it would be great if we could kind of feed off of each other’s questions and energy, because I know we all have a lot to say here on the panel and a lot of expertise to share. So I’ll briefly introduce the panelists, then we’ll dig in. You’ve already met Dr. Garg. He’s the Secretary of the Ministry of Statistics and Program Implementation for the Government of India. He has been instrumental in shaping India’s AI governance and previously led the technology stack for the transformative Aadhaar initiative. We have Martin Tisné, founder of Current AI and public interest envoy for France’s AI Action Summit. Martin has spent 15 years building multi-stakeholder initiatives like the Open Government Partnership, which we talked about earlier today, to govern technology based on democratic values.
We have Vilas Dhar, president of the Patrick J. McGovern Foundation. Vilas serves on the UN Secretary General’s High-Level Advisory Board on AI and leads one of the world’s largest philanthropic movements for AI for public purpose. My friend Dr. Shikoh Gitau, founder and CEO of Kala AI, a visionary from Kenya: she established Safaricom Alpha and has been a leading voice in ensuring that digital transformation in Africa solves real problems in education, healthcare and agriculture. And finally, Shaun Seow, CEO of Philanthropy Asia Alliance. Shaun is working to catalyze collaborative philanthropy across Asia, leveraging deep expertise from his time at Temasek and as CEO of Mediacorp. So we’ll continue the conversation. The first question will go to Dr. Garg. India has launched the India AI mission with a target of 38,000 GPUs. If we view compute as a public utility, much as we do with water and electricity, what is the governance model that India is envisioning, and should compute access be rationed or priced differently for public interest applications?
So I would say that the focus is not on rationing but on intelligent prioritization. I think that’s going to be the focus: that the compute capacity is an enabling platform and, as I mentioned, a digital public good, at least where innovation and research is going. So that we focus, and I think that’s where a lot of the philanthropic organizations would have a large role to play, given that their focus is also on ensuring that AI benefits all. So with that focus in view, how governments, philanthropic organizations, and the private sector can collaborate to ensure that affordable compute capacities are accessible to all: I think those are the models that we are looking at, and that will ensure experimentation going forward.
Thanks, Dr. Garg. Martin, I’ll go over to you. Through Current AI and the Paris Charter, you’ve convened governments to discuss public interest AI. How do we move nations from being consumers to genuine co-creators? And quickly, you’ve also spoken about this looming data bottleneck. What do we do to unlock data sets for training without compromising privacy?
Okay, two big questions. Thank you. So, as you mentioned, we launched Current AI last year. We’ll be launching just this afternoon our first product, which is an open hardware product looking at linguistic diversity. I think I’ll be a little bit provocative to maybe start our session. I think compute is critical for obvious reasons. I think that from a financial, from an innovation, and from a sovereignty perspective, it is also possible to overplay it. I’ll tell you what my worry is, and I’d love to know what the panel thinks. I do have a worry that we could end up in a few years’ time in a world where we succeed in having compute capacity in inverted commas, in a number of countries, including in the global south, but where effectively the data centers are not used.
We’ve been talking to colleagues around the world. You do also have data centers that are effectively kind of white elephants and that are not used anywhere close to full capacity. And so I think for countries to be able to exercise sovereignty, they need to have contextual AI. They need to have contextual data in their languages with all of the diversity and the incredible richness that typifies their cultures available in order to create contextual localized AI that actually serves outcomes that people care about. And so while the compute piece is important, I think it’s one part of the issue. We need to talk about the data piece and we need to talk about the second part is the open source one.
So briefly, I think throughout the event, people talk about open source AI, that it’s a really good thing, that we’re all pro it. I think we also need to talk about how, from a philanthropic perspective, we resource the open source ecosystem. The reality of open source software is that the top tier of open software is mostly funded by large companies that are using it, right? Linux is partly funded effectively by volunteers working for SpaceX that are using it. There’s a bottom tier of dependencies in open source that are run on a shoestring, you know, by a few critical, amazing people working overnight as volunteers. And there are very few organizations, one of them being ROOST, part of the Current AI portfolio, which looks at robust open source trust and safety, that are funding those critical dependencies.
So I think that for states across the world, in the Global South and the North, to really be able to exercise sovereignty, and I’d love to talk about this a bit more, but I don’t want to hog the mic, we need to talk about compute, but we also need to be realistic about what the compute is going to be used for. So the data piece and the open source piece are really important. And I’ve probably run out of time to talk about the data bottleneck.
Go for it.
Well, so the number… There are people in the room I’ve worked with for a long time on this issue. Vilas, you’re one of them. Sushant, you’re another. I won’t name-check everyone. I think it’s fantastic that there’s been so much innovation in compute, and we’ve seen such change over the past 10 years. In contrast, I think it’s a complete tragedy that we haven’t seen anywhere near as much innovation when it comes to data, and specifically the ability for people to share personal data in ways that both respect privacy and contribute to outcomes. And that’s effectively it. I think we need a huge amount more resources and thought. On the technical side of the issue, I think it’s partly solved, but only for some: enterprise users of AI have access to these kinds of technical safeguards in a way that private users don’t.
And there’s a story we can tell if we have time. And then on the governance side: Vilas, you and I have talked for a long time about the different forms of data stewardship that now exist, whether data trusts or others. To this day, I haven’t seen one that really scales to the level we would want. I think we need a lot more resources and a lot more thinking there. Work has been done, but if we could harness even 20% of the brain capacity of the world that’s going into compute right now, I think we would be in a very different place.
Thank you.
Excellent, thanks Martin. Actually, Vilas, I’ll go to you next, because this reminds me a little of a recent article you wrote about the Indian Premier League as a model for how India builds world-class institutions. I re-read it this week in preparation for this conversation. Is there a similar IPL playbook for public interest compute, or is the window for building these public institutions closing as commercial consolidation accelerates?
Well I can’t think of a more controversial topic to spend our time here in this conversation than cricket. It’s been a good week all around but I think many of the people in this room probably know. Before I start I just want to say Dr. Garg I want to acknowledge in particular your leadership on this work. I spend a lot of time with senior decision makers across governments and the conversations that we have had have really given me great hope for the combination of technological confidence but also an understanding of what this means across an ecosystem. And so I want to acknowledge your leadership in particular. Thank you. Look, this question around the IPL I think is great, right?
I mean, let’s not torture the analogy and take something really fun and then try to, like, tie it to AI. But here’s what I’ll say about it. I think in many ways what we need is a new institutional framework that goes from the elites participating in their own places to something that feels deeply participatory. And around compute infrastructure in particular, we are stuck in a model where we keep re-engaging and renovating old concepts to try to describe a new world. I will tell you, sovereignty has been the buzzy word of the moment, right? Everybody wants to talk about sovereignty and diffusion. Sovereignty, a Westphalian concept that goes back a few centuries, gets stretched into the idea that ownership of pieces of silicon somehow magically results in outcomes and impact that transform lives.
Now, there are logical links, and of course there are codependencies. But to simply say that we will site compute in a particular geography, and so figure out a way to disconnect ourselves from the interdependence of the 21st century, doesn’t really bring us to a good outcome. I’ll tell you the second part of this: AI diffusion. If you haven’t heard this already from every tech CEO here, this has been the buzzword of the moment. I spent some time yesterday with the prime minister and a number of tech CEOs who wanted to talk about their investments in India. Those investments in many ways followed the playbook of the PR press release: we’re going to build a new data center, we’re going to invest in new compute capacity.
But when you dig deep and you ask the next questions: who will this really benefit? What value does this create for public impact and outcomes? How does putting a large number of servers in a particular place result in that community finding an economic uplift, a benefit in economic opportunity, a sense of dignity? The conversation sometimes falls flat. So AI diffusion, in its core concept, the idea that you hyper-concentrate technological capacity, compute, data, and somehow the rest of society benefits, sounds a little too much like something that, as an American, I know too intimately as trickle-down economics: the idea that if we made the rich as rich as possible, somehow the benefits would filter down to everybody else and it would work.
AI diffusion is a passive concept. It starts on the premise that we build technological capacity for a few and somehow it works out for everybody else. But there’s an alternate model and it ties directly to this report that’s been issued today and the work that we’ve been talking about. For AI to benefit everyone requires a direct and active impact. It requires us to step in and say what are the institutions we have to build that actually physically and metaphorically transform the idea of compute infrastructure to be something that everybody can use. It requires us to build the institutional layers and the capacity that lets a community that’s trying to solve a local problem know that compute isn’t the thing that holds them back.
Rather, it’s the conceptualization of the problem, the aggregation of the full stack of resource sets, as Martin described: compute, data, governance mechanisms, and the political agency of communities to participate. Let us then turn that into the final application, the infrastructural development that actually leads to the outcome we’re solving for. In many ways, I think this is the great role of the institutions represented here on the stage and in this room: for philanthropies to transform the capital landscape in a way that says great entrepreneurs and leaders, like my dear friend Shikoh here and so many here in India who are building open source, public access AI stacks, don’t have to worry about the resource constraints of the private capital markets; that they know they can access governmental and substantive structural resources that let them build the tools they want; and that they have equitable access to markets, both as a matter of policy and as a product, so they can go out to consumers and creators and provide a service that people can use at scale.
And the last part of this, and I have to say it: this doesn’t happen, as we’ve discussed, in the private market, but it also doesn’t happen exclusively by going to frontline nonprofits and saying, now you’re supposed to be the builders. It requires us to innovate a new institutional set of intermediaries. I think of groups like Calpa Impact, which is an incredible example of combining technical sophistication, policy impact, and support for government, and which sits at the layer that connects these different elements and lets us build on top of it. I think this is the work that’s ahead. If we really think about pragmatic outcomes to this conversation, Andrew, one of the questions we might ask is: what are the institutions we need to build in the next 12 months that connect the dots around all of these pieces and support this transformation at scale?
Dr. Shikha, I came across a recent article in which you said that for the West, AI is a matter of efficiency, but for you, it’s a matter of life or death. You’ve been a champion for AI access. You were very active in this summit and in the Kigali Summit, and we were together at the launch of the first-ever AI factory for Africa in April in Kigali. You’ve also said that if global tech companies want African data, they should provide compute infrastructure in return. How do we formalize these reciprocal agreements, and what does a true India-Africa partnership look like that doesn’t just replicate global North-South models, similar to what Vilas was just talking about?
Thank you very much for having me. It’s always fun to listen to everyone here on this. I was hoping somebody was going to preempt some of the work I was going to talk about, but lucky for me, I still have some stuff to talk about. Thank you, Vilas. So when we talk about compute, it’s this amorphous thing. In fact, we launched an AI research lab in Nairobi, and we have some GPUs there. One of the key moments was a demo showing what a GPU is, and our PS was like, oh my God, this is what a GPU is, because he had never seen a GPU. And then I made sure, every time I’m speaking, to ask: how many of you have actually seen a GPU, not on the Internet, touched one?
Maybe five people. And this is everywhere. In every single room where we’re talking about compute, we ask the same question: have you ever seen a GPU? And right now, five to ten people. So it’s this thing that people talk about: we need GPUs, we need compute, we need all of these things. And for us, as an African continent, it is very important. Our research colloquium for the Global South happened a few days ago. Same question: how many of you have seen a GPU? About ten people; the rest had never touched a GPU. How many of you need compute? Everybody raises their hand. But what does that actually mean? In fact, on one of the panels, the starting point was: when it comes to compute, we all need Jesus.
And I thought, how do we quantify this? We’ve already spoken to Calpa about this; I think we were working on a framework at the same time. So we just released a compute demand index, because we realized that every time we speak about compute, people have ideas, thoughts, and proposals, but they don’t have the numbers behind them. We need GPUs: how many? We need megawatts: and in the gigawatt, megawatt conversation, what does a gigawatt of compute actually mean? So we went ahead and said, for Africa, every time we’re having conversations with these governments, this is actually what you need, but you actually need to put money into it.
So our first index was the demand index, and the second one is: is your country ready for this? We’re calling that the AI Investment Readiness Index. I’ll give you some numbers. Africa needs 2.5 million GPU hours a year, 7.5 million over the next three years, to be able to start computing well. That covers training as well as research, and it is something I can work with. So when I come to India and say I need 2.5 million GPU hours a year: how many of them can you give me? We had this conversation with the UNDP in Italy, and they said, oh, we have 1.5 million GPU hours that we can donate.
We have 1.1 million more to go. Cassava is saying, we are putting in 2,000 GPUs. How many GPU hours, hours, not physical GPUs, will those 2,000 GPUs actually provide for the continent? We have to start being very practical rather than arbitrary about what we want. Of these 7.5 million GPU hours we need over the next three years, Africa only has 5%. So we are doing the math: we only have 125,000 of these GPU hours a year, times three for the next three years. So when I go to Vilas, I’m saying I need these GPU hours.
It’s very practical: he can say, I can do half a million GPU hours. It’s not just going in with an arbitrary number and saying, I need GPU hours. And for us, that is important. But for me, it is the conversation about investment, and that’s the question we asked: how do we have this South-South collaboration? How do we have this India collaboration? What does it actually look like? There’s the paradox. As Martin said, everybody wants sovereignty. Everyone wants to talk about diffusion of AI. But what does that actually mean? Do we actually need it? So I’ll give you two examples from my two favorite countries.
Hopefully neither of them is here. Let’s start with Nigeria. There’s something we’re calling the Nigerian paradox. Nigeria is the number one country in the compute demand index. Why? Nigeria is doing very well: 110 million Internet users, a huge population, and strong e-commerce and financial services. So they’re up there when it comes to why they need compute. And we’ve seen this in India; India is very high there as well, doing the same exact thing. But what about investment readiness? Investment readiness is whether they are able or capable of running a compute facility. Do they have power? Do they have the talent? And I love what Martin and the minister spoke about here.
When you think about compute, you don’t think about just GPUs. It’s a whole stack of things: talent, governance, all of these things. When you think about investment readiness for compute, you have to look at all of them. Because I can give you GPUs, as he said, and I’ve worked in digital transformation for the last 20 years, including as the digital transformation lead for the AfDB, and we’d buy computers and come back three years later to find they had never been powered on at all. And that’s what is going to happen with GPUs. If you give countries these GPUs and they don’t have the talent, they don’t have the power to run them, they don’t have the data sets, they don’t have models, they don’t have use cases to build on top of them, you’re wasting that money.
And that’s where investment readiness comes in. So we’re talking to countries, and we’ve had this conversation with African countries: there’s no point in investing all your dollars in a compute facility. Get your talent ready. Get your data sets ready. Have strong use cases that people can back. Then, with all of that, we can define what you actually need. Kenya, you do not need a gigawatt of compute to be able to run. Maybe you need a 200-megawatt facility, and that’s where we want to start. So, coming back to the question of how we interact with India: this is our demand. Burundi might need 50 megawatts of GPUs.
Can India facilitate that? But it’s not just about facilitating the GPUs; it’s about what the GPUs are in service of: solving for health, education, agriculture. When you have clear use cases, the GPU demand becomes an obvious ask. And I think bridging that gap, and convincing governments especially to bridge it, is what we need to do. Then the governance framework actually comes into play. Thank you. I know that’s a lot.
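The figures quoted above lend themselves to a quick back-of-envelope check. The following is a minimal sketch in Python of that arithmetic; the 50% utilization rate is an assumption for illustration, since the discussion gives GPU counts but no utilization figure, and real deliverable hours depend on downtime, scheduling gaps, and maintenance.

```python
# Back-of-envelope arithmetic for the compute-gap figures cited on the panel.
# The utilization rate below is an illustrative assumption, not a panel figure.

ANNUAL_NEED_HOURS = 2_500_000   # Africa's stated need: 2.5M GPU hours per year
AVAILABLE_SHARE = 0.05          # only 5% of that need is currently available

available_hours = ANNUAL_NEED_HOURS * AVAILABLE_SHARE   # 125,000 hours/year
gap_hours = ANNUAL_NEED_HOURS - available_hours         # 2,375,000 hours/year


def gpu_hours_per_year(num_gpus: int, utilization: float = 0.5) -> float:
    """Convert a raw GPU count into deliverable GPU hours per year.

    utilization < 1.0 reflects downtime, scheduling gaps, and maintenance.
    """
    return num_gpus * 24 * 365 * utilization


# e.g. the 2,000 GPUs mentioned for Cassava, at an assumed 50% utilization:
cassava_hours = gpu_hours_per_year(2_000)  # 8,760,000 hours/year

print(f"available: {available_hours:,.0f} h/yr, gap: {gap_hours:,.0f} h/yr")
print(f"2,000 GPUs at 50% utilization: {cassava_hours:,.0f} h/yr")
```

On these assumptions, 2,000 well-run GPUs would comfortably exceed the stated annual need, which is exactly why the panel's "hours, not physical GPUs" framing matters: the binding question is deliverable hours, not installed hardware.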
That’s great. Thank you, Dr. Shikha. We’ll go to Sean, and then I want to keep it informal for the remaining ten minutes after Sean speaks: any reactions to any of the comments, and then we can do a lightning round if we have time, but if we don’t, that’s fine as well. Sean, over to you. The Philanthropy Asia Alliance brings together 80 members and partners to address Asia’s interconnected challenges through collaborative philanthropy. Is there an opportunity for Asia’s philanthropic networks to coordinate shared compute and infrastructure, pooling resources from places like India, Indonesia, and other nations rather than competing, and what would unlock that collaboration?
Thanks, Andrew. The advantage of coming last is that I can say I agree with all of them; maybe we could end the panel right away. Actually, I’m going to add to the much-maligned word, compute. I’m going to join Martin in saying that compute is actually a bit overrated. The ownership… of compute… So when you think about the stack, I’m going to add another way to frame the conversation: Jensen Huang’s AI stack. When you think about energy, hardware, compute, models, and applications, the top layer, applications, is really what will drive value capture, for the economics as well as the social impact. Really, the stumbling block is probably energy, at the bottom level.
And thankfully, for many countries in Asia, the costs have been driven down because of the abundance of hydro, solar, and wind. Then when you think about the next layer, hardware, that’s obviously dominated by Chinese and American players. And when you think about the compute level, I understand why we fuss over compute, because the Americans own 75% of GPU cluster performance, the Chinese 15%, the Europeans maybe about 4%, and the rest of us only about 0.1%; I think even India is just 1% of that. But I think the issue is actually deeper than just ownership. If you think about what it takes to get the work done, it’s more about access.
So, on the question you’ve posed me about sharing compute between, for example, Indonesia and India: I live in Southeast Asia, Indonesia is a couple of hours from where I live, and we know the situation in Indonesia quite intimately. There are data residency requirements, which is why there’s a build-out of data centers. Think also of the physical limitations, the latency of sharing compute between India and Indonesia: some 10,000 kilometers apart, with a latency of what, 50 to 100 milliseconds, it’s just not going to work. Attractive as the idea is, it doesn’t work. There are physical limitations, data sovereignty, and privacy issues that prevent that from happening.
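The latency figure quoted here follows directly from physics. A minimal sketch of that calculation, assuming a signal speed in optical fiber of roughly 200,000 km/s (about two-thirds of the speed of light in vacuum); real routes are longer than the quoted distance and add routing and queuing delay, so this is a lower bound:

```python
# Back-of-envelope propagation latency for cross-border compute sharing.
# Assumes signal speed in fiber of ~200,000 km/s (about 2/3 of c in vacuum);
# real cable paths exceed the straight-line distance, so this is a lower bound.

FIBER_SPEED_KM_PER_S = 200_000


def one_way_latency_ms(distance_km: float) -> float:
    """Minimum one-way propagation delay in milliseconds."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1000


distance = 10_000                        # km, the India-Indonesia figure cited
one_way = one_way_latency_ms(distance)   # 50 ms one way
rtt = 2 * one_way                        # 100 ms round trip

print(f"one-way: {one_way:.0f} ms, round-trip: {rtt:.0f} ms")
```

That reproduces the 50 to 100 millisecond range cited on the panel; tightly coupled distributed GPU workloads typically assume interconnect latencies orders of magnitude lower, which is the physical limitation being pointed to.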
So I just want to look at the positive side of what’s happening. The cost of compute is coming down, and with the emergence of neoclouds, GPU-as-a-service, I think these developments are actually going to be good for unleashing AI for social impact and for economic value capture. So the way to think about it is: how do we make compute a bit more accessible for startups and impact organizations? Maybe the way to do that is to aggregate demand, so that you can actually negotiate with the new cloud providers and get cheaper pricing. And how do you then think about philanthropy coming in?
And to subsidize some of the compute costs. I think I agree with the observation that you really need to go beyond just infrastructure; you need to think about the ecosystem you’re building. The skills gap in Asia is actually huge, and that could really be what’s stopping us from maximizing the power of AI in what we want to do. Is that too long?
No, that’s perfect. I’m not sure if anybody wants to react to any of that. Martin, I see you scribbling furiously, so maybe the first reaction goes to you.
No, I am scribbling. I’m scribbling because I’m thinking about your points, about the points of the panel, and about the term sovereignty. My scribbles are on your point about the Westphalian concept of sovereignty: it’s about the ability to make law within your territory, it’s a very Global North concept, and it’s a notion of territory that has physicality. What I was scribbling was the physicality of the territory. We’re very focused on physicality, as you were saying, on the GPUs, on the bricks and mortar: we’re going to be okay because we’re going to be sovereign over this data center, and the data center is on my territory. And what got me thinking is other concepts of sovereignty. When I was spending a lot of time working on data collaboratives and data stewardship, I thought about indigenous data sovereignty, which is a different type of concept, a relational concept rather than a territorial one, right?
It’s about a pre-existing, inherent, relational authority over that which makes up a people. When we were studying, for example, indigenous data sovereignty in the Maori context in New Zealand: for the Maori community, any data that in any way involves Maori … is part of the Maori community’s legacy. So I think there’s something here in distinguishing a quite rigid approach to sovereignty, which is about control, as mentioned, from one that is more about agency and more relational. That’s what the panel has got me thinking about. I’ve been doing some writing and thinking with colleagues and friends around the notion not of a controlled national stack but of a global, open, resilient, collaborative stack. And I’ll finish with this: that doesn’t mean that all the data is open, that anything goes, that anyone can extract your personal data and you’re back in a sort of Zuboff surveillance-capitalism world. It’s one where it’s a question of choice and agency: what you wish to exercise authority over, and how.
That’s my scribbles. Thank you.
As you can tell, when you get on a panel with people you love and respect, the conversation just flows. So I want to build off this point, and a little bit of what you said, Shikoh, and take a different tack on this question of agency. If I had asked any development leader in the world 10 years ago, if you could have your dream of an extra gigawatt of energy capacity in your country, what would you do with it? I can’t imagine that any of them would have said, well, I want to use it to run a bunch of computation on things that may or may not have short-term economic value for my country.
Andrew, your organization has been incredible around the world at building capacity and grids, in power production, in ensuring that people can use power for development. And yet somehow, I think for many of us, we are surrounded by conversations where the question has become: how many megawatts and gigawatts can you put into compute for AI? It is a fundamental challenge when you think about our priorities in development. Going back to core principles of human rights, dignity, and participation in the world, to say that governments with limited capacity should now all of a sudden be focused on this topic brings us back to that question and that shift. In many ways, I acknowledge that the traditional conversation around compute is one of breaking over-dependence on the American AI stack and on other international players that are coming in.
But the response to over-dependence isn’t internal dependence, it’s interdependence. It’s saying: if there are places that have incredible capacity, and even the potential to drive, as Shikoh said, the availability of compute hours, how do we build interconnectedness that makes that a mutual value exchange? Not merely clients who have to go to another country and say, please give us, or let us buy, compute, but rather arrangements where the products of that compute build the infrastructure you can then use in your own country; that allow for centers of excellence, for local capacity and local competence to drive what gets built; and that let these become the new tokens of international trade, in a way that leads to a much more connected and shared prosperity rather than descending back into that 200-year-old concept of making sure we’re competitive in an adversarial frame.
I recognize that what I’ve just shared with you is maybe not where the dominant private sector conversation is. And to those who would oppose it, the primary critique is, well, that sounds quite naive. And yet we’ve seen it happen. We’re seeing it in the few areas of hope in the multilateral system where we’re actually finding that technology governance is something that brings everybody to the table, that lets people engage in meaningful shared outcomes. We’re seeing the seeds of it. The question is whether we’re going to let them sort of die out in the sun or if we’re actually going to water them, invest in them, in order to grow.
Great. Dr. Garg, any final insights?
I know there’s little time, but one thing I would say is that perhaps we need to spend a bit more time going forward on public interest frameworks: looking beyond compute at models, talent, and data, and at how these can be shared and made interoperable in a manner that takes care of the public interest. So I’ll just stop there.
Well, thank you. Thank you to the Indian government, and thanks to our partners at Calpa for putting this together, especially the authors. This is now officially out there; copies are available, and you have until March 31st to review the document and submit your reactions. Thank you to the panelists, really appreciate it. Enjoy the rest of the summit. I think the IndiaAI team wants to hand over some souvenirs from the panel. Thank you.
Deepali Khanna
Speech speed
148 words per minute
Speech length
649 words
Speech time
262 seconds
Compute divide and public‑interest GPU ecosystem
Explanation
Deepali highlights that the digital divide is increasingly a compute divide and showcases India’s ambitious plan to mobilise over 38,000 GPUs as a public‑interest compute infrastructure, demonstrating that such infrastructure can be built by and for the Global South.
Evidence
“Through the India AI mission and through the compute capacity plan, mobilizing more than 38,000 GPUs as public infrastructure, India is building one of the most ambitious public interest compute ecosystems anywhere in the world.” [1]. “The digital divide is rapidly becoming a compute divide.” [12]. “India is demonstrating that public interest AI infrastructure can be built in the Global South by the Global South and for the Global South.” [15].
Major discussion point
Democratizing AI Compute as Public Infrastructure
Topics
Artificial intelligence | Closing all digital divides | Information and communication technologies for development
From diagnosis to action – concrete commitments
Explanation
Deepali calls for moving beyond analysis to tangible commitments this year, urging philanthropic and institutional actors to translate insights into measurable actions.
Evidence
“At the Rockefeller Foundation, we believe this moment requires moving from diagnosis to action.” [134]. “And third, what concrete commitments can we land this year?” [133].
Major discussion point
Practical Metrics and Implementation Strategies
Topics
Financial mechanisms | Follow-up and review | Artificial intelligence
Sushant Kumar
Speech speed
78 words per minute
Speech length
278 words
Speech time
212 seconds
Opening computational resources via working report
Explanation
Sushant stresses the need to open up all computational resources for public‑interest AI and announces the release of a working version of the report to gather feedback and refine implementation.
Evidence
“opening up computational resources or in fact all resources that are necessary for development of AI in public interest and for real world impact.” [17]. “Opening up computational resources for new AI futures, new AI world is possible.” [18]. “And today is an opportunity when we release a working version of that report and invite inputs, feedback, comments, and suggestions, which we will work through over the next few months.” [19].
Major discussion point
Practical Metrics and Implementation Strategies
Topics
Artificial intelligence | Information and communication technologies for development | Capacity development
Dr. Saurabh Garg
Speech speed
126 words per minute
Speech length
1172 words
Speech time
555 seconds
Compute as the defining barrier
Explanation
Dr. Garg argues that access to GPUs and high‑performance clusters is the primary bottleneck for AI ecosystems and calls for shared, affordable, reliable compute infrastructure that is not concentrated in a few geographies.
Evidence
“compute no doubt is today’s defining barrier the access to GPUs accelerators high-performance clusters is a major issue for all AI ecosystems but the issue is how it can be made distributable affordable and reliable across and not concentrated in a few geographies.” [3]. “So looking at some of these issues, on what mechanisms can be done to facilitate accessible and affordable computing resources by improving utilization rates and reducing transaction costs and also to lower barriers for access regardless of geography.” [36].
Major discussion point
Democratizing AI Compute as Public Infrastructure
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Maitri – modular digital public good for shared compute
Explanation
Dr. Garg introduces the Maitri platform, a non‑binding, voluntary, modular digital public good that enables countries to adopt, customise and build shared compute and data infrastructure.
Evidence
“Maitri, M-A-I-T-R-I, standing for Multi-Stakeholder AI for Trusted and Resilient Infrastructure, to be developed as a digital public good that countries can adopt, customize, and build upon.” [59]. “And the platform has been termed as Maitri, which is friendship in Hindi.” [60]. “And obviously, it is a non-binding, voluntary, modular approach.” [61].
Major discussion point
South‑South Partnerships and Collaborative Platforms
Topics
Artificial intelligence | Data governance | Information and communication technologies for development
Six foundational pillars for AI development
Explanation
Dr. Garg summarises the discussion into six pillars—compute, capability, collaboration, connectivity, compliance and context—that together form a roadmap for responsible AI deployment.
Evidence
“compute, capability, collaboration, connectivity, compliance and context” [3]. “From these discussions, there were six foundational pillars that we had to address.” [120].
Major discussion point
Complementary Pillars: Data, Open‑Source, Talent, Governance
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Andrew Sweet
Speech speed
108 words per minute
Speech length
1001 words
Speech time
551 seconds
Philanthropy as catalytic, risk‑reducing partner
Explanation
Andrew frames philanthropy as a catalyst that reduces risk, unlocks capital and convenes unlikely partnerships to accelerate South‑South AI collaboration.
Evidence
“Philanthropy’s role is to be catalytic, to reduce risk, unlock capital, and convene unlikely partnerships that accelerate progress.” [66]. “Is there an opportunity for Asia’s philanthropic networks to coordinate shared compute and infrastructure, pulling resources from places like India, Indonesia, and other nations, rather than competing, and what would unlock that collaboration?” [63]. “The Philanthropy Asia Alliance brings together 80 members and partners to address Asia’s interconnected challenges through collaborative philanthropy.” [69].
Major discussion point
South‑South Partnerships and Collaborative Platforms
Topics
Financial mechanisms | Artificial intelligence | The enabling environment for digital development
March 31 review deadline for report feedback
Explanation
Andrew announces a concrete timeline, giving stakeholders until March 31 to review the released report and submit reactions, turning the discussion into actionable next steps.
Evidence
“Thanks to our partners at CalPA for putting this together, especially for the authors. This is now officially out there. Copies are available out there; you have until March 31st to review the document and submit your reactions.” [129].
Major discussion point
Practical Metrics and Implementation Strategies
Topics
Follow‑up and review | Financial mechanisms | Artificial intelligence
Martin Tisné
Speech speed
183 words per minute
Speech length
1162 words
Speech time
379 seconds
Under-used data centers and need for contextual data
Explanation
Martin warns that many data centers become ‘white elephants’ if not fully utilized and stresses that contextual, multilingual data is essential for locally relevant AI outcomes.
Evidence
“You do also have data centers that are effectively kind of white elephants and that are not used anywhere close to full capacity.” [43]. “They need to have contextual data in their languages with all of the diversity and the incredible richness that typifies their cultures available in order to create contextual localized AI that actually serves outcomes that people care about.” [47].
Major discussion point
South‑South Partnerships and Collaborative Platforms
Topics
Data governance | Artificial intelligence | Information and communication technologies for development
Data bottleneck and under‑funded open‑source dependencies
Explanation
Martin points out a critical data bottleneck and notes that lower‑tier open‑source projects rely on minimal funding, threatening the sustainability of the AI stack.
Evidence
“I think I’ve probably run out of time to talk about the data bottleneck.” [102]. “There’s a bottom tier of dependencies in open source that are run on a shoestring, you know, by a few like critical, amazing people working overnight as volunteers.” [103]. “And there’s very few organizations, one of them, which is a part of Current AI, ROOST, which looks at robust open source trust and safety, that are funding those critical dependencies.” [104].
Major discussion point
Complementary Pillars: Data, Open‑Source, Talent, Governance
Topics
Artificial intelligence | Data governance | Financial mechanisms
Data‑trust models for equitable South‑South exchange
Explanation
Martin highlights emerging data‑trust mechanisms as a way to enable secure, sovereign data sharing between Global South partners.
Evidence
“different forms of data stewardships, whether data trusts or others.” [97]. “And so I think for countries to be able to exercise sovereignty, they need to have contextual AI.” [78].
Major discussion point
South‑South Partnerships and Collaborative Platforms
Topics
Data governance | Artificial intelligence | Closing all digital divides
Vilas Dhar
Speech speed
204 words per minute
Speech length
1556 words
Speech time
456 seconds
New participatory institutional frameworks
Explanation
Vilas calls for innovative, participatory institutional intermediaries that move beyond elite, territorial models to support inclusive AI infrastructure development.
Evidence
“I think in many ways what we need is a new institutional framework that goes from the elites who are participating in their own places to something that feels deeply participatory.” [76]. “It requires us to innovate a new institutional set of intermediaries.” [77]. “what are the institutions we need to build in the next 12 months that connect the dots around all of these different pieces and support this transformation at scale?” [75].
Major discussion point
South‑South Partnerships and Collaborative Platforms
Topics
The enabling environment for digital development | Capacity development | Institutional frameworks
AI diffusion is passive – need active institutions
Explanation
Vilas critiques the notion of AI diffusion as a passive trickle‑down process and argues that active institutions are required to translate compute into tangible societal benefits.
Evidence
“AI diffusion is a passive concept.” [108]. “For AI to benefit everyone requires a direct and active impact.” [110]. “It requires us to step in and say what are the institutions we have to build that actually physically and metaphorically transform the idea of compute infrastructure to be something that everybody can use.” [31].
Major discussion point
Complementary Pillars: Data, Open‑Source, Talent, Governance
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Dr. Shikha Gitau
Speech speed
174 words per minute
Speech length
1259 words
Speech time
432 seconds
Compute Demand and AI Investment Readiness Indices
Explanation
Dr. Gitau presents two new metrics, the Compute Demand Index and the AI Investment Readiness Index, to quantify GPU-hour needs and assess a country’s capacity to host AI projects.
Evidence
“So we just released a compute demand index.” [86]. “which we are calling AI Investment Readiness Index.” [87]. “So when I come to India and say I need 2.5 million GPU hours a year, how many of them can you give me?” [5]. “Africa needs 2.5 million hours of GPU hours a year, 7.5 million for the next three years to be able to start computing well.” [121].
Major discussion point
Practical Metrics and Implementation Strategies
Topics
Monitoring and measurement | Artificial intelligence | Data governance
Investment readiness must include talent, data, use‑cases
Explanation
She stresses that without talent, power, data sets, models and concrete use‑cases, investments in compute are wasted, urging a holistic view of readiness.
Evidence
“If they don’t have talent, they don’t have the power to run it, they don’t have the data sets, they don’t have models, they don’t have use cases to build on top of it, you’re wasting that money.” [107]. “When you think about investment readiness when it comes to compute, you have to look about all these things.” [92].
Major discussion point
Complementary Pillars: Data, Open‑Source, Talent, Governance
Topics
Artificial intelligence | Capacity development | Financial mechanisms
Shaun Seow
Speech speed
159 words per minute
Speech length
576 words
Speech time
217 seconds
Compute may be overrated – broader AI stack matters
Explanation
Shaun argues that focusing solely on compute overlooks other critical factors such as energy, latency, hardware, and application layers that drive impact and value capture.
Evidence
“I’m going to join Martin in actually agreeing that compute is actually a bit overrated.” [49]. “Think also of the physical limitations of actually the latency of sharing compute between India and Indonesia.” [7]. “When you think about energy, hardware, compute, models and applications, the top layer applications is really what will drive value capture for the economics as well as the impact, social impact.” [52].
Major discussion point
Democratizing AI Compute as Public Infrastructure
Topics
Artificial intelligence | Environmental impacts | Closing all digital divides
Skills gap in Asia limits AI impact – need philanthropy-subsidized training
Explanation
Shaun points out a large skills gap in Asia that hampers AI optimization and calls for ecosystem support, including philanthropy-funded training programs.
Evidence
“the skills gap in Asia is actually huge and that could be really what’s stopping us from really optimizing maximizing the power of AI…” [54]. “When you think about the cost of compute is coming down, when you think about the emergence of new clouds, GPU for a service, I think these developments are actually going to be good for the unleashing of AI, for social impact, for economic capture.” [55].
Major discussion point
Complementary Pillars: Data, Open‑Source, Talent, Governance
Topics
Capacity development | Artificial intelligence | Financial mechanisms
Agreements
Agreement points
Compute infrastructure alone is insufficient for meaningful AI democratization
Speakers
– Martin Tisné
– Vilas Dhar
– Dr. Shikha Gitau
– Dr. Saurabh Garg
Arguments
Compute is important but can be overplayed; unused data centers become “white elephants” without proper utilization
AI democratization requires the full stack: compute, data governance, political agency, and institutional capacity
Investment readiness is crucial – countries need talent, power, governance, and use cases before investing in compute infrastructure
Six foundational pillars are needed: compute, capability, collaboration, connectivity, compliance, and context
Summary
All speakers agree that focusing solely on compute infrastructure without addressing the broader ecosystem (talent, governance, data, use cases) leads to underutilized resources and failed democratization efforts
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Need for practical, quantified approaches rather than abstract discussions
Speakers
– Dr. Shikha Gitau
– Andrew Sweet
– Sushant Kumar
Arguments
Africa needs 2.5 million GPU hours annually, with only 5% currently available, requiring practical quantification of compute demands
The conversation should focus on actionable ideas and concrete commitments rather than theoretical discussions
Operationalizing AI democratization requires moving beyond intellectual thinking to practical implementation for billions in the Global South
Summary
Speakers emphasize the importance of moving from theoretical frameworks to concrete, measurable implementation with specific targets and actionable outcomes
Topics
Artificial intelligence | Information and communication technologies for development | Monitoring and measurement
Traditional sovereignty concepts are inadequate for AI governance
Speakers
– Martin Tisné
– Vilas Dhar
Arguments
Traditional territorial sovereignty concepts are inadequate; relational sovereignty focused on agency and choice is more appropriate
The response to over-dependence shouldn’t be internal dependence but rather interdependence and mutual value exchange
Summary
Both speakers reject traditional territorial sovereignty models in favor of more collaborative, agency-focused approaches that emphasize interdependence and mutual benefit
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Access to compute is more important than ownership
Speakers
– Shaun Seow
– Dr. Saurabh Garg
Arguments
Compute ownership alone doesn’t guarantee outcomes; access and utilization are more important than physical possession
Compute capacity should function as an enabling platform and digital public good, focusing on intelligent prioritization rather than rationing
Summary
Both speakers prioritize accessible, shared compute resources over individual ownership models, emphasizing utilization and public benefit
Topics
Artificial intelligence | Information and communication technologies for development
Similar viewpoints
Both emphasize the importance of local context, cultural relevance, and fair exchange relationships rather than extractive models in AI development
Speakers
– Martin Tisné
– Dr. Shikha Gitau
Arguments
Countries need contextual AI with local data in their languages and cultures to exercise true sovereignty
True partnerships should involve reciprocal agreements where data access is exchanged for compute infrastructure
Topics
Artificial intelligence | Data governance | Social and economic development
Both advocate for creating new types of intermediary institutions that can bridge technical capabilities, policy implementation, and government support
Speakers
– Vilas Dhar
– Andrew Sweet
Arguments
New institutional frameworks are needed that connect technical sophistication with policy impact and government support
Philanthropic organizations should focus on building new institutional intermediaries that connect technical sophistication with policy impact and government support
Topics
The enabling environment for digital development | Financial mechanisms | Artificial intelligence
Both emphasize the importance of countries having agency and control over their AI development rather than being passive consumers, with India serving as a model
Speakers
– Dr. Saurabh Garg
– Deepali Khanna
Arguments
Countries seek agency in AI, not just access, wanting AI systems that reflect their development priorities and contexts
India is building one of the world’s most ambitious public interest compute ecosystems with 38,000 GPUs as public infrastructure
Topics
Artificial intelligence | Information and communication technologies for development
Unexpected consensus
Skepticism about compute-centric approaches despite the session’s focus on compute democratization
Speakers
– Martin Tisné
– Shaun Seow
– Vilas Dhar
Arguments
Compute is important but can be overplayed; unused data centers become “white elephants” without proper utilization
Compute ownership alone doesn’t guarantee outcomes; access and utilization are more important than physical possession
Governments with limited capacity shouldn’t prioritize compute infrastructure over basic development needs like energy access
Explanation
Despite the session being specifically about democratizing compute access, multiple speakers expressed skepticism about overemphasizing compute infrastructure, suggesting the field may be moving toward more holistic approaches
Topics
Artificial intelligence | Social and economic development | Environmental impacts
Agreement on the need for South-South collaboration models
Speakers
– Dr. Shikha Gitau
– Vilas Dhar
– Dr. Saurabh Garg
Arguments
True partnerships should involve reciprocal agreements where data access is exchanged for compute infrastructure
The response to over-dependence shouldn’t be internal dependence but rather interdependence and mutual value exchange
The Maitri platform represents a collaborative approach to shared access for compute, data, and partnerships
Explanation
Unexpected consensus emerged around moving away from North-South dependency models toward collaborative South-South partnerships, suggesting a shift in how global AI development might be structured
Topics
Artificial intelligence | Information and communication technologies for development | Financial mechanisms
Overall assessment
Summary
Strong consensus exists around moving beyond compute-only approaches to comprehensive AI ecosystems, emphasizing practical implementation over theoretical frameworks, and developing new models of sovereignty and collaboration
Consensus level
High level of consensus with significant implications for AI democratization strategy – suggests the field is maturing beyond infrastructure-focused approaches toward more holistic, collaborative models that prioritize agency, context, and practical outcomes over resource ownership
Differences
Different viewpoints
Importance and prioritization of compute infrastructure ownership versus access
Speakers
– Dr. Saurabh Garg
– Martin Tisné
– Shaun Seow
– Vilas Dhar
Arguments
India is building one of the world’s most ambitious public interest compute ecosystems with 38,000 GPUs as public infrastructure
Compute is important but can be overplayed; unused data centers become “white elephants” without proper utilization
Compute ownership alone doesn’t guarantee outcomes; access and utilization are more important than physical possession
The response to over-dependence shouldn’t be internal dependence but rather interdependence and mutual value exchange
Summary
Dr. Garg emphasizes India’s massive investment in GPU infrastructure as public good, while Tisné, Seow, and Dhar question whether ownership of compute infrastructure is the right approach, arguing that access, utilization, and interdependence are more important than physical possession
Topics
Artificial intelligence | Information and communication technologies for development | The enabling environment for digital development
Feasibility and desirability of cross-border compute sharing
Speakers
– Shaun Seow
– Dr. Shikha Gitau
Arguments
Physical limitations like latency and data sovereignty requirements prevent effective compute sharing between distant countries
Africa needs 2.5 million GPU hours annually, with only 5% currently available, requiring practical quantification of compute demands
Summary
Seow argues that technical limitations and data sovereignty requirements make cross-border compute sharing impractical, while Gitau’s quantified demands for African compute needs imply that cross-border partnerships and sharing arrangements are necessary and feasible
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Closing all digital divides
Development priorities: AI infrastructure versus basic development needs
Speakers
– Vilas Dhar
– Dr. Saurabh Garg
– Dr. Shikha Gitau
Arguments
Governments with limited capacity shouldn’t prioritize compute infrastructure over basic development needs like energy access
India is building one of the world’s most ambitious public interest compute ecosystems with 38,000 GPUs as public infrastructure
Investment readiness is crucial – countries need talent, power, governance, and use cases before investing in compute infrastructure
Summary
Dhar questions whether developing countries should invest in AI compute when they have more pressing needs like energy access, while Garg champions India’s massive AI infrastructure investment, and Gitau takes a middle position emphasizing readiness before investment
Topics
Social and economic development | Environmental impacts | Artificial intelligence
Unexpected differences
Role of energy resources in AI development versus basic development
Speakers
– Vilas Dhar
– Dr. Saurabh Garg
Arguments
Governments with limited capacity shouldn’t prioritize compute infrastructure over basic development needs like energy access
India is building one of the world’s most ambitious public interest compute ecosystems with 38,000 GPUs as public infrastructure
Explanation
This disagreement is unexpected because both speakers are generally aligned on AI democratization goals, but Dhar fundamentally questions whether energy resources should be diverted to AI compute rather than basic development needs, while Garg champions massive AI infrastructure investment
Topics
Environmental impacts | Social and economic development | Artificial intelligence
Optimism about future AI efficiency versus current infrastructure needs
Speakers
– Dr. Saurabh Garg
– Dr. Shikha Gitau
Arguments
Future AI models may require less compute as algorithms evolve toward smaller, domain-specific models rather than large general ones
Africa needs 2.5 million GPU hours annually, with only 5% currently available, requiring practical quantification of compute demands
Explanation
Unexpected because Garg suggests the compute problem may solve itself through more efficient algorithms, while Gitao focuses on immediate, quantified infrastructure needs, representing different time horizons and levels of technological optimism
Topics
Artificial intelligence | Environmental impacts | Closing all digital divides
Overall assessment
Summary
The main disagreements center on the relative importance of compute infrastructure ownership versus access, the feasibility of cross-border sharing, and whether developing countries should prioritize AI infrastructure over basic development needs
Disagreement level
Moderate disagreement with significant implications – while speakers share the goal of AI democratization, their different approaches could lead to very different policy recommendations and resource allocation decisions. The disagreements reflect fundamental tensions between sovereignty and interdependence, immediate needs versus future potential, and infrastructure-first versus ecosystem-first approaches
Partial agreements
Partial agreements
All agree that AI democratization requires more than just compute infrastructure and must include comprehensive ecosystems, but they disagree on the relative importance and sequencing of different components like data, governance, talent, and infrastructure
Speakers
– Martin Tisné
– Dr. Shikha Gitau
– Vilas Dhar
Arguments
Countries need contextual AI with local data in their languages and cultures to exercise true sovereignty
Investment readiness is crucial – countries need talent, power, governance, and use cases before investing in compute infrastructure
AI democratization requires the full stack: compute, data governance, political agency, and institutional capacity
Topics
Artificial intelligence | Capacity development | Human rights and the ethical dimensions of the information society
Both support collaborative, non-territorial approaches to AI governance, but Garg focuses on practical platform implementation while Tisné emphasizes conceptual frameworks around relational sovereignty
Speakers
– Dr. Saurabh Garg
– Martin Tisné
Arguments
The Maitri platform represents a collaborative approach to shared access for compute, data, and partnerships
Traditional territorial sovereignty concepts are inadequate; relational sovereignty focused on agency and choice is more appropriate
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance
Both see important roles for philanthropic organizations in AI democratization, but Dhar focuses on building new institutional intermediaries while Seow emphasizes practical demand aggregation and cost subsidization
Speakers
– Vilas Dhar
– Shaun Seow
Arguments
New institutional frameworks are needed that connect technical sophistication with policy impact and government support
Philanthropic organizations should aggregate demand to negotiate better pricing and subsidize compute costs for impact organizations
Topics
Financial mechanisms | Artificial intelligence | The enabling environment for digital development
Takeaways
Key takeaways
AI democratization requires moving beyond just compute access to building comprehensive ecosystems including data governance, talent development, open source infrastructure, and institutional capacity
India’s AI mission with 38,000 GPUs represents a model for public interest compute infrastructure that other countries can adapt and customize
Compute should be treated as a public utility with intelligent prioritization rather than rationing, focusing on public interest applications
True AI sovereignty means agency and contextual AI development, not just physical ownership of infrastructure in one’s territory
Practical quantification of compute needs is essential – Africa needs 2.5 million GPU hours annually but currently has only 5% of that capacity
Investment readiness (talent, power, governance, use cases) is as important as compute infrastructure to avoid creating ‘white elephant’ data centers
South-South partnerships should involve reciprocal value exchanges rather than traditional donor-recipient models
Future AI models may evolve toward smaller, domain-specific applications requiring less compute power
Philanthropic organizations have a crucial catalytic role in aggregating demand, subsidizing costs, and building new institutional frameworks
Resolutions and action items
Release of the working report ‘Opening up computational resources for new AI futures’ with feedback period until March 31st
Development of the Maitri platform (Multi-Stakeholder AI for Trusted and Resilient Infrastructure) as a digital public good for countries to adopt and customize
Creation of compute demand index and AI Investment Readiness Index to provide practical frameworks for countries
Need to develop concrete public interest frameworks covering models, talent, and data sharing beyond just compute
Build new institutional intermediaries that connect technical sophistication with policy impact and government support
Establish mechanisms for philanthropic organizations to aggregate demand and negotiate better compute pricing for impact organizations
Unresolved issues
How to effectively scale data stewardship models like data trusts to meaningful levels
Addressing the data bottleneck and innovation gap in privacy-preserving data sharing mechanisms
Funding models for critical open source AI dependencies that currently rely on volunteer efforts
Physical limitations of compute sharing between distant countries due to latency and data sovereignty requirements
Balancing development priorities between AI compute infrastructure and basic needs like energy access
Creating sustainable governance frameworks that are robust enough to build trust yet flexible for diverse cultural contexts
Determining optimal resource allocation between large general AI models versus smaller domain-specific models
Suggested compromises
Focus on interdependence rather than complete independence – building collaborative networks instead of isolated national AI stacks
Adopt a modular, non-binding approach through platforms like Maitri that countries can customize based on their specific contexts and needs
Combine territorial sovereignty concepts with relational sovereignty models that emphasize agency and choice over pure control
Balance compute infrastructure investments with comprehensive ecosystem development including talent, governance, and use case development
Use philanthropic capital to bridge the gap between private market limitations and government capacity constraints
Shift from passive AI diffusion models to active, direct impact approaches that ensure benefits reach intended communities
Thought provoking comments
I do have a worry that we could end up in a few years’ time in a world where we succeed in having compute capacity in inverted commas, in a number of countries, including in the global south, but where effectively the data centers are not used… You do also have data centers that are effectively kind of white elephants and that are not used anywhere close to full capacity.
Speaker
Martin Tisné
Reason
This comment challenged the prevailing assumption that simply providing compute infrastructure equals democratization. It introduced the critical distinction between having infrastructure and actually utilizing it effectively, forcing the discussion to move beyond hardware to consider the full ecosystem needed for meaningful AI development.
Impact
This shifted the conversation from a focus on compute quantity to compute utility and effectiveness. It set up the framework for subsequent speakers to address the broader stack of requirements (data, talent, governance) and influenced Dr. Shikha’s detailed response about quantifying actual compute needs versus arbitrary demands.
AI diffusion to me in its core concept… sounds a little too much like something that as an American I know too intimately as trickle-down economics. The idea that if we made the rich as rich as possible somehow the benefits would filter down to everybody else… AI diffusion is a passive concept.
Speaker
Vilas Dhar
Reason
This analogy was particularly provocative as it reframed the entire AI democratization discourse by comparing it to a widely criticized economic theory. It challenged the assumption that concentrating AI capabilities in certain locations would naturally benefit broader populations, introducing the need for active, intentional distribution mechanisms.
Impact
This comment fundamentally shifted the discussion from passive infrastructure deployment to active institutional design. It led to deeper exploration of what specific institutions and mechanisms are needed to ensure AI benefits reach intended populations, influencing the conversation toward concrete implementation strategies.
Africa needs 2.5 million hours of GPU hours a year, 7.5 million for the next three years… Of these 7 million GPU hours we need in the next three years, Africa only has 5% of that… When I go to Vilas, I’m saying I need these GPU hours. It’s very practical.
Speaker
Dr. Shikha Gitau
Reason
This comment was groundbreaking because it moved the discussion from abstract concepts to concrete, quantifiable metrics. By providing specific numbers for compute demand, it demonstrated how to make infrastructure planning actionable and evidence-based rather than aspirational.
Impact
This transformed the conversation from theoretical discussions about compute needs to practical frameworks for measurement and planning. It provided a model for how other regions could approach compute planning and influenced the discussion toward more concrete, measurable approaches to AI infrastructure development.
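The gap Dr. Gitau quantifies follows from simple arithmetic. The sketch below uses only the figures quoted in the session (a 7.5 million GPU-hour three-year need, with roughly 5% of it currently accessible) to make the shortfall explicit:

```python
# Illustrative arithmetic using the figures quoted in the session;
# the 5% availability share is taken from Dr. Gitau's remarks.
three_year_demand_gpu_hours = 7_500_000   # stated three-year need for Africa
available_fraction = 0.05                 # share currently accessible

available = three_year_demand_gpu_hours * available_fraction
shortfall = three_year_demand_gpu_hours - available

print(f"Available: {available:,.0f} GPU hours over three years")
print(f"Shortfall: {shortfall:,.0f} GPU hours "
      f"({shortfall / three_year_demand_gpu_hours:.0%} of stated demand)")
# Available: 375,000 GPU hours over three years
# Shortfall: 7,125,000 GPU hours (95% of stated demand)
```

In other words, even before pricing or latency enter the picture, the continent would need roughly a twentyfold increase in accessible GPU hours to meet its own stated demand.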
Unlike when we talk of compute infrastructure, we are talking in terms of gigawatts… But when you talk of a human being, you talk in terms of only 2,000 calories… Which is not more than a 100-watt bulb for a day. So are we missing something out here?
Speaker
Dr. Saurabh Garg (referencing Vishal Sikka)
Reason
This observation was profound because it questioned the fundamental assumptions about compute requirements by comparing artificial and biological intelligence energy consumption. It suggested that current AI models might be fundamentally inefficient and that the solution might lie in better algorithms rather than more hardware.
Impact
This comment introduced a completely different perspective on the compute scarcity problem, suggesting that technological innovation in model efficiency could be more important than infrastructure scaling. It added a layer of complexity to the discussion by questioning whether the premise of needing massive compute resources was correct.
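The energy comparison in the comment holds up under a quick unit conversion (2,000 dietary calories are 2,000 kcal, and one kcal is 4,184 joules):

```latex
P \;=\; \frac{2000\,\text{kcal} \times 4184\,\text{J/kcal}}{86\,400\,\text{s/day}}
  \;\approx\; 96.9\,\text{W}
```

A human runs on roughly the power draw of a single 100-watt bulb, which is precisely the contrast being drawn with gigawatt-scale compute infrastructure.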
I’m thinking about indigenous data sovereignty which is a different type of concept… It’s about a pre-existing an inherent authority a relational authority over that which makes up a people… thinking about a very in some ways a quite rigid approach to sovereignty which is about control… and one which is more about agency and which is more relational.
Speaker
Martin Tisné
Reason
This comment introduced an entirely different conceptual framework for thinking about sovereignty in the AI context. By drawing from indigenous data sovereignty concepts, it challenged Western notions of territorial control and introduced ideas of relational authority and community agency.
Impact
This reframed the entire sovereignty discussion from a nation-state, territorial model to a more nuanced understanding of community agency and relational authority. It influenced Vilas Dhar’s subsequent comments about interdependence versus independence and helped shift the conversation toward more collaborative, community-centered approaches to AI governance.
If I had asked any development leader in the world 10 years ago, if you could have your dream of an extra gigawatt of energy capacity in your country, what would you do with it? I can’t imagine that any of them would have said, well, I want to use it to run a bunch of computation on things that may or may not have short-term economic value.
Speaker
Vilas Dhar
Reason
This comment was particularly insightful because it highlighted the opportunity cost of AI infrastructure investments in developing countries. It forced consideration of whether massive compute investments align with fundamental development priorities and human needs.
Impact
This comment brought the discussion back to core development principles and forced participants to justify AI infrastructure investments in terms of human development outcomes. It influenced the conversation toward thinking about AI infrastructure as part of broader development strategy rather than as an end in itself.
Overall assessment
These key comments fundamentally transformed what could have been a straightforward discussion about AI infrastructure into a nuanced examination of the assumptions, frameworks, and priorities underlying AI democratization efforts. The conversation evolved from initial enthusiasm about compute infrastructure to critical questioning of effectiveness, measurement, and alignment with development goals. Martin Tisné’s early challenge about ‘white elephant’ data centers set the tone for critical examination, while Dr. Shikha’s quantitative framework provided a practical counterpoint. Vilas Dhar’s economic analogies and development perspective grounded the discussion in broader social and economic contexts. The interplay between these perspectives created a rich dialogue that moved beyond technical solutions to examine institutional, governance, and philosophical questions about how AI development should proceed in the Global South. The discussion ultimately demonstrated the complexity of AI democratization and the need for multifaceted approaches that go far beyond simply providing compute infrastructure.
Follow-up questions
How can we ensure that compute infrastructure doesn’t become ‘white elephants’ that are underutilized?
Speaker
Martin Tisné
Explanation
Martin expressed concern that countries might succeed in having compute capacity but the data centers end up not being used to full capacity, highlighting the need for contextual AI and proper utilization strategies
How do we resource the open source ecosystem, particularly the critical dependencies that run on volunteer efforts?
Speaker
Martin Tisné
Explanation
Martin noted that while top-tier open source software is funded by large companies, there’s a bottom tier of critical dependencies run by volunteers that need better funding mechanisms
How can we achieve innovation in data sharing that respects privacy while contributing to outcomes?
Speaker
Martin Tisné
Explanation
Martin called it ‘a complete tragedy’ that there hasn’t been enough innovation in enabling people to share personal data in ways that both respect privacy and contribute to positive outcomes
What are the institutions we need to build in the next 12 months that connect the different elements of AI democratization?
Speaker
Vilas Dhar
Explanation
Vilas emphasized the need for new institutional frameworks that can connect compute, data governance, political agency, and other elements to support transformation at scale
How do we quantify actual compute needs rather than making arbitrary demands?
Speaker
Dr. Shikha Gitau
Explanation
Dr. Gitau highlighted the need for practical frameworks to determine specific compute requirements, noting that people often ask for GPUs without knowing how many they actually need
How can we formalize reciprocal agreements where global tech companies provide compute infrastructure in exchange for African data?
Speaker
Andrew Sweet (referencing Dr. Shikha Gitau’s previous work)
Explanation
This addresses the need for structured partnerships that ensure fair value exchange rather than one-sided data extraction
How do we aggregate demand to negotiate better pricing with cloud providers for startups and impact organizations?
Speaker
Sean Seow
Explanation
Sean suggested this as a practical approach to make compute more accessible, recognizing that physical sharing of compute across long distances has limitations
What frameworks can help ensure public interest considerations across compute, models, talent, and data sharing?
Speaker
Dr. Saurabh Garg
Explanation
Dr. Garg emphasized the need to develop comprehensive frameworks that go beyond just compute to address the full stack of AI democratization needs
Will future AI models retain their current high compute and energy requirements, or will we see more efficient, domain-specific models?
Speaker
Dr. Saurabh Garg
Explanation
Dr. Garg referenced Vishal Sikka’s point about the vast difference between AI compute requirements (gigawatts) versus human brain efficiency (100-watt equivalent), questioning whether current approaches are missing something fundamental
How do we build interconnectedness that creates mutual value exchange rather than client-dependency relationships?
Speaker
Vilas Dhar
Explanation
Vilas argued for moving from over-dependence to interdependence, where compute sharing creates products that benefit all parties rather than traditional buyer-seller relationships
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.