Open Forum #53: AI for Sustainable Development – Country Insights and Strategies
24 Jun 2025 14:45h - 15:45h
Session at a glance
Summary
This IGF Open Forum session focused on leveraging artificial intelligence for sustainable development, examining country-level strategies and challenges in achieving the UN Sustainable Development Goals through AI implementation. The discussion was moderated by Yu Ping Chan from UNDP’s Digital AI and Innovation Hub, with panelists representing diverse stakeholders including academia, private sector, civil society, and government perspectives from organizations like Carnegie Endowment, Co-Creation Hub Africa, Intel, and the Indian government.
The panelists identified several key challenges in the current AI landscape for sustainable development. These include significant digital divides between the Global North and South, with Africa accounting for only 0.1% of global computing capacity, and concentration of AI power among a few multinational players. Energy consumption emerged as a critical concern, with modern AI systems requiring enormous computational resources that could undermine climate sustainability goals. The discussion highlighted the tension between AI’s potential benefits and its environmental costs, emphasizing the need for more efficient, locally-relevant AI solutions.
A central theme was the importance of “local AI” – systems developed by and for local communities rather than imposed from external sources. Speakers stressed that effective AI for development must involve affected communities in design and governance, addressing linguistic diversity and cultural contexts. The Indian government representative shared their DPI (Digital Public Infrastructure) model as an example of successful public-private partnership, making AI tools and datasets accessible at low cost while maintaining responsible AI principles.
Funding challenges were extensively discussed, with traditional donor models proving inadequate for the scale needed. New collaborative funding approaches are emerging, but they require more localized, non-extractive models that can become self-sustaining. The session concluded with cautious optimism about AI’s potential for sustainable development, contingent on addressing equity gaps, building local capacity, and ensuring inclusive governance structures.
Key points
## Major Discussion Points:
– **AI Equity Gap and Digital Divides**: The discussion extensively covered the growing divide between Global North and Global South in AI access, with Africa accounting for only 0.1% of world computing capacity and significant disparities in funding, infrastructure, and technical capacity.
– **Localization and Community Engagement in AI Development**: Panelists emphasized the critical need for AI solutions to be developed locally by and with communities, rather than being “helicoptered in from afar,” including considerations of linguistic diversity, cultural context, and user-centered design.
– **Sustainable AI vs. AI for Sustainability**: The conversation distinguished between making AI itself more sustainable (addressing energy consumption, environmental impact) and leveraging AI to achieve broader sustainability goals and SDGs.
– **Funding Models and Governance Challenges**: Discussion of how current funding paradigms often don’t align with local needs, the emergence of collaborative pooled funding efforts, and the importance of multi-stakeholder governance approaches in AI ecosystem development.
– **Evidence-Based Implementation and Capacity Building**: Strong emphasis on the need for concrete evidence of AI impact, moving from “hype to hope to truth,” and prioritizing capacity building and skills development as fundamental requirements for inclusive AI adoption.
## Overall Purpose:
The discussion aimed to examine how AI can be leveraged at the country level to achieve sustainable development goals, with particular focus on addressing challenges of bias, exclusion, capacity gaps, and infrastructure limitations. The session sought to translate global AI discussions into practical, in-country impact strategies while fostering international cooperation and multi-stakeholder collaboration.
## Overall Tone:
The discussion began with cautious optimism tempered by realism, as evidenced by the audience’s initial 5.0 rating on AI optimism for sustainable development. The tone remained constructively critical throughout, with panelists acknowledging both significant challenges and promising opportunities. Speakers demonstrated practical experience-based perspectives rather than theoretical enthusiasm, emphasizing the need for patient, evidence-based approaches. The conversation maintained a collaborative, solution-oriented atmosphere while honestly addressing systemic barriers and inequities in the current AI landscape.
Speakers
**Speakers from the provided list:**
– **Yu Ping Chan** – Head of Digital Partnerships and Engagements at the United Nations Development Program’s Digital AI and Innovation Hub; Session moderator
– **Armando Guio Espanol** – Representative of the global network of internet and society centers (Network of Centers); Academic network representative working on AI impact analysis and evidence gathering
– **Oluwaseun Adepoju** – Managing Director of the Co-Creation Hub Africa; Pan-African Innovation Enabler focusing on AI solutions for societal issues
– **Aubra Anthony** – Non-Resident Scholar at the Carnegie Endowment for International Peace; Researcher focusing on AI funding models and governance in the Global South
– **Anshul Sonak** – Principal Engineer and Global Director at Intel Digital Readiness Programs; Private sector representative working on digital readiness and AI capacity building
– **Participant** – (Multiple instances, appears to be the same person) Additional Secretary Abhishek from the Ministry of Electronics and Information Technology of the Government of India; Government representative discussing India’s DPI model and AI initiatives
– **Audience** – (Multiple instances) Various attendees asking questions, including Jasmine Khoo from Hong Kong
**Additional speakers:**
– **Abhishek** – Additional Secretary, Ministry of Electronics and Information Technology, Government of India (mentioned by name in later parts of the transcript, same person as “Participant”)
Full session report
# Leveraging Artificial Intelligence for Sustainable Development: A Multi-Stakeholder Dialogue on Country-Level Strategies
## Executive Summary
This IGF Open Forum session, jointly organized by UNDP, Carnegie Endowment for International Peace, and Co-Creation Hub Africa, examined practical strategies for leveraging artificial intelligence to achieve the UN Sustainable Development Goals. Moderated by Yu Ping Chan from UNDP’s Digital AI and Innovation Hub, the interactive session brought together perspectives from academia, private sector, civil society, and government to address challenges and opportunities in AI for sustainable development.
The discussion revealed significant challenges including digital divides, resource disparities, and energy consumption concerns, while highlighting promising approaches through community-centered development, evidence-based implementation, and innovative funding models. Audience polling showed moderate optimism (5.0 average on a 1-10 scale) with priorities focused on AI regulation, inclusion, and capacity building.
## Opening Context and Audience Engagement
Yu Ping Chan opened by noting that UNDP works in over 170 countries and territories, with 130+ countries engaged on digital transformation and AI for SDGs. The session used interactive Slido polling to gauge audience perspectives, revealing that participants rated their optimism about AI for sustainable development at an average of 5.0 out of 10, with many rating it at 3, indicating cautious optimism.
Priority polling showed audience focus on AI regulation and governance, inclusion and equity, and capacity building as top concerns. The interactive format was designed to complement another IGF session on international cooperation for AI.
## Evidence-Based Approaches and Information Gaps
Armando Guio Español from the Network of Centers emphasized the critical need for evidence-based decision making in AI development, highlighting significant information asymmetries between stakeholders. He noted that while there is considerable discussion about AI’s potential impact on employment, “Instead of replacement of jobs, for example, what we are seeing right now is augmentation, actually improvement in the work some workers around the world are developing.”
Guio Español stressed the importance of rigorous analysis to understand what AI technologies can actually deliver versus theoretical promises, advocating for methodological approaches developed with MIT colleagues to bridge the gap between AI enthusiasm and practical implementation realities.
## Global AI Equity and Resource Disparities
Aubra Anthony from the Carnegie Endowment for International Peace provided stark evidence of global AI inequities, revealing that “Africa currently accounts for only 0.1% of the world’s computing capacity, and just 5% of the AI talent in Africa has access to the compute power it needs.” This data highlighted how AI development is concentrated among a few multinational players, creating systemic barriers for Global South countries.
However, Anthony reframed these constraints as potential innovation opportunities, suggesting that resource limitations could drive more efficient and locally relevant AI solutions. She advocated for “collaborative pooled funding efforts” and emphasized that AI development must be “non-extractive, self-sustaining, and involve communities impacted by AI.”
## Local AI Development and Community Engagement
Oluwaseun Adepoju from Co-Creation Hub Africa introduced the concept of transitioning “gradually from hype to hope” in AI development, acknowledging that “before we transition from hope to truth, we’re going to make a lot of mistakes, we’re going to have a lot of losses, and we’re also going to see a lot of success at the end of the day.”
Adepoju emphasized that “AI is local and must be built by local people,” explaining that effective local AI development requires substantial community engagement: “Forced to build anything in like a six month project, we spent the first two or three months just engaging the people, co-creating the problem statement with them.” He used the example of plantain classification to illustrate how local context shapes AI applications, noting that what constitutes “ripe” plantain varies significantly across different communities and use cases.
## India’s Digital Public Infrastructure Model
Abhishek, Additional Secretary from India’s Ministry of Electronics and Information Technology, presented India’s Digital Public Infrastructure (DPI) approach to AI development. He explained how India is applying successful DPI principles to AI, providing basic building blocks including compute access, datasets, and testing tools.
The Indian model includes making 35,000 GPUs available at low cost ($1 per GPU per hour) for AI developers, startups, researchers, and students. Abhishek emphasized that this infrastructure-first approach focuses on solving practical problems such as healthcare diagnosis, education personalization, and agricultural advisory services, ensuring AI applications address real societal needs.
India committed to making AI-based healthcare applications available through an AI use case repository for Global South countries, particularly Africa, at the upcoming AI Impact Summit in February.
## Environmental Sustainability and Energy Considerations
A crucial dimension addressed energy consumption concerns. Abhishek noted that “when we build compute systems for AI applications and models, the amount of energy that is needed for powering these systems is very, very high,” mentioning that an H200 GPU consumes power equivalent to one U.S. home. He emphasized the need to balance AI productivity gains with renewable energy objectives and carbon footprint reduction.
This environmental perspective introduced important constraints often overlooked in discussions of AI’s potential benefits, with Abhishek suggesting the need to “prioritise which are the tasks which AI should do, which are the tasks that AI need not do.”
## Personal Productivity and Skills Development
Anshul Sonak from Intel highlighted research showing AI can save “15 hours per week” in personal productivity tasks, emphasizing that “bringing AI skills to everyone should be a national priority.” He advocated for systematic approaches to capacity building that extend beyond technical training to include diverse expertise areas including AI ethicists and security experts.
Sonak stressed the importance of sustainable AI development and multi-stakeholder education approaches that recognize successful AI implementation requires diverse skills and perspectives rather than purely technical capabilities.
## Funding Models and Commercial Viability
The discussion revealed tensions around funding approaches. Anthony identified that “historical donor-led funding approaches are insufficient” and advocated for collaborative pooled funding that better aligns with local needs. Adepoju argued for a patient capital approach, suggesting AI innovations should have at least one year to prove they are safe and equitable before facing commercialisation pressure.
However, the Indian government representative emphasized that “AI applications must address real problem statements to be commercially viable and publicly fundable,” highlighting ongoing debates about balancing innovation patience with practical sustainability requirements.
## UNDP’s AI Hub Initiative
Chan announced that UNDP launched an AI Hub for Sustainable Development earlier this month with Italy as part of the G7 Presidency, aimed at accelerating AI adoption in Africa through international cooperation and resource sharing. This represents institutional commitment to supporting AI development in regions with limited resources.
## Audience Questions and Measurement Challenges
Audience questions highlighted ongoing challenges in measuring and tracking performance of locally-built AI systems across different contexts. Questions about efficient measurement approaches and performance tracking indicate gaps in developing appropriate evaluation frameworks for diverse AI applications.
The discussion of different levels of “local” in AI development – from language and cultural adaptation to local problem-solving – revealed the complexity of implementing truly community-centered AI solutions.
## Key Commitments and Next Steps
Several concrete commitments emerged:
– India’s continued provision of low-cost GPU access and healthcare AI applications for Global South countries
– Carnegie Endowment’s commitment to publishing research on funding needs and market inefficiencies in AI ecosystem development
– Co-Creation Hub’s maintenance of patient capital approaches prioritizing safety and equity
– UNDP’s AI Hub for Sustainable Development to accelerate AI adoption in Africa
## Conclusion
The session demonstrated a pragmatic approach to AI for sustainable development that moves beyond theoretical discussions toward evidence-based implementation strategies. The emphasis on community engagement, local ownership, and environmental sustainability indicates growing sophistication in addressing AI development challenges.
The moderate optimism reflected in audience polling, combined with focus on regulation, inclusion, and capacity building, suggests recognition that realizing AI’s potential for sustainable development requires addressing fundamental structural challenges around access, governance, and resource distribution.
The combination of immediate commitments (GPU access, healthcare applications) with longer-term research initiatives (funding models, measurement frameworks) provides a balanced approach addressing both urgent needs and systemic challenges. The session’s interactive format and inclusion of diverse global perspectives, particularly from the Global South, offers a model for future AI governance discussions that prioritize community needs alongside technological advancement.
Session transcript
Yu Ping Chan: Good afternoon, everyone, and welcome to IGF Open Forum session on AI for Sustainable Development, Country Insights and Strategies. Thank you also to everyone who’s joining us online from around the world. Good morning, good afternoon, good night, wherever you are. My name is Yuping Chan, I’m Head of Digital Partnerships and Engagements at the United Nations Development Program’s Digital AI and Innovation Hub. I’ll be moderating today’s session. And we’re very pleased to organize today’s session with the Carnegie Endowment for International Peace and the Co-Creation Hub Africa. Some of you might have been just at the session over in the other room about the international cooperation and the importance of international cooperation for AI. And here we’re proud to complement that with really an in-depth look at what it means to advance AI to leverage, to achieve the sustainable development goals on a country level, really turning global discussions into in-country impact while examining the challenges from bias and exclusion to capacity and infrastructure. So to kick everything off, and perhaps this session will be a little bit different from what you’ve experienced before, is to have a little bit more of an interactive flow with members of the audience. And this is where we wanted to use Slido to really take the pulse of the conversations in the room and what you yourself are thinking. So I’d like to first start by inviting everyone, including our online audience, to engage through Slido. And this is where we have our UNDP colleagues moderating online as well. So I also encourage our online audience to participate in the discussions. So the first question that I want to ask everyone to answer via your phones and devices is on screen right now. And you can scan the QR code and enter your answer to this question of what topic or theme would you like to hear about in this particular discussion? And this will be a chance for our speakers also to reflect on the results and prepare your answers so that we can really try and interact with the audience here. And so as we’re doing that and asking all of you to write back, to put in your responses via the Slido QR code. So scan the QR code, drop a word or phrase, there can be multiple words or phrases, and we’ll see the responses come on screen. And as you’re doing that, I also wanted to emphasize why from the United Nations Development Program, this particular issue is so important. The question of how we leverage AI at the country level to achieve sustainable development. We are present in over 170 countries and territories around the world. We work with more than 130 countries now to leverage digital and AI to achieve the sustainable development goals. And while we’re tremendously optimistic about the role that AI can play in supporting sustainable development, we’re also conscious of the significant AI equity gap that exists between the global south and the global north. And the question of how many people will potentially be left behind in this global AI revolution. And so we’re working to close this equity gap in a number of ways that I mentioned before. And a lot of our speakers today are working in this very practical area as well. How do we leverage AI in novel and exciting ways to achieve sustainable development? So very quickly, a few more minutes to fill in your responses, especially also those online. While I introduce the panel as well. On site, we have Mr. Oluwaseun Adepoju, Managing Director of the Co-Creation Hub Africa. 
Online we have Aubra Anthony, Non-Resident Scholar at the Carnegie Endowment for International Peace. Armando Guio-Español, Network of Centers. And Anshul Sonak, who is Principal Engineer and Global Director at the Intel Digital Readiness Programs. So it’s really a multi-stakeholder panel reflecting the best of the IGF. The idea that we both have government representatives, people from the international organizations, the technical sector, civil society, and really looking to see how we collectively as a community can come together. We also would have a representative from the Indian government, Additional Secretary Abhishek. But I believe he is actually in another session and hopefully will come over shortly. So very quickly now, we have reflected on this first question. And here I think we see a number of results where we have, for instance, the three keywords of AI regulation, inclusion, capacity building. And I believe the last one is media, AI in the media. And I also see a very interesting number, 2947217, which is possibly not a response. But you can see a little bit of the scale of the challenges, I think, that really are confronting us today with AI. But I’d also like the speakers perhaps to reflect on the areas that are highlighted in green because these seem to be top of mind among our audience, both online and in the room. And so thank you for those reflections, inclusion, AI regulation, and AI media. Let’s also take another second question. On a scale of one to 10, I ask the audience, how optimistic are you that AI can accelerate inclusive sustainable development within the next five years? One means very pessimistic, and 10 means very optimistic. Okay, we’re going towards the rather negative end. I see a lot of threes, a few sixes. And overall, the score seems to be 5.0, very evenly split in the room, with a rather strong emphasis on number three, which is quite negative. So I think this is actually particularly interesting. We’ve actually done a survey at UNDP through our human development report, which shows that overall, most people tend to be optimistic. So I’m wondering whether maybe it’s the IGF community that slants us a little bit more towards the conservative side. But that’s an interesting reflection, that overall, we are in the middle in terms of optimism about AI and its potential to accelerate inclusive sustainable development. And I think perhaps that points to some of the challenges that we collectively are trying to address. All right, thank you for your responses on that. I’m going to start then to ask, again, the panel to reflect on perhaps what they’ve seen from the audience, and maybe think about those in your responses to some of the questions. And again, we’re going to have an opportunity for the audience to also come in as well to keep this a little bit more interactive. So let’s start a little bit now by going to our distinguished panelists. And we’ll start with, I believe, Armando. And so the question is really in terms of setting the scene. And I will ask this of all our panelists at the same time. How do you see the current landscape of leveraging AI for sustainable development?
Armando Guio Espanol: Yeah, well, thank you very much to the UNDP for this invitation, of course. I really like the exercise of starting with these questions. I had been reflecting on this question, as you shared it with us with some time for preparation. And definitely, I think from the experience, so I’m here also representing the global network of internet and society centers. We are a network, an academic network of 130 centers around the world. And we have been basically working on this topic and on these issues. The first thing that we are trying to do, of course, is to bring more evidence about the impact of AI and what AI is really achieving, where we are, where we are standing, and basically try to navigate all this immense, big amount of information that we are processing. Decision makers are getting into a lot of information, evidence about the work that is going on right now, the kind of technologies available. So we want to really help decision makers, policy makers, and of course, colleagues around the world to look into the kind of technologies that there are, the real features that we have, and the real impact of this technology. The other thing is that we really want to have access to good evidence of where the impact is and what specifically the main issues related with AI are. One of the things we have been going through is, for example, we are measuring the impact of AI on the future of work, and we have been working with colleagues around the world, especially now with colleagues at MIT, and we have been developing an approach and analysis in order to analyze the real impact of AI, for example, in specific areas. What we are seeing right now is that instead of replacement of jobs, for example, what we are seeing right now is augmentation, actually improvement in the work some workers around the world are developing, and actually AI being helpful in that sense. So this is just an example of how we need to gather this kind of evidence. We need to gather these kinds of methodologies and analysis in order to make good decisions, and that’s why I think we can achieve a sustainable use of this technology and, at the same time, sustainable development, because basically what we are doing is trying to really understand what the technology is doing, and in that sense, we have to reduce those big information asymmetries that we have right now. I think that it’s good that we center ourselves on measuring the risks of AI, and that’s also going to be extremely helpful for some of the conversations we’re having on AI governance and AI regulation, as it has been also mentioned, but definitely we need good evidence in order for that process to take place in a way in which really it’s going to be helpful for many countries. And perhaps the last point in this first remark that I would like to make is that we are seeing a lot of efficiencies being gained and a lot of benefits also from the use of the technology. That’s something that we really need to highlight. Of course, there are cases in which the technology is not being used for the best purposes, but we also see that there are some benefits, and that’s something which basically we want, especially countries from the majority world, global south countries, to understand and have enough elements to determine how to better use and deploy these technologies in their society.
So that’s the kind of work we’re trying to do, building capacity by building evidence, by taking this evidence into those decision makers and trying to promote also research, local research in many regions around the world, and also to provide collaborations in that sense. So that’s what we’re doing, and hopefully that’s first a good glance of the kind of work ahead and some of the challenges in which we are working right now. Thank you.
Yu Ping Chan: Thank you, Armando. I think that landscape of knowing what the challenges are and really having the kind of information that we have to make sure that there are informed decisions about the use of AI is particularly important. Let me go to Aubra now and ask her how she sees the current landscape of leveraging AI for sustainable development.
Aubra Anthony: Thanks so much, Yuping, and thanks to UNDP broadly and my fellow panelists. It’s an important discussion. I’m looking forward to diving in with you all and those in the audience as well. So Yuping, you asked about the current landscape. The way I see it is both promising and fraught for a few different reasons. The first reason that I want to point out is that, with AI, I think the risk that we face in the context of the SDGs and inclusion has to do with digital divides that have been longstanding for many years, and with AI, I think we see the risk that those digital divides become more calcified and are linked to a few different things. In the context of tech broadly, but also specifically with AI, I think we see that power is becoming incredibly concentrated with just a few multinational players dominating the discourse, dominating the priority setting, and dominating the types of business models that end up getting pushed out. Often, these business models aren’t serving the populations that are most in question when we consider how we achieve the SDGs. The notion that bigger is better, a lot of these different themes and narratives end up not really well serving our priorities for the SDGs. I think that the concern is that the broad trend lines are just a continued entrenchment of that concentration, and it ends up that field-shaping decisions, really consequential decisions, continue to be made in ways that benefit those who are already benefiting the most from AI, both financially, but also, as Armando said, the information asymmetries, et cetera, and the resources that are needed to disrupt that are globally very scarce. Just as an example, Africa currently accounts for only 0.1% of the world’s computing capacity, and just 5% of the AI talent in Africa has access to the compute power it needs as a result. Beyond that, on the data front, even though something like 2,000 of the world’s 7,000 languages are spoken on the continent, those languages are considered under-resourced in the context of NLP, because there just isn’t or historically hasn’t been enough digital data on them to train LLMs. These different issues of inclusion crop up when you think about the way that concentration is affecting access globally, but there’s also opportunity there. If you flip it on its head, because of those constraints, I think we’ve seen some really amazing innovations emerge around building AI that’s more robust and less compute-intensive or less energy-intensive with the development of so-called smaller language models and things like this. Innovations that are better suited to the challenges at hand, the constraints at hand. Many firms have managed to do really groundbreaking work in light of those limitations, and in doing that, they offer a really fantastic alternative model to this brute-force, bigger-is-better ethos that’s been dominating the AI playing field. Firms like Lelapa AI, who have developed the small language model InkubaLM, which can serve hundreds of millions of low-resource-language speakers. There are promising signals, as well as the more pessimism-inducing ones, in line with the Slido results. I think we see both sides of the spectrum coming up. Just very quickly, I think there’s a couple of other points here that are worth highlighting in terms of the landscape of AI. I think there is also the sense of perceived urgency and a mentality of catch-up among many countries. If you don’t catch up, you’ll be left behind.
This is very much tied to the digital divide. It’s a growing concern, especially in the context of Africa, which is where we’ve been focusing a lot of our research over the last several months. Some projections show that GDP growth attributed to AI may be 10 times lower or more in Africa than the AI-fueled growth elsewhere. That really creates this sense of urgency. It’s not just keeping up with your neighbors, keeping up with the Joneses. It’s really often coming from this perception that AI can serve as an accelerant of much-needed economic development. Of course, that’s good, but broadly, it’s also, I think, and this is an important thing for us to discuss as a community, I think, again, the flip side of that is that broadly, it’s tough right now to create the space that’s needed to ensure that we’re seeing AI as just another tool in the toolbox, in the arsenal of tools that we have available, that we can apply for what are often very systemic, political, and socially-rooted issues, right? Reduction of poverty, gender inequality, climate change, right? The AI is one tool in the toolbox, and when you have this sense of urgency, that can both help drive the conversation of how we leverage those tools to suit our needs, but I think it also risks forcing us to adopt a solution that may not always match the problem, right? As Armando pointed out, ensuring that we have the evidence that helps us make the decision of when AI is the right tool for that issue. So I think that’s one of the current challenges that we face, and then I have a third point that I’ll talk about more later when we have a little bit more time, and it’s really just around funding, right? I think a lot of the issues that we see right now have to do with the disparities in funding with the diminishment of U.S. foreign assistance and with others’ foreign assistance profiles becoming smaller, I think that really creates an additional urgency around how we address some of these problems. But in the interest of time, I’ll leave it there, and we can talk a little bit more about that later.
Yu Ping Chan: Actually, I do think that question about funding will be an interesting one. It will also be good to hear from voices in the room as to what they feel about this particular moment, because I do think that this is that moment of urgency. We couldn’t agree more with this point about the importance of bringing in the global South and focusing on Africa. This is why, for instance, UNDP just launched last week in Rome the AI Hub for Sustainable Development, together with the Italians as part of their G7 Presidency, which is really focused on accelerating AI adoption in Africa, really focusing on African countries and empowering and strengthening local AI ecosystems in Africa. So on the point of Africa, let me turn to Oluwaseun, who is here with us in the room. In your work at the Co-Creation Hub, what do you see as the current landscape of leveraging AI for sustainable development and the challenges there?
Oluwaseun Adepoju: Thank you so much, and thank you to the panellists who have spoken before me. I think they’ve raised a lot of important points there, but practically, in the work that we do and what we observe every day, I want to point to four major things. Number one is that when we talk about AI for sustainable development and the excitement that comes with the potentials and the opportunities that AI brings to the society, we usually don’t also talk about the balance that we need to create for the unintended consequences of artificial intelligence. In the work that we do as the Pan-African Innovation Enabler, we’ve seen how, as the models are getting smarter and bigger and big data is becoming easy to process, there’s also the heavy consumption of energy. In some of the work we’ve done recently, we’ve been benchmarking what led to the transition from proof-of-work to proof-of-stake in blockchain, and that can actually be a future iteration for some of the unintended consequences of AI when it comes to energy as well. The number two trend that we are seeing is the transition of artificial intelligence from the stage of hype to the stage of hope. I believe every new technology goes through three stages: the stage of hype, where there’s a fear of missing out and everybody’s dropping investment, everybody’s talking about it, and I think Aubra mentioned the fact that there’s that pressure from countries and organisations not to miss out in the AI race. And when we use the word race for a new technology, it’s very obvious that everybody wants to take part and nobody wants to be a latecomer to the table when it comes to artificial intelligence. But we’ve seen that we’re transitioning gradually from hype to hope. We can see use cases that we can point to, and that is driving confidence in a lot of ways, and I also believe that the third stage, which is the stage of truth, we’re going to get there. But before we transition from hope to truth, we’re going to make a lot of mistakes, we’re going to have a lot of losses, and we’re also going to see a lot of success at the end of the day. But I think the stage that we are in now requires a lot of intentionality in the way we innovate with the technology and also using a multi-stakeholder approach to building AI solutions. We’ve seen that a lot of people are technologically excited about AI. We have a lot of work to do in education and we have a lot of work to do in AI, but some of the work we do in education requires that we bring at least 32 types of professionals into the room, especially when we’re building AI for EdTech, especially solutions that include children. To build a very useful EdTech solution for children in Africa, for example, we need a lot of people in the room. We need AI ethicists or technology ethicists in the room. You need safeguarding professionals. You need people who can look at the technology stacks in terms of digital security as well. So that multistakeholder approach, we’ve started seeing it especially in countries like Nigeria and Rwanda, where it’s no longer only about the technical people, and we’ve started seeing it elsewhere in the world. And then, finally, linguistic equity. For AI, we see people fall into two classifications, the technology optimists and the technology skeptics.
Some skeptics believe that linguistic equity in artificial intelligence is just talk and not concrete, so we have a lot of work to do in that area, and in the multistakeholder approach as well. Linguistic equity is very important, and we’ve started seeing some work in that area. Aubra mentioned people building small language models, and this is because we need linguistic equity to build stacks for some of the languages that are missing. And for us, we do a lot of benchmarking and testing of some of the large language models. So there’s a lot of work on evidence, and also to ensure that AI is, first of all, local, and that it’s building features that help people benefit from the technological dividends at the lowest level possible.
Yu Ping Chan: Again, this is where we, as UNDP, also see that priority in focusing on the areas that you have mentioned. So, for instance, when it comes to linguistic diversity, we’ve been working in countries such as Ghana around low-resource languages, looking towards digitalising those languages to create precisely what you said: the kind of inclusive models that can then serve the engine of AI. On the multi-stakeholder element, which, again, is something that unites us all at the IGF and that you have actually mentioned, I’m glad we can now turn to Anshul, who is part of Intel and the work that Intel is doing to build digital readiness and capacity. Anshul, a few reflections from you, perhaps, on what the needs are and what the current situation is with regards to AI. I’m sure you have a lot to say about AI for sustainable development.
Anshul Sonak: Thanks, Yuping. Good morning. So, calling from Silicon Valley, this is a very interesting conversation; it is 6 a.m. in the morning for me. (Yu Ping Chan: Thank you for being there.) I appreciate that. I’m really hearing all the comments by all my fellow panellists and 100 per cent agree with everything. As a rural development professional coming from a rural background, I think AI is a great opportunity. So, really appreciate all the comments made by fellow panellists. From my reflection standpoint, I look at it as two big strands emerging in this space. One is sustainable AI itself: how do we make AI more local, more clean, more green, more safe, more private, more fast, more cheap? This is all about the AI technology itself, so that’s one conversation where we need to be paying attention at all levels. The other one is, what can AI do for larger sustainability? This is probably more relevant to this audience. It’s not a technology conversation, it’s truly a developmental conversation. What does AI for sustainability truly mean? Research shows, I think there was a Nature study two years back, that AI can support more than 79 per cent of the SDG targets if done responsibly and appropriately. So, this gives a big opportunity and a big challenge, yes, that’s the true reflection. Opportunity-wise, it can be a potential big equaliser. It’s truly a new electricity, so everything can change once electricity comes into your home, right? Just from a personal productivity standpoint, if you really use AI appropriately, we just did research recently which shows that you can save roughly 15 hours of your time every week. You can save 15 hours of your own personal productivity, and then you can figure out how to use that time more responsibly for more value creation for yourself and your life, right? So, that’s the opportunity side. On the challenges side, of course, we heard the AI divide argument, right? Not just a technology divide, but there are much bigger asymmetries which are getting quite clear. There’s a gender divide, there’s a racial divide, there’s a colour divide, there’s a country divide. So, there are many issues which are emerging. If we want to be a truly inclusive and responsible society, we really need to have this conversation on how we bring it together and create some kind of equalisation for long-term sustainability. So, there are opportunities and challenges in AI for sustainability itself, and there has to be a separate conversation on how to make AI itself more sustainable. These are the two big reflections. I’m happy to be in this conversation.
Yu Ping Chan: Thank you so much, Anshul. We have now been joined by Abhishek from the Ministry of Electronics and Information Technology of the Government of India. I think it might be a good time to come back to that slide that we had at the start. This is where we actually polled both the online participants as well as those in the room to ask them what was top of mind for them in terms of the conversation here and what to reflect on. So, those areas in green were what came up as the areas that they would like to hear about when it comes to AI for sustainable development. So, I turn to you very quickly for a quick response on these and what you see as the state of the field when it comes to leveraging AI for sustainable development.
Participant: So, looking at the things that I see in the responses, of course, regulation, inclusion and capacity building are very, very important. But when we look at sustainability issues for AI, I would say that the energy use, especially with regard to renewable energy, is very, very important. When we build compute systems for AI applications and models, the amount of energy that is needed for powering these systems is very, very high. In fact, now we are going to Blackwell and B200 GPUs, which are more energy-intensive, but I was told that even an H200 consumes power equivalent to one U.S. home. So, when we are building applications, when we are trying to save time, as Anshul was mentioning, when we are trying to push the technology, when we are trying to push on productivity, when we are trying to push on benefits in various sectors, you also have to see where we balance the SDG objectives for renewable energies and for climate with regard to more efficient computing systems. At some level, we will have to see that the benefits of AI applications and models are not outweighed by the costs that come in because of high energy usage. So, this will require extensive research in building systems which are low energy consuming. This will involve more investments in renewable energies. This will involve limiting the use of AI for non-essential functions, things that humans can do better. Why do we need to rely on AI for doing the same things? We find people using it for very simple tasks, like writing poems, writing text or summarising text. We need to prioritise which are the tasks which AI should do, which are the tasks that AI need not do. How do we limit the energy consumption for powering AI systems? How do we prioritise usage of AI? How do we not ignore the challenges that climate change poses? How do we reduce our carbon footprint? These are issues that I think are as important as the issues related to AI regulation or inclusion and building capacities.
Yu Ping Chan: Thank you so much, Abhishek. So, we’re now going to turn back to the panel and ask all of you to reflect a little bit on what you’ve heard from fellow panellists, as well as what you see from the responses of the audience on the screen, and link that to a small tailored question based on your areas of expertise. I’ll start with Aubra, I think. I want to pick up on a point that you raised just now, which I think has been picked up by a couple of other colleagues as well, around that question of funding and governance. And I find it particularly interesting, because India, for instance, will be chairing the next AI Action Summit. So, on that question of governance and funding, how do you think funding models are shaping national AI ambitions when it comes to the global majority? How do you think that’s going to play out? And how can we as an international community address some of these challenges?
Aubra Anthony: Yeah, thanks, Yuping. And, yeah, a very auspicious time, really. I mentioned earlier some of the issues that I think we’re all tracking, right? US foreign assistance has been effectively shuttered, and many of the largest bilateral donor governments, NGOs and philanthropies have also moved to shift away from historical levels of foreign assistance. So, right now, it’s unfortunately a pretty precarious time for funding, not just AI applications, but the necessary components of AI ecosystems globally, right? The fundamental ecosystem strengthening that needs to be in place for AI to be leveraged in a responsible, sustainable way by locally impacted actors, as Seun mentioned, right? The enablers that are really key to having an ecosystem thrive: things like compute, interoperable privacy-preserving data systems, the talent that’s necessary, the capacity in-country to be able to design these systems, a lot of which may fall more into the realm of DPI than AI uniquely. But I think that’s absolutely part of this conversation. Even with those trends, I think there’s also a very strong, growing recognition that, given the scale and the scope of the need, supporting ecosystem development through these kinds of historical donor-led, siloed, uncoordinated investments really leads to a sum that’s far less than just the addition of its parts. So because of that, I think in large part, there’s been an increase in recent years in these more collaborative pooled funding efforts. We’ve seen this with the AI4D Funders Collaborative that was launched in 2023 at Bletchley Park, and Current AI, the public interest AI initiative that was launched earlier this year at the Paris AI Action Summit. Yuping, you mentioned the UNDP and Italian government’s launch of the AI Hub earlier this month or last month, which is very exciting. And then we’ll see what comes from the Indian Summit next year, right? There are a lot of different efforts that I think are trying to meet the moment and hopefully moving us in a better direction. So part of the landscape and part of this kind of broader conversation needs to recognize that these larger, more multilateral, more multi-stakeholder funding initiatives, which honestly can really better address the scale of the challenge financially, are emerging. But they’re also introducing new complexities and new challenges for those who are having to navigate that, right?
So whether that’s governments or practitioners, the people who are having to navigate AI ecosystem strengthening are having to navigate a lot of different trends and trend lines. And I’m going to say something that I think Seun mentioned earlier, and that we all hopefully at this point agree on. But if we don’t, I would love to get into more discussion here. The assertion that I want to make here is that for AI to really deliver for the SDGs and for sustainable development broadly, it cannot be something that is helicoptered in from afar, right? Its development and deployment have to involve communities impacted by AI. Its governance has to involve the communities who are impacted by AI. And the problem is that, critically, the funding paradigm historically has really not aligned with that. It’s really been more about AI that’s produced elsewhere reaching foreign shores. So I think the way that we see this shaping out is really going to have a fundamental effect on whether we can actually achieve this goal that I think we all share of better leveraging AI for sustainable development. And I think there’s been a really solid movement amongst the ICT community over the last several years with the principles for digital development, and kind of a recognition that the funding paradigm needs to shift, that it needs to be more localized, and that we need to better appreciate all of that. We at Carnegie have been doing a lot of research on this, trying to understand where the funding needs are really best matched by what’s on supply and where there are divergences, right? Where there are kind of market inefficiencies coming up. And so just very quickly, I’ll share at a high level some of the things that we found through the interviews and the consultations that we’ve been doing. Oh, sorry. Yes?
Yu Ping Chan: Yeah, I just want to give enough time to all the panelists and hopefully have some questions from the floor. So if you don’t mind, I think the Carnegie research actually sounds very interesting. And perhaps you could share some links in the chat for everybody to consult, if you don’t mind. Absolutely.
Aubra Anthony: So I was going to say we’re going to be publishing this soon, so hopefully everyone can see it. But I can give you the three key takeaways from the research that we’ve done and the funding discussions that we’ve had, and I think we can get into more detail in the chat. First, funding must be structured to be non-extractive, and I think this is a key thing that’s come up in other comments as well. Second, it must be capable of becoming self-sustaining at some point, even if it’s not at the outset. If donors are coming in to fund, there needs to be some path towards sustainability in the long term, whether that’s through engagement with the commercial sector or otherwise. And then lastly, and again, happy to share more links to this, we really need to ensure that the way funding is structured and supports ecosystem development plays to different stakeholders’ strengths, both in terms of what’s being brought to the table, but also how risk aversion factors in. I think those are really big issues that often go unappreciated, especially when you have a lot of different stakeholders coming together, which is what’s critical here. Again, in the interest of time, I’ll stop there. But thanks so much for that.
Yu Ping Chan: Thank you, Aubra. And actually, that, again, is a nice segue to Oluwaseun, as she mentioned, for instance, the role of ecosystems, the entrepreneurial ecosystems. That’s where, for instance, your work at the AI Hub working directly with tech innovators is relevant. How are you supporting that work? And what do you think, in response to some of the areas of challenge that have already been highlighted so far?
Oluwaseun Adepoju: Thank you so much. Quickly, when I mentioned earlier that there was hype around AI in the early days, I think if you look at the amount of money that has gone into supporting AI innovators, in Africa we’ve not gotten even 2% of that investment. Because in the early days, all the innovators were just using ChatGPT to do summarization, and they would put a label on it that they were building AI companies. But in recent times, we’ve seen that you can no longer do that. And one of the strategies we used is to say, you are not an AI company just by name: what are the core societal issues that you’re using AI to solve? And then recently, we’re supporting innovators integrating artificial intelligence and DPI, because I think we need to start connecting artificial intelligence to core societal issues. There’s also the situation of trying to fix what is not broken with artificial intelligence. So if, for example, a state in Nigeria has a social register that they use in distributing farm produce, seed inputs, or farm inputs to farmers, but the state has been struggling with actually identifying the trends of who they are giving the seeds to and what the trend looks like in terms of output as well, that’s an instance of using artificial intelligence to get output, rather than building something that we are all excited about when you present it or talk about it, but that, when you look at the practicalities of the application, is not really solving any issue, right? So for us, we are very intentional around AI solutions that are connected to core societal issues. When you need to traverse the Maslow hierarchy of needs, you will not waste the little investment we have on white-label AI projects. We want to see use cases. And with some of the monetary commitments that come to us as an organization to support innovators building AI solutions, we are not in a hurry, because there’s that pressure to quickly demonstrate that something works. We are not in a hurry. If we are unable to get 10 use cases, we are fine. If we find two or three that work and solve practical societal issues, we are good with that. But there’s also this idea that you need to commercialize quickly. I think it takes a while for us to demonstrate good use cases of artificial intelligence. And that’s why some of the work we do now, we base it on public value theory first of all, and do not support the startups on the commercialization side first. Because once commercialization pressure comes, you begin to compromise on the safety of what you’re building. So we usually give ourselves at least one year for you to actually prove that what you’re working on is good, it solves a problem, it’s safe, and it’s equitable in so many ways. And then you can start talking about your commercialization trail. So we usually work with patient capital now, because this innovation that needs patience must not be brought under the pressure of commercialization first of all. And I think that’s what some of the big techs are compromising on when it comes to some of the conversations around safety, around equity, and the good use of people’s data. There are so many tools that we’ve been experimenting with where we know, fundamentally, that it is ethically wrong in the way the data is scraped to be able to fit those models. And we must not repeat that. And that’s why for us, AI is local and must be built by local people.
And lastly, I’m very happy to see capacity building come up in the responses from people. For us, we are running a year-long AI for Business Masterclass, where every month we gather business people from across Africa, representing 33 countries, for conversations that help them really understand artificial intelligence. Because sometimes, when society doesn’t understand a technology, we launch a very complex socio-technical solution on people who don’t know how to use it in the first place. When people understand, they are able to contribute meaningfully and to use the technology meaningfully as well.
Yu Ping Chan: Thank you very much. I think that was a very comprehensive answer that touches on many aspects. I’d like to turn to Abhishek here to reflect on that, because India has been a leader in the use of digital technologies, AI, and so forth. Reflecting at a macro level on some of the challenges that Oluwaseun has just mentioned, how has India addressed them? And how do you balance the challenges he has spoken about so as to become a global leader in all these aspects?
Participant: See, when you look at AI, or at digital public infrastructure solutions, one thing to keep in mind is that these solutions are sustained, scaled up, and used by most people only if they are actually addressing a problem statement. If they are helping solve a problem, then there are ways and means to make them happen: to make them commercially viable, or to make public funding available if they result in larger social and economic benefits. For example, we had this problem of financial inclusion, the challenge of people not having access to banking services, financial services, credit services. When you build a digital public platform like the ID, it results in a lot of spin-off benefits, because all the leakages that were happening in public service programs were cut. People could take up livelihoods, and once livelihoods improve, the economic benefits reach many more people; it has led to microfinance schemes, credit schemes, and insurance schemes for farmers. On payments, we realized that many people were outside the organized financial system because they had no tools: they were not eligible for a credit card or a debit card and could not do digital transactions. With that came the Unified Payments Interface, UPI as we call it, and today we do almost 20 billion UPI transactions a month and account for almost 50% of digital transactions globally. We have seen benefits in the rural economy and in the urban economy alike. Similarly, when we look at AI-based applications, we again have to ask what problem statements we are solving. For example, in healthcare, one key challenge is diagnosing tuberculosis or diabetic retinopathy. We do have hospitals with X-ray machines that can do the imaging, and if there is an AI tool that can make a diagnosis as accurately as, or at times better than, a human radiologist, then that tool can take the place of the human radiologist in that hospital. If it offers that service, the health department or health ministry will be able to meet the cost of it. Similarly, in education there is a great need for personalised learning plans, for augmenting the availability of science and maths teachers in rural areas where those teachers are not there, for helping children with special needs get access to lesson plans and content that might be more useful for them, and for creating content in all Indian languages; we are a diverse country. There, again, AI-based applications can create a lot of value, public funding will be available for taking up such solutions, and people will be willing to pay for them. So, again, if it solves the problem statement, it becomes very useful. We have seen similar things in agriculture, where AI-based farmer advisories are helping farmers increase their incomes, reduce their input costs, reduce the water they use for irrigation, make timely interventions with fertilisers and pesticides, and access the right markets and the right prices.
When the farmers benefit economically, they will be willing to pay a cost for that. So if we design AI-based applications across sectors so that they meet social needs or deliver economic benefits, there will be provision for funding them and for building a commercial model around them, and those are the only solutions that will ultimately be sustained.
Yu Ping Chan: Thank you very much. And I really want to still try and give some opportunity for some questions from the floor. So I’m going to ask Armando and Anshul, if you don’t mind, to try and keep your responses to a minute. Very quickly, from your respective perspectives as academia and research, as well as the private sector, reflections on what you’ve heard so far and any thoughts that come to mind. Maybe we start with Armando.
Armando Guio Espanol: Sure. Well, yeah, in a minute. I was just going to say that we really need to focus more on implementation and on what is working or not, and especially to look at the efforts being made to implement several of the policies and ideas that have been shared here. We need to understand where the accelerators are on the implementation side, what is working and what is not, because we have to be very aware of how this process is taking place. That will allow us to be a little more efficient with the resources we have and a little more accurate in the kind of support we are receiving and the support we are giving. So we need to analyze the implementation side a bit more and start delivering more results on all fronts. I think that is very important right now. So that’s my minute. Thank you.
Yu Ping Chan: That was a great minute. Anshul?
Anshul Sonak: Yeah, my minute: this requires a balanced, responsible public-private partnership and great leadership. You have Abhishek sir sitting on the stage, and his ministry, for example, has prepared capacity-building tools for population-scale impact. Look at their example with AFRL for engaging the public. Look at their example on education, including what they did during COVID-19 and what they have been doing with companies like Intel and others on employability and entrepreneurship. So these four E’s: engagement of the public, entrepreneurship, education, and economic development through employability. Creating the right public-private partnership model is very important, and hence the civil society dialogue is very critical.
Yu Ping Chan: Great. Thank you. Okay, before I open the floor, very quickly, another Slido so we can all check in here. The Slido that Megan is now going to put on screen, which you can answer via the QR code, asks: having heard these conversations, and based on your own experiences, what do you think should be the number one priority for supporting or enabling an inclusive AI ecosystem? In one word or phrase. And you can’t repeat the answer that you gave for the first question, so let’s try not to have the same answers that we saw just now. Capacity building has come up again, so despite my entreaty to try another answer, this clearly is a priority. While those in the room are still using the QR code, I’m going to ask colleagues to start thinking of the question they would like to ask our distinguished panelists, and I will ask the panelists again to keep their answers short so we can hear from as many people as possible. I will start with one question that we have in the chat: how can IGF, WSIS, and other international stakeholders continue to support the adoption of AI and digital healthcare systems in Africa to achieve sustainable development, especially in this era of global digital and health transformation? Abhishek talked a little about the Indian experience, and health is particularly important here, so I’ll ask you if you have any thoughts on this question of digital health, especially in Africa.
Participant: Yeah, in fact, what I would like to say is that what we are looking to do, especially as part of the AI Impact Summit that we are hosting in February, is to share the DPI playbook through which we built DPI: we built a repository of DPI applications and made it available to the whole world, especially the countries of the Global South. Similarly, AI-based applications in healthcare, whether for cataract screening, diagnosing breast cancer, tuberculosis, diabetic retinopathy, or similar use cases, will be made available as part of an AI use case repository. And if any countries of the Global South, especially countries of Africa and the African Union, want to use those solutions, we will be more than happy to offer them for adoption in those countries, with the necessary fine-tuning on local data sets as may be needed.
Yu Ping Chan: And that really speaks to responsible AI, which is also right up there in the Slido responses. Any questions from the floor? Participants here who have heard the conversation and would like to ask our distinguished panelists a question or make a comment? I think I saw a gentleman over there. You can come up to the mic right here if you would like to say something. Yes, please. Thank you.
Audience: Is this mic on? A question for Mr. Abhishek. The Indian DPI model was very much rolled out as a partnership between the Indian government and the private sector, letting the private sector decide what they want to do with the technology, letting that kind of agile, quick mindset into government, and then using open source and interoperability to roll it out. In doing that mix of governance, do you see a change from the approach behind the various applications that came out building on top of Aadhaar and on top of UPI, the payments layer? Is that the same way you are going to think about building AI applications on top of your DPI?
Participant: It’s going to be similar. It will follow the same playbook that we had for the DPI. In the DPIs, as you rightly mentioned, we built the basic building blocks, like Aadhaar, UPI, or the data layer, and then various applications for various sectors were built on top of that. In AI, what we are doing is again providing the basic ingredients for building AI applications. That includes access to affordable compute for all those who need it: AI developers, start-ups, researchers, academicians, students. We have made available almost 35,000 GPUs at a very low cost of a dollar per GPU per hour for those who need them. Then we are enabling a data sets platform called AI Coach, with data sets from across domains and sectors, from both the public and the private sector, alongside the skills people need to build AI applications. We are also providing tools for bias mitigation, privacy preservation, and identifying deepfakes, so all the tools required to test your applications for conformance to the responsible AI principles are provided on a common platform. All the necessary ingredients that are common to those who are fine-tuning, doing inferencing, or building sector-wise applications will be provided as a common utility, similar to the DPI model. That is how we are going ahead with our AI development.
Yu Ping Chan: Thank you, Abhishek. Any other questions from participants here? Our in-person audience here at the IGF? Yes. Please introduce yourself as well.
Audience: Hi, this is Jasmine Khoo from Hong Kong. I heard about the concept of localizing AI applications, and also the idea that local people need to build their own AI systems. My question is: when you say local, do you mean the national level, the regional level, or even a grassroots community? I just want to clarify what is meant when someone says local people need to build their own AI system. And is there a way to measure this, or to help those local people build a system efficiently, with the right knowledge and a way to track its performance? Because when you are helping them, you have to put yourself in their shoes. So how do you do that efficiently? Thank you very much.
Yu Ping Chan: Maybe I’ll ask Oluwaseun to take the question, then I’ll ask Armando to reflect on the point about measurement, and then turn it over to the other panelists.
Oluwaseun Adepoju: Yes, thank you so much. When we say local, it could mean any of the three levels you mentioned. For example, when you are building agricultural stacks, I’ll use the example of plantain, which is a form of banana. There are different classifications of plantain, and we have tested a number of foreign models that do not cater for those classifications. So when we say local, that means people who come from the region where plantain originates, and who know the history of its classification, need to contribute to it. That is one example of what we mean by local. Also, in some of the work we do in northern Nigeria, when we work with farmers or vulnerable groups and want to use their data, for example for social welfare distribution from the government in regions where there is flooding or extreme poverty, we usually go back to them to ask for permission to use their data and to explain what we are using it for. That is why we use an SMS system where you get a prompt and say yes or no to us using your data in the process we are building. It does not mean we are not using their data for good, it is for their good, but we need to let them know what the data is being used for. In terms of contribution, local also does not mean that the local people are the technical people building the system, but they are aware and they contribute to the stack, to a knowledge session, or to a validation session of what you are building for them, so that at the end of the day people believe they co-created the AI solution they are using. And trust me, when you connect AI and DPI, you need the buy-in of the people. Before we build anything, in a six-month project, for example, we spend the first two or three months just engaging the people and co-creating the problem statement with them, and then they contribute feedback when we build the first iteration of the solution, so that at the end of the day the buy-in is almost automatic, because they have been part of the journey. We have seen situations where people invested millions of dollars in solutions, tech solutions, that people rejected. And not only technology: in Sudan, for example, the government built a well for people to use because of water issues, but afterwards discovered that the people still went to the local wells to fetch water. When asked why they were still walking that distance to fetch water, the women said that was the only time they had to catch up in the evenings, when they took the walk to where they usually fetch water, and yet potable water had been provided in those communities. So, to avoid the unintended consequences of the technology we are building, it is good for it to be local. That is the definition of local in the context of what I said.
Yu Ping Chan: Okay, thank you so much. Armando and then Aubra, very quickly, and then we are going to go back very quickly to the audience for a last round of one-liners. And I see two very quick questions here. Armando, in 30 seconds.
Armando Guio Espanol: Perfect, no, not 30 seconds. No, well, I was just going to say that definitely this is very contextual. Again, I think we can talk about regional or local or national infrastructure and technologies being provided. That’s something that depends. So for example, in Latin America, we’re having some cases of LLMs being developed for the whole region as a regional project. We will see some elements that will succeed and others that will, of course, not deliver as expected. So this is still very challenging. And I think the big element for me is the governance of these technologies and this kind of public infrastructure that we are building. Who is being involved and what’s the kind of participation, stakeholder participation, and who is taking part in the decisions being made about the functioning, the training, and the whole development of this kind of infrastructure, which is critical for many sectors. So there’s also some perhaps food for thought about the governance of this technology, of this infrastructure, and of course, of many of the related projects that we are going to be seeing with the use of this technology and that will have a sector and national, regional impact, of course. So just for us to consider at the same time, not only the technology, but also the governance side there.
Yu Ping Chan: Aubra, over to you. Thank you so much, Armando.
Aubra Anthony: Yeah, just very briefly, I would just plus-one everything that Oluwaseun mentioned around the need for engagement. When we talk about local contributions, it is not just about tech expertise that needs to be brought in; it is also about engagement with the community and user-centered design, human-centered design specifically. We have seen a range of examples where that has been done really effectively, and, importantly, where those opportunities for engagement have been turned into opportunities for capacity development. I don’t think we have time to go into it, but we can share a number of links to where that has been done well. I’m thinking specifically of Togo in the context of COVID: they initially worked with a university partner far, far away to develop a model for delivering cash benefits, and then turned that into a strategic goal for the government of building capacity in country from those learnings, moving toward more sovereign approaches to developing AI from that experience. So there is a spectrum of ways that kind of local focus can look across different contexts, but there is a rich body of examples to draw from.
Yu Ping Chan: I think we’re out of time, so I’m very sorry to those who wanted to ask questions here, but the speakers will be available for a little while afterwards. I’m going to ask Anshul and then Abhishek for a last ten-second wrap-up. At the same time, I want to give the audience a chance to revisit the question we asked at the start and see whether your opinions have changed. You will recall that the results came out at about 5.5, evenly balanced between optimism and pessimism. So, having heard this conversation, on a scale of one to ten, how optimistic are you that AI can accelerate inclusive sustainable development over the next five years? Then I’ll ask Anshul and Abhishek for their last comments. Anshul.
Anshul Sonak: Yeah, 10-second comment, bringing AI skills for everyone has to be a national priority.
Yu Ping Chan: And then Abhishek, your closing words.
Participant: See, what I would say is that I agree on bringing AI skills, and at the same time I would add bringing more global partnership to enable the sharing of applications, data sets, algorithms, and expertise. If we bring that about through summits, through conferences, through more global sharing, it will really help us move forward.
Yu Ping Chan: Thank you so much. And again, to our distinguished panelists, and to everybody in the room who contributed to what I thought was a very rich and engaging discussion, thank you all for being here today. Let’s thank our distinguished panelists with a quick round of applause, along with my online moderators and the support team from UNDP. My thanks again to Co-Creation Hub and the Carnegie Endowment for co-organizing this event with us. Have a good day, everybody, and we hope you enjoyed the session as much as we enjoyed organizing it. Thank you.
Armando Guio Espanol
Speech speed
195 words per minute
Speech length
1106 words
Speech time
338 seconds
Need for evidence-based decision making to reduce information asymmetries and understand real AI impact
Explanation
Armando argues that decision makers need access to good evidence about AI’s real features and impact to make informed decisions. He emphasizes the importance of gathering methodologies and analysis to understand what AI technology is actually doing, rather than relying on assumptions or hype.
Evidence
Example of measuring AI impact on future work with MIT colleagues, finding augmentation rather than job replacement. Development of epoch analysis methodology to analyze real AI impact in specific areas.
Major discussion point
Current Landscape and Challenges of AI for Sustainable Development
Topics
Development | Economic
Agreed with
– Oluwaseun Adepoju
Agreed on
Need for evidence-based decision making in AI development
Governance of AI infrastructure requires stakeholder participation in decision-making processes
Explanation
Armando emphasizes that the governance of AI technologies and public infrastructure being built is critical and must involve proper stakeholder participation. He argues that decisions about the functioning, training, and development of AI infrastructure should include various stakeholders given its impact on many sectors.
Evidence
Examples from Latin America where LLMs are being developed as regional projects, with mixed success expected
Major discussion point
Localization and Community Engagement in AI Development
Topics
Legal and regulatory | Development
Agreed with
– Aubra Anthony
– Oluwaseun Adepoju
Agreed on
Community engagement and stakeholder participation essential for AI governance
Focus needed on implementation and understanding what works in practice rather than just theory
Explanation
Armando argues that there needs to be more focus on implementation and understanding what is actually working or not working in practice. He emphasizes the need to see efforts being made in implementing policies and ideas, and to understand the accelerators of implementation.
Major discussion point
Implementation and Practical Applications
Topics
Development | Legal and regulatory
Aubra Anthony
Speech speed
177 words per minute
Speech length
2282 words
Speech time
770 seconds
AI development is both promising and fraught due to concentrated power with few multinational players and digital divides
Explanation
Aubra argues that while AI shows promise for SDGs, there’s a risk that longstanding digital divides will become more calcified. She points out that power is becoming concentrated with few multinational players who dominate discourse, priority setting, and business models that often don’t serve populations most relevant to achieving SDGs.
Evidence
Africa accounts for only 0.1% of world’s computing capacity, only 5% of AI talent in Africa has access to needed compute power, and 7,000 languages spoken on the continent are considered under-resourced for NLP training
Major discussion point
Current Landscape and Challenges of AI for Sustainable Development
Topics
Development | Economic
Disagreed with
– Participant
Disagreed on
Priority focus for AI development constraints
Historical donor-led funding approaches are insufficient; need for collaborative pooled funding efforts
Explanation
Aubra argues that traditional donor-led, siloed, uncoordinated investment leads to results that are far less than the sum of their parts. She advocates for more collaborative pooled funding efforts that can better address the scale and scope of needs for AI ecosystem development.
Evidence
Examples include AI4D Funders Collaborative launched in 2023, Public Interest AI initiative launched at Paris AI Action Summit, and UNDP-Italian government AI Hub launch
Major discussion point
Funding and Governance Models for AI Development
Topics
Development | Economic
AI development must be non-extractive, self-sustaining, and involve communities impacted by AI
Explanation
Aubra argues that for AI to deliver for SDGs, it cannot be helicoptered in from afar but must involve communities impacted by AI in its development, deployment, and governance. She emphasizes that funding must be structured to be non-extractive and capable of becoming self-sustaining.
Evidence
Reference to principles for digital development and recognition that funding paradigm needs to shift to be more localized
Major discussion point
Funding and Governance Models for AI Development
Topics
Development | Human rights principles
Agreed with
– Armando Guio Espanol
– Oluwaseun Adepoju
Agreed on
Community engagement and stakeholder participation essential for AI governance
User-centered design and community engagement can be turned into capacity development opportunities
Explanation
Aubra argues that local contributions to AI development should include not just technical expertise but also community engagement and human-centered design. She emphasizes that opportunities for engagement can be transformed into capacity development opportunities.
Evidence
Example of Togo during COVID, which worked with a university partner to develop cash benefit delivery model and then transitioned to building in-country capacity for more sovereign AI development approaches
Major discussion point
Localization and Community Engagement in AI Development
Topics
Development | Capacity development
Oluwaseun Adepoju
Speech speed
179 words per minute
Speech length
2049 words
Speech time
685 seconds
Transition from AI hype to hope stage, requiring intentional innovation and multi-stakeholder approaches
Explanation
Oluwaseun argues that AI is transitioning from a hype stage (where everyone was investing due to fear of missing out) to a hope stage where concrete use cases can be identified. He emphasizes that this transition requires intentional innovation and multi-stakeholder approaches, especially when building AI solutions for sectors like education that involve children.
Evidence
Example of building EdTech solutions for children in Africa requiring 32 types of professionals including AI ethicists, safeguarding professionals, and digital security experts. Examples from Nigeria and Rwanda showing multi-stakeholder approaches.
Major discussion point
Current Landscape and Challenges of AI for Sustainable Development
Topics
Development | Capacity development
Patient capital approach needed to avoid commercialization pressure that compromises safety and equity
Explanation
Oluwaseun argues that AI innovation needs patient capital and should not be rushed into commercialization. He emphasizes that commercialization pressure leads to compromises on safety, equity, and proper use of people’s data, and advocates for giving at least one year to prove solutions work safely before discussing commercialization.
Evidence
Examples of tools they’ve experimented with that are fundamentally ethically wrong in how data is used to fit models. Reference to big tech companies compromising on safety and equity due to commercialization pressure.
Major discussion point
Funding and Governance Models for AI Development
Topics
Economic | Human rights principles
Disagreed with
– Participant
Disagreed on
Approach to AI development speed and commercialization pressure
AI solutions must be local, built by local people, and address core societal issues rather than creating unnecessary complexity
Explanation
Oluwaseun argues that AI companies should focus on core societal issues rather than just using AI for the sake of it. He emphasizes connecting AI to practical problems and avoiding the trap of trying to fix what isn’t broken with AI technology.
Evidence
Example of supporting innovators integrating AI with DPI, example of Nigerian state using AI to analyze trends in seed distribution to farmers rather than building unnecessary new systems, example of plantain classification that foreign models cannot handle properly
Major discussion point
Localization and Community Engagement in AI Development
Topics
Development | Sustainable development
Agreed with
– Participant
Agreed on
AI solutions must address real problems to be sustainable and scalable
Local involvement means community participation in problem definition, validation, and co-creation of solutions
Explanation
Oluwaseun explains that ‘local’ means involving people from the region in contributing to AI solutions, getting permission for data use, and ensuring community participation in co-creating solutions. He emphasizes that people should believe they co-created the AI solution they are using to ensure buy-in.
Evidence
Examples include plantain classification requiring input from people who originated the crop, SMS permission system for data use in Northern Nigeria, example from Sudan where government-built wells were rejected because women preferred walking to traditional wells for social interaction
Major discussion point
Localization and Community Engagement in AI Development
Topics
Development | Human rights principles
Agreed with
– Armando Guio Espanol
– Aubra Anthony
Agreed on
Community engagement and stakeholder participation essential for AI governance
Anshul Sonak
Speech speed
225 words per minute
Speech length
593 words
Speech time
157 seconds
AI presents opportunities as a potential equalizer but faces challenges from various divides (gender, racial, country)
Explanation
Anshul argues that AI can be a potential big equalizer, like electricity, that can change everything when properly implemented. However, he acknowledges significant challenges including various divides (gender, racial, color, country) that create asymmetries that need to be addressed for truly inclusive and responsible society.
Evidence
Research showing AI can save 15 hours per week in personal productivity; research published in Nature indicating that AI could help address 79% of SDG targets when used responsibly
Major discussion point
Current Landscape and Challenges of AI for Sustainable Development
Topics
Development | Human rights principles
Bringing AI skills to everyone should be a national priority
Explanation
Anshul argues that developing AI skills for the entire population should be treated as a national priority. He emphasizes the importance of balanced, responsible public-private partnerships and strong leadership to achieve population-scale impact in AI capacity building.
Evidence
Reference to India’s ministry example with capacity-building tools, examples of engagement, education, entrepreneurship, and employability programs
Major discussion point
Capacity Building and Skills Development
Topics
Development | Capacity development
Agreed with
– Yu Ping Chan
Agreed on
Capacity building is fundamental priority for inclusive AI
Participant
Speech speed
229 words per minute
Speech length
1715 words
Speech time
447 seconds
Energy consumption of AI systems must be balanced against SDG objectives for renewable energy and climate
Explanation
The participant argues that the high energy consumption of AI systems, particularly advanced GPUs, must be balanced against sustainable development goals for renewable energy and climate. They emphasize that benefits of AI applications should not outweigh costs from high energy usage.
Evidence
Example that an H200 GPU consumes power equivalent to one U.S. home; mention of even more energy-intensive Blackwell B200 systems
Major discussion point
Current Landscape and Challenges of AI for Sustainable Development
Topics
Development | Sustainable development
Disagreed with
– Aubra Anthony
Disagreed on
Priority focus for AI development constraints
AI applications must address real problem statements to be commercially viable and publicly fundable
Explanation
The participant argues that AI solutions are sustained and scalable when they actually address real problems and help solve them. They emphasize that when solutions create social and economic benefits, funding becomes available through commercial viability or public investment.
Evidence
Examples from India including financial inclusion through digital ID leading to microfinance and credit schemes, UPI enabling 20 billion monthly transactions representing 50% of global digital transactions, AI applications in healthcare for tuberculosis and diabetic retinopathy diagnosis
Major discussion point
Digital Public Infrastructure and Scalable Solutions
Topics
Development | Economic
Agreed with
– Oluwaseun Adepoju
Agreed on
AI solutions must address real problems to be sustainable and scalable
Disagreed with
– Oluwaseun Adepoju
Disagreed on
Approach to AI development speed and commercialization pressure
India’s DPI model provides basic building blocks (compute access, datasets, testing tools) for AI development
Explanation
The participant explains that India is applying its successful DPI playbook to AI development by providing basic ingredients including affordable compute access, datasets platform, and testing tools. This approach makes common utilities available for AI developers, startups, researchers, and students.
Evidence
35,000 GPUs available at $1 per GPU per hour, AI Coach datasets platform with data from public and private sectors, tools for bias mitigation, privacy preservation, and deepfake identification
Major discussion point
Digital Public Infrastructure and Scalable Solutions
Topics
Infrastructure | Development
Similar playbook approach being applied to AI as was used for successful DPI implementations
Explanation
The participant explains that India is following the same successful approach used for DPI development, where basic building blocks are provided and various applications are built on top. The model involves public-private partnerships with government providing infrastructure and private sector building applications.
Evidence
Success of Aadhaar, UPI, and data layer implementations that enabled various sector applications to be built on top
Major discussion point
Digital Public Infrastructure and Scalable Solutions
Topics
Infrastructure | Economic
Public-private partnerships and global cooperation essential for sharing applications, datasets, and expertise
Explanation
The participant argues that global partnerships are essential for enabling sharing of AI applications, datasets, algorithms, and expertise. They emphasize that conferences and summits facilitate this sharing which helps in moving forward collectively.
Evidence
Reference to upcoming Impact Summit in February where AI healthcare applications will be made available through repository for Global South countries, especially Africa
Major discussion point
Funding and Governance Models for AI Development
Topics
Development | Economic
Need to prioritize AI usage for essential functions and limit energy consumption for non-essential tasks
Explanation
The participant argues that there should be prioritization of which tasks AI should perform versus tasks that humans can do better. They emphasize limiting AI use for non-essential functions like simple text writing or summarization to reduce energy consumption and carbon footprint.
Evidence
Examples of people using AI for very simple tasks like writing poems, writing text, or summarizing text that humans can do effectively
Major discussion point
Implementation and Practical Applications
Topics
Sustainable development | Development
Yu Ping Chan
Speech speed
196 words per minute
Speech length
3045 words
Speech time
930 seconds
Capacity building emerged as top priority from audience responses for inclusive AI ecosystems
Explanation
Yu Ping Chan notes that capacity building consistently emerged as a top priority in audience responses when asked about priorities for supporting inclusive AI ecosystems. This reflects the community’s recognition that building human capabilities is fundamental to inclusive AI development.
Evidence
Slido poll results showing capacity building as recurring top response from both online and in-person participants
Major discussion point
Capacity Building and Skills Development
Topics
Development | Capacity development
Agreed with
– Anshul Sonak
Agreed on
Capacity building is fundamental priority for inclusive AI
Audience
Speech speed
164 words per minute
Speech length
307 words
Speech time
111 seconds
Clarification needed on what ‘local’ means in AI development – whether national, regional, or grassroots community level
Explanation
An audience member from Hong Kong sought clarification on the concept of localizing AI applications, asking whether ‘local’ refers to national level, regional level, or grassroots community level. They also questioned how to measure and help local people efficiently build AI systems with proper knowledge and performance tracking.
Major discussion point
Localization and Community Engagement in AI Development
Topics
Development | Capacity development
Need for understanding how India’s public-private partnership model for DPI can be applied to AI development
Explanation
An audience member asked about how India’s successful DPI model, which involved partnership between government and private sector with agile approaches and open source interoperability, would be applied to building AI applications. They wanted to understand if the same governance approach would be used for AI as was used for applications built on top of Aadhaar and UPI.
Evidence
Reference to India’s DPI success with Aadhaar and UPI systems
Major discussion point
Digital Public Infrastructure and Scalable Solutions
Topics
Infrastructure | Economic
International stakeholders should support AI adoption in African digital healthcare systems
Explanation
An audience member asked how IGF, WSIS, and other international stakeholders can continue to support the adoption of AI and digital healthcare systems in Africa to achieve sustainable development. This question emphasized the importance of international cooperation in the era of digital transformation and global health challenges.
Major discussion point
Current Landscape and Challenges of AI for Sustainable Development
Topics
Development | Infrastructure
Agreements
Agreement points
Need for evidence-based decision making in AI development
Speakers
– Armando Guio Espanol
– Oluwaseun Adepoju
Arguments
Need for evidence-based decision making to reduce information asymmetries and understand real AI impact
AI solutions must be local, built by local people, and address core societal issues rather than creating unnecessary complexity
Summary
Both speakers emphasize the importance of understanding what AI actually does and focusing on real, measurable impacts rather than hype or theoretical benefits. They advocate for evidence-based approaches to AI development and implementation.
Topics
Development | Economic
Community engagement and stakeholder participation essential for AI governance
Speakers
– Armando Guio Espanol
– Aubra Anthony
– Oluwaseun Adepoju
Arguments
Governance of AI infrastructure requires stakeholder participation in decision-making processes
AI development must be non-extractive, self-sustaining, and involve communities impacted by AI
Local involvement means community participation in problem definition, validation, and co-creation of solutions
Summary
All three speakers agree that AI development and governance must involve meaningful participation from stakeholders and communities that will be impacted by the technology, rather than top-down approaches.
Topics
Development | Human rights principles
AI solutions must address real problems to be sustainable and scalable
Speakers
– Oluwaseun Adepoju
– Participant
Arguments
AI solutions must be local, built by local people, and address core societal issues rather than creating unnecessary complexity
AI applications must address real problem statements to be commercially viable and publicly fundable
Summary
Both speakers emphasize that AI solutions should focus on solving actual societal problems rather than applying AI for its own sake. Solutions that address real needs become commercially viable and attract sustainable funding.
Topics
Development | Economic
Capacity building is fundamental priority for inclusive AI
Speakers
– Anshul Sonak
– Yu Ping Chan
Arguments
Bringing AI skills to everyone should be a national priority
Capacity building emerged as top priority from audience responses for inclusive AI ecosystems
Summary
Both speakers recognize capacity building as a critical foundation for inclusive AI development, with Anshul advocating it as a national priority and Yu Ping noting it as the top audience priority.
Topics
Development | Capacity development
Similar viewpoints
Both speakers critique traditional funding approaches and advocate for alternative funding models that prioritize long-term sustainability and community needs over quick commercialization and donor-driven agendas.
Speakers
– Aubra Anthony
– Oluwaseun Adepoju
Arguments
Historical donor-led funding approaches are insufficient; need for collaborative pooled funding efforts
Patient capital approach needed to avoid commercialization pressure that compromises safety and equity
Topics
Development | Economic
Both speakers emphasize that community engagement in AI development should be meaningful and transformative, turning participation into capacity building opportunities rather than mere consultation.
Speakers
– Aubra Anthony
– Oluwaseun Adepoju
Arguments
User-centered design and community engagement can be turned into capacity development opportunities
Local involvement means community participation in problem definition, validation, and co-creation of solutions
Topics
Development | Capacity development
Both speakers advocate for systematic, large-scale approaches to AI development involving strong public-private partnerships and emphasizing the importance of building national capabilities and international cooperation.
Speakers
– Anshul Sonak
– Participant
Arguments
Bringing AI skills to everyone should be a national priority
Public-private partnerships and global cooperation essential for sharing applications, datasets, and expertise
Topics
Development | Economic
Unexpected consensus
Energy consumption and sustainability concerns in AI development
Speakers
– Participant
– Oluwaseun Adepoju
Arguments
Energy consumption of AI systems must be balanced against SDG objectives for renewable energy and climate
Transition from AI hype to hope stage, requiring intentional innovation and multi-stakeholder approaches
Explanation
While the discussion focused primarily on social and economic aspects of AI for development, there was unexpected consensus on the need to balance AI benefits with environmental sustainability concerns, showing awareness that AI development must consider its environmental footprint.
Topics
Development | Sustainable development
Need to prioritize AI applications and avoid unnecessary complexity
Speakers
– Participant
– Oluwaseun Adepoju
Arguments
Need to prioritize AI usage for essential functions and limit energy consumption for non-essential tasks
AI solutions must be local, built by local people, and address core societal issues rather than creating unnecessary complexity
Explanation
Both speakers unexpectedly converged on the idea that not all tasks need AI solutions, and there should be intentional prioritization of where AI is applied, challenging the common assumption that more AI adoption is always better.
Topics
Development | Sustainable development
Overall assessment
Summary
The speakers demonstrated strong consensus on several key principles: the need for evidence-based, community-engaged AI development; the importance of addressing real societal problems rather than applying AI for its own sake; the critical role of capacity building; and the need for alternative funding models that prioritize sustainability over quick commercialization. There was also unexpected agreement on environmental sustainability concerns and the need for selective AI application.
Consensus level
High level of consensus on fundamental principles of responsible AI development, with speakers from different sectors (academia, private sector, government, civil society) aligning on core values of community engagement, evidence-based approaches, and sustainable development. This strong consensus suggests a mature understanding of AI development challenges and points toward actionable collaborative approaches for AI for sustainable development initiatives.
Differences
Different viewpoints
Approach to AI development speed and commercialization pressure
Speakers
– Oluwaseun Adepoju
– Participant
Arguments
Patient capital approach needed to avoid commercialization pressure that compromises safety and equity
AI applications must address real problem statements to be commercially viable and publicly fundable
Summary
Oluwaseun advocates for patient capital and avoiding early commercialization pressure, giving at least one year to prove solutions work safely before discussing commercialization. The Indian government representative emphasizes that AI solutions need to be commercially viable or publicly fundable from the start by addressing real problems, suggesting a more immediate focus on practical implementation and sustainability.
Topics
Economic | Development | Human rights principles
Priority focus for AI development constraints
Speakers
– Participant
– Aubra Anthony
Arguments
Energy consumption of AI systems must be balanced against SDG objectives for renewable energy and climate
AI development is both promising and fraught due to concentrated power with few multinational players and digital divides
Summary
The Indian government representative prioritizes energy consumption and environmental sustainability as key constraints that must be addressed in AI development. Aubra focuses more on power concentration, digital divides, and access inequalities as the primary constraints, with less emphasis on environmental concerns.
Topics
Sustainable development | Development | Economic
Unexpected differences
Urgency vs. patience in AI implementation
Speakers
– Oluwaseun Adepoju
– Aubra Anthony
Arguments
Patient capital approach needed to avoid commercialization pressure that compromises safety and equity
AI development is both promising and fraught due to concentrated power with few multinational players and digital divides
Explanation
While both speakers advocate for inclusive AI development, they have different perspectives on timing. Oluwaseun explicitly argues for patience and taking time to ensure safety and equity, while Aubra emphasizes the urgency created by digital divides and the risk of being left behind. This creates tension between careful, patient development and the perceived need to act quickly to avoid further marginalization.
Topics
Development | Human rights principles | Economic
Overall assessment
Summary
The discussion shows relatively low levels of direct disagreement, with most speakers sharing common goals of inclusive, sustainable AI development. The main areas of disagreement center on implementation approaches, timing, and priority constraints rather than fundamental objectives.
Disagreement level
Low to moderate disagreement level. The speakers largely align on core principles but differ on tactical approaches, suggesting that while there is broad consensus on the vision for AI for sustainable development, there are legitimate debates about the best pathways to achieve these goals. This level of disagreement is constructive and reflects different expertise areas and regional perspectives rather than fundamental ideological divisions.
Partial agreements
The similar viewpoints noted under Agreements also stand as partial agreements: Aubra Anthony and Oluwaseun Adepoju converge on moving away from donor-driven funding toward patient, community-oriented models and on turning community engagement into capacity development, while Anshul Sonak and the government participant converge on public-private partnership, national AI skilling, and global sharing of applications, data sets, and expertise.
Takeaways
Key takeaways
AI for sustainable development requires evidence-based decision making to reduce information asymmetries and understand real impact rather than relying on hype
Current AI landscape is characterized by concentrated power among few multinational players, creating digital divides that risk excluding Global South countries
AI development must be localized and community-driven, involving affected populations in problem definition, validation, and co-creation of solutions
Funding models need to shift from traditional donor-led approaches to collaborative, pooled funding that is non-extractive and builds toward self-sustainability
Multi-stakeholder approaches are essential, requiring diverse expertise including AI ethicists, safeguarding professionals, and security experts beyond just technical teams
AI applications must address real societal problems to be viable and sustainable, rather than creating solutions for non-existent problems
Energy consumption of AI systems must be balanced against climate and renewable energy objectives
Digital Public Infrastructure (DPI) model can be successfully applied to AI development by providing basic building blocks like compute access, datasets, and testing tools
Capacity building and AI skills development should be national priorities for inclusive AI ecosystems
Resolutions and action items
India will make AI-based healthcare applications available through an AI use case repository for Global South countries, especially Africa, at the upcoming AI Impact Summit in February
India is providing access to 35,000 GPUs at low cost ($1 per GPU per hour) for AI developers, startups, researchers, and students
UNDP launched the AI Hub for Sustainable Development with Italy as part of G7 Presidency to accelerate AI adoption in Africa
Carnegie Endowment will publish research findings on funding needs and market inefficiencies in AI ecosystem development
Co-Creation Hub will continue patient capital approach, giving innovators at least one year to prove solutions work safely and equitably before commercialization pressure
Unresolved issues
How to effectively measure and track performance of locally-built AI systems across different contexts (national, regional, grassroots)
Specific mechanisms for ensuring linguistic equity in AI development for under-resourced languages
How to balance the urgency of AI adoption with the need for careful, community-engaged development processes
Concrete strategies for addressing the AI talent and compute capacity gaps in Africa (only 0.1% of world’s computing capacity, 5% of AI talent has needed access)
How to prioritize AI usage for essential vs. non-essential functions to manage energy consumption
Governance frameworks for AI infrastructure that ensure meaningful stakeholder participation in decision-making
Suggested compromises
Adopt a ‘patient capital’ approach that delays commercialization pressure for at least one year to ensure safety and equity in AI development
Use public value theory as foundation for AI projects before pursuing commercialization trails
Balance global AI ambitions with local capacity building by providing basic infrastructure (compute, data, tools) while allowing local innovation on top
Combine global cooperation for sharing applications and expertise with local ownership and governance of AI systems
Focus on augmentation rather than replacement of human capabilities, as evidence shows AI augmenting and improving jobs rather than replacing them
Thought provoking comments
Instead of replacement of jobs, for example, what we are seeing right now is augmentation, actually improvement in the work some workers around the world are developing, and actually AI being helpful in that sense… we need good evidence in order for that process to take place in a way in which really it’s going to be helpful for many countries.
Speaker
Armando Guio Espanol
Reason
This comment challenges the dominant narrative of AI as a job destroyer and reframes it as a tool for worker augmentation. It emphasizes the critical need for evidence-based decision making rather than fear-driven policies, which is particularly insightful given the tendency toward sensationalized AI discourse.
Impact
This comment set a foundational tone for the entire discussion by establishing the importance of evidence over hype. It influenced subsequent speakers to focus on practical applications and real-world outcomes rather than theoretical concerns, and established the theme of moving from ‘hype to hope’ that other panelists later built upon.
Africa currently accounts for only 0.1% of the world’s computing capacity, and just 5% of the AI talent in Africa has access to the compute power it needs… These different issues of inclusion crop up when you think about the way that concentration is affecting access globally, but there’s also opportunity there.
Speaker
Aubra Anthony
Reason
This comment provides stark quantitative evidence of the AI equity gap while simultaneously reframing constraints as innovation opportunities. It’s particularly insightful because it moves beyond general statements about digital divides to specific, actionable data points that illustrate the scale of the challenge.
Impact
This comment fundamentally shifted the discussion from abstract concepts of inclusion to concrete data about resource disparities. It prompted other speakers to focus on practical solutions and local innovation, and established the framework for discussing how constraints can drive innovation rather than simply being barriers to overcome.
We’ve seen that we’re transitioning gradually from hype to hope. We can see use cases that we can point to, that is driving confidence… but before we transition from hope to truth, we’re going to make a lot of mistakes, we’re going to have a lot of losses, and we’re also going to see a lot of success at the end of the day.
Speaker
Oluwaseun Adepoju
Reason
This three-stage framework (hype → hope → truth) provides a sophisticated analytical lens for understanding technology adoption cycles. It’s particularly insightful because it acknowledges both the inevitable failures and the learning process inherent in technological development, offering a realistic yet optimistic perspective.
Impact
This framework became a recurring reference point throughout the discussion, helping other panelists contextualize their observations about AI development. It encouraged a more nuanced conversation about expectations and timelines, moving away from binary success/failure thinking toward a more mature understanding of technology evolution.
AI is one tool in the toolbox, and when you have this sense of urgency, that can both help drive the conversation of how we leverage those tools to suit our needs, but I think it also risks forcing us to adopt a solution that may not always match the problem.
Speaker
Aubra Anthony
Reason
This comment addresses a critical cognitive bias in technology adoption – the tendency to apply new tools to problems they weren’t designed to solve simply because of their novelty or perceived importance. It’s insightful because it warns against ‘solution in search of a problem’ thinking while acknowledging the legitimate urgency around AI adoption.
Impact
This observation prompted several panelists to emphasize problem-first rather than technology-first approaches. It influenced the discussion toward more careful consideration of when AI is and isn’t appropriate, and reinforced the importance of evidence-based decision making that Armando had established earlier.
For us, AI is local and must be built by local people… when you connect AI and DPI, for example, you need the buy-in of the people. When forced to build anything in, like, a six-month project, we spent the first two or three months just engaging the people, co-creating the problem statement with them.
Speaker
Oluwaseun Adepoju
Reason
This comment redefines what ‘local’ means in AI development, moving beyond geographic considerations to include community engagement, co-creation, and cultural understanding. The practical example of spending half the project timeline on community engagement challenges conventional tech development timelines and priorities.
Impact
This comment significantly influenced the discussion’s focus on community engagement and participatory design. It prompted questions from the audience about what ‘local’ means and led to rich examples from other panelists about successful community-centered AI projects. It also reinforced the theme that sustainable AI development requires patience and genuine partnership rather than rapid deployment.
When we build compute systems for AI applications and models, the amount of energy that is needed for powering these systems is very, very high… We need to prioritise which are the tasks which AI should do, which are the tasks that AI need not do.
Speaker
Abhishek (Indian government representative)
Reason
This comment introduces a crucial sustainability constraint that challenges the ‘AI for everything’ mentality. It’s particularly insightful because it connects AI development directly to climate goals and forces a conversation about resource allocation and prioritization that is often overlooked in AI enthusiasm.
Impact
This comment brought environmental sustainability into sharp focus and prompted discussion about the trade-offs between AI benefits and environmental costs. It influenced the conversation toward more thoughtful consideration of when AI is truly necessary versus when it’s simply convenient, adding a critical dimension to the ‘appropriate technology’ discussion.
Overall assessment
These key comments collectively transformed what could have been a typical ‘AI is great/AI is dangerous’ discussion into a nuanced exploration of practical implementation challenges and opportunities. The comments established several crucial frameworks: the hype-hope-truth progression, the tool-in-toolbox perspective, the importance of evidence-based decision making, and the centrality of community engagement. Together, they shifted the conversation from abstract policy discussions toward concrete, actionable insights about how to develop AI responsibly and effectively. The comments also successfully balanced optimism about AI’s potential with realistic acknowledgment of constraints and challenges, creating space for both innovation and caution. Most importantly, they elevated voices and perspectives from the Global South, ensuring the discussion remained grounded in the realities faced by the communities most likely to be left behind in AI development.
Follow-up questions
How to measure the real impact of AI in specific areas like future of work and job augmentation vs replacement
Speaker
Armando Guio Espanol
Explanation
He mentioned that they are developing methodologies with MIT colleagues to analyze the real impact of AI, noting specifically that they are seeing augmentation rather than replacement of jobs, but emphasized the need for better evidence and analysis
How to address information asymmetries between different stakeholders in AI development and deployment
Speaker
Armando Guio Espanol
Explanation
He highlighted the need to reduce the significant information asymmetries that exist between stakeholders and to help decision makers understand which technologies are available and what their real features are
How to balance AI benefits with energy consumption and climate goals
Speaker
Abhishek (Indian government representative)
Explanation
He raised concerns about high energy consumption of AI systems and the need to balance productivity gains with renewable energy goals and carbon footprint reduction
How to prioritize which tasks should use AI versus tasks humans can do better
Speaker
Abhishek (Indian government representative)
Explanation
He questioned why AI is being used for simple tasks such as writing poems, which humans can do better, and suggested the need for a prioritization framework
Research on funding paradigms and market inefficiencies in AI ecosystem development
Speaker
Aubra Anthony
Explanation
She mentioned Carnegie is conducting research on where funding needs are best matched by supply and where market inefficiencies exist, with findings to be published soon
How to structure funding to be non-extractive and capable of becoming self-sustaining
Speaker
Aubra Anthony
Explanation
She identified this as a key finding from their research – funding must have a path towards sustainability whether through commercial sector engagement or otherwise
How to measure and track performance of locally-built AI systems efficiently
Speaker
Jasmine Khoo (audience member from Hong Kong)
Explanation
She asked how external supporters can help local people build AI systems efficiently, and how to measure and track the performance of those systems
What constitutes ‘local’ in AI development – national, regional, or grassroots community level
Speaker
Jasmine Khoo (audience member from Hong Kong)
Explanation
She sought clarification on the definition and scope of ‘local’ when discussing locally-built AI systems
How to focus more on implementation and understanding what is working or not working in AI for development
Speaker
Armando Guio Espanol
Explanation
He emphasized the need to analyze the implementation side, identify accelerators, and understand what is working in order to use resources more efficiently
How to transition AI applications from proof-of-work to proof-of-stake models to reduce energy consumption
Speaker
Oluwaseun Adepoju
Explanation
He pointed to blockchain's transition from proof-of-work to proof-of-stake as a benchmark and suggested a similar shift could be a future iteration for addressing AI's unintended energy consequences
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.