How Small AI Solutions Are Creating Big Social Change
20 Feb 2026 15:00h - 16:00h
Session at a glance
Summary
This panel discussion focused on “Small AI for Big Social Impact,” exploring how data-efficient, lightweight AI models can meaningfully serve underserved communities, particularly in the Global South. The panelists represented diverse organizations including Microsoft’s AI for Good Lab, Google Research Africa, the World Bank, and healthcare innovation initiatives, each working on AI solutions outside mainstream foundation model development.
The discussion emphasized that “small AI” encompasses models that are contextually relevant, cost-effective, and deployable on edge devices with limited connectivity. Panelists shared concrete examples of successful implementations, including Microsoft’s SPARO system for biodiversity monitoring using solar-powered cameras, Google’s weather nowcasting across Africa addressing the continent’s limited radar infrastructure, and healthcare AI tools that work offline in rural settings. A key theme was the importance of partnership-driven development, with Google Research Africa’s work on African language datasets exemplifying community-centered approaches to AI development.
Reliability emerged as a critical concern, with speakers noting that diagnostic accuracy for basic conditions averages only 50% across the eight countries studied, making even imperfect AI tools potentially beneficial. The World Bank’s perspective highlighted AI’s role in job creation rather than automation, emphasizing the need for digital public infrastructure and ecosystem development. Technical challenges discussed included working with limited, noisy data and ensuring models perform well in low-resource languages, which represent only a tiny fraction of internet training data despite comprising thousands of languages globally. The panelists concluded that small AI represents a practical pathway to achieving development goals and reducing global inequities through technology that respects local contexts and constraints.
Key points
Major Discussion Points:
– Definition and Philosophy of Small AI: The panelists explored various interpretations of “small AI,” emphasizing data-efficient, cost-effective models that can run on edge devices and serve underserved communities with respect for local context, rather than generic solutions designed for global north audiences.
– Technical Challenges and Solutions for Low-Resource Languages: Extensive discussion on developing AI models for underrepresented languages, including challenges like limited internet data (60% English), lack of benchmarks for 6,000+ languages, performance gaps, and safety alignment issues. Solutions included continual pre-training, instruction fine-tuning, and community-driven data collection initiatives.
– Healthcare Applications and Reliability Standards: Focus on AI applications in healthcare, particularly the critical importance of reliability and accuracy. Discussion covered the reality that current diagnostic accuracy averages only 50% across basic conditions in eight countries, and how small AI models can improve outcomes while maintaining human decision-making oversight.
– Rural and Community Impact: Emphasis on bringing AI benefits to rural communities through partnerships, co-creation with local stakeholders, and understanding local contexts. Examples included weather forecasting for rain-fed agriculture, biodiversity monitoring, and community health worker support systems.
– Economic Development and Job Creation: Discussion of AI’s role in transitioning emerging economies to advanced economies, with focus on job creation rather than automation, the need for digital public infrastructure, and the importance of local private sector ecosystems.
Overall Purpose:
The discussion aimed to showcase how “small AI” – efficient, contextually-appropriate AI models – can create meaningful social impact in underserved communities, particularly in the Global South. The panel sought to demonstrate alternatives to large foundation models by highlighting practical applications in healthcare, agriculture, education, and language preservation that work within resource constraints.
Overall Tone:
The discussion maintained a consistently optimistic and collaborative tone throughout. Panelists demonstrated mutual respect and built upon each other’s insights rather than competing. The tone was pragmatic yet hopeful, focusing on real-world solutions and acknowledging challenges while emphasizing the transformative potential of small AI. The atmosphere remained professional and solution-oriented, with panelists sharing concrete examples and technical details in an accessible manner for the diverse audience.
Speakers
Speakers from the provided list:
– Aisha Walcott-Bryant – Senior staff research scientist and head of Google Research Africa, focused on AI development addressing the continent’s most pressing challenges. Holds a PhD in electrical engineering and computer science and leadership roles in the IEEE Robotics and Automation Society.
– Alpan Rawal – Chief AI/ML scientist at Wadhwani AI, moderating the session.
– Announcer – Event announcer introducing the panel participants.
– Wassim Hamidouche – Principal research scientist at Microsoft’s AI for Good Lab, specializing in computer vision, NLP, and multimodal AI with a focus on low-resource languages.
– Audience – Multiple audience members asking questions during the Q&A session.
– Illango Patchamuthu – World Bank Group Director of Strategy and Operations in the Digital and AI Vice Presidency, also serving as Acting Director for Data and AI.
– Antoine Tesniere – French professor of medicine and entrepreneur, specializing in health innovation and crisis management. Anesthesiologist at the Georges Pompidou European Hospital, co-founded ILEMENTS, coordinated France’s national COVID response, and serves as director of Paris-Saint-Denis Campus since 2021.
– Zameer Brey – Works with the Gates Foundation on AI tools focused on reducing inequality, engaging with communities in various global contexts; his formal title is not specified in the transcript.
Additional speakers:
– Neha Butts – Associate Director, Human Resources (mentioned at the end for handing out mementos)
Full session report
This panel discussion on “Small AI for Big Social Impact” brought together leading experts from Microsoft’s AI for Good Lab, Google Research Africa, the World Bank, and healthcare innovation initiatives to explore how data-efficient, lightweight AI models can create meaningful change in underserved communities, particularly across the Global South. The conversation revealed how context-appropriate AI solutions can be more transformative than large foundation models for addressing global development challenges.
Redefining Small AI: Beyond Technical Specifications
The panel established that “small AI” encompasses far more than simply reduced model parameters or computational requirements. Alpan Rawal from Wadhwani AI defined the concept as models that are “data efficient, cheap to run, sit on the edge, and most importantly are meaningful to the communities that we serve.” The discussion evolved to embrace a broader philosophy: small AI represents any technology that meaningfully impacts individuals while respecting their local context.
Zameer Brey from the Gates Foundation provided a memorable conceptualization through his Delhi traffic analogy: “Would anyone, given the traffic here, design something so big as an aeroplane to try and get across the city? No. I think we would design something that’s a lot smaller, faster, sharper, cost-effective, and gets us from point A to point B.” This reframed AI development from a capability-first to a context-first approach.
The panelists emphasized that their work focuses on three critical questions: “Does this work for whom, where, and at what scale?” This represents a fundamental shift from evaluating models against benchmarks to assessing their performance in real-world settings like district hospitals in Telangana, small farms in Zambia, or rural classrooms in Senegal.
Technical Challenges and Innovative Solutions
The discussion revealed significant technical challenges facing small AI development, particularly for low-resource languages and contexts. Wassim Hamidouche from Microsoft’s AI for Good Lab provided detailed insights into the systematic barriers facing multilingual AI development. He noted that while internet data is over 60% English, followed by other high-resource languages, the world’s 7,000+ languages represent only a tiny fraction of training data. More critically, only 300 languages have even basic benchmarks for evaluation, leaving over 6,000 languages without proper assessment tools.
The technical solutions discussed were sophisticated yet practical. Hamidouche described their approach of selecting appropriate multilingual base models with suitable tokenizers, then employing continual pre-training and instruction fine-tuning to achieve significant performance gains. Their work with Inuktitut (northern Canada), Chichewa (Malawi), and Maori (New Zealand) demonstrated 12% performance improvements, closing the gap with English-language capabilities. Crucially, this work required deep community partnerships to access local data and cultural context.
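The base-model selection step described above — checking that a candidate multilingual model’s tokenizer represents the target language efficiently before investing in continual pre-training — is commonly screened with a “fertility” metric: the average number of subword tokens produced per word, where values far above those for high-resource languages signal poor coverage. The sketch below illustrates the idea only; it is not the lab’s actual pipeline, and `chunk_tokenize` is a toy stand-in for a real subword tokenizer.

```python
def fertility(tokenize, sentences):
    """Average number of subword tokens produced per whitespace-separated word.

    Values near 1-2 typically indicate the tokenizer covers the language well;
    much higher values mean words are shredded into many fragments, a common
    symptom when a low-resource language is absent from the tokenizer's
    training data.
    """
    n_tokens = sum(len(tokenize(s)) for s in sentences)
    n_words = sum(len(s.split()) for s in sentences)
    return n_tokens / n_words

def chunk_tokenize(text, k=3):
    """Toy stand-in for a subword tokenizer with no coverage of the target
    language: every word is split into fixed k-character pieces."""
    return [w[i:i + k] for w in text.split() for i in range(0, len(w), k)]

# Two made-up 14-character agglutinative-style words: 5 chunks each,
# so 10 tokens over 2 words.
sample = ["abcdefghijklmn opqrstuvwxyzab"]
print(fertility(chunk_tokenize, sample))  # → 5.0
```

In practice, the same comparison run with a real tokenizer (for example, one loaded from a multilingual checkpoint) over a small corpus in the target language gives a quick, cheap signal of whether that base model is a reasonable starting point.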
Aisha Walcott-Bryant from Google Research Africa emphasized the importance of innovation driven by constraints. She highlighted how Africa’s limited weather radar infrastructure necessitated innovative approaches to weather nowcasting. This constraint-driven innovation produced solutions that work across the continent, addressing the critical needs of rain-fed agriculture that supports 95% of African farming.
A major announcement came from Hamidouche regarding Microsoft’s Lingua Africa initiative, allocating $5.5 million to support data collection for African languages in partnership with the Gates Foundation and FCDO, led by the Masakhane African Languages Hub. This builds on their successful Lingua Europa programme and focuses on domain-specific, application-specific datasets rather than general data collection.
Healthcare Applications: Balancing Reliability and Pragmatism
The healthcare discussion revealed a tension between perfectionist and pragmatic approaches to AI deployment. Zameer Brey advocated for “verifiable AI” that shifts from black box to “glass box” models, providing transparent, auditable logic chains. Using an airline safety analogy, he argued that statistical accuracy may be insufficient: “If I said to you, the plane has a high probability of leaving Delhi and landing safely… and that probability was 90%, 95%, would you get on that plane?”
However, this perfectionist approach was challenged by compelling real-world data. Brey cited a World Bank study showing that diagnostic accuracy for five simple conditions averages only 50% across eight countries. This statistic reframed the AI-in-healthcare debate, suggesting that even imperfect AI systems could represent dramatic improvements over current standards.
Antoine Tesniere, an anesthesiologist at Georges Pompidou European Hospital who coordinated France’s national COVID response, provided a more pragmatic perspective. He emphasized that AI should provide information rather than make decisions in healthcare, noting that validated small AI models are already successfully deployed in radiology, dermatology, and ophthalmology, often working offline on simple computers. His key insight was that the combination of algorithmic intelligence with human expertise produces optimal outcomes.
The panel shared a powerful example of how small AI could prevent tragic outcomes. Brey described a community health worker scenario where a pregnant woman presented with symptoms that, if properly diagnosed as severe gestational proteinuric hypertension, could have prevented the loss of both mother and baby. A small model running on a low-cost smartphone with patchy internet connectivity could have provided the decision support needed.
Economic Development and Partnership-Driven Approaches
Illango Patchamuthu from the World Bank provided crucial insights into AI’s role in economic development, emphasizing that their “North Star is job creation” rather than automation. The World Bank’s approach focuses on supporting countries where talent, data, electricity, and compute power may be limited, making small AI applications particularly relevant.
The World Bank has launched a publicly accessible Small AI Use Case Repository containing 100 cases across health, education, agriculture, and job creation, demonstrating practical applications that can be replicated across different contexts. Patchamuthu mentioned ongoing work in Uttar Pradesh in partnership with Google and similar initiatives in Maharashtra.
Walcott-Bryant emphasized the importance of approaching communities “with humility and relating,” noting her dual identity as both scientist and mother. This human-centered approach shifts the paradigm from building solutions “for them” to building “with us.” Google Research Africa’s “Walo” project (meaning “to speak” in Senegalese Wolof) involved partners across the continent in data collection for 27 African languages, ensuring cultural context and local ownership.
Microsoft’s SPARO (Solar Powered Acoustic and Remote Recording Observation) system demonstrated how small AI can address global challenges through locally deployable solutions. These solar-powered systems with embedded AI models enable biodiversity monitoring in remote locations across Colombia, Peru, the United States, and Tanzania, transmitting observations via satellite where traditional infrastructure is unavailable.
Addressing Audience Questions and Platform Collaboration
During the audience Q&A, questions addressed practical deployment challenges, including work in Rajasthan’s rural populations and agricultural domains. When questioned about competition between major technology platforms, the panelists demonstrated remarkable consensus around collaborative rather than zero-sum approaches.
Walcott-Bryant emphasized that with so many people on the planet and so many unique problems to solve, competition should focus on relevance to end users rather than market dominance. Patchamuthu reinforced this perspective by noting that three billion people remain offline and 3.5 billion lack access to healthcare, providing ample scope for all platforms to contribute meaningfully.
This collaborative spirit was evident in the open-source approaches described throughout the panel. The emphasis on open weights and accessible models reflects a commitment to democratizing AI capabilities rather than concentrating them within large technology companies.
Ongoing Challenges and Future Directions
The panel identified several critical challenges requiring ongoing attention. Device affordability remains a significant barrier, with computing devices still expensive for the bottom 40% of populations who could benefit most from AI applications. The technical challenge of achieving truly verifiable AI with transparent logic chains remains largely unsolved.
Scaling successful pilot projects to larger populations while maintaining effectiveness and community trust presents ongoing challenges. The World Bank’s systematic approach through their use case repository and KPI development represents progress, but sustainable scaling mechanisms require continued refinement.
The panel also highlighted the need for better integration of speech technologies with text-based language models, particularly for low-resource languages where oral traditions may be stronger than written ones. This multimodal approach could unlock AI capabilities for communities currently underserved by text-only systems.
Conclusion: Context-Appropriate AI for Global Impact
The panel discussion presented a compelling argument for rethinking AI development priorities. Rather than viewing small AI as a compromise, the panelists presented it as a strategic choice that prioritizes practical impact over technological impressiveness. The key insight was that context-appropriate AI solutions may be more transformative than large foundation models for addressing global development challenges.
This requires shifting from capability-first to problem-first thinking, from service delivery to co-creation models, and from perfectionist to pragmatic deployment standards that consider real-world baselines. The panelists demonstrated that small AI is already creating meaningful impact through weather forecasting for African farmers, biodiversity monitoring in remote locations, healthcare decision support for community workers, and language preservation for underrepresented communities.
Success depends on deep community engagement, cultural sensitivity, and long-term commitment to supporting local ecosystems rather than imposing external solutions. The panel’s collaborative spirit and focus on partnership-driven development offers a pathway toward more equitable and inclusive technological development that serves the billions who currently lack access to basic services and the thousands of languages that remain underrepresented in digital systems.
Session transcript
Please, I would request you to take your seat on the panel. Wassim Hamidouche, who’s a principal research scientist at Microsoft’s AI for Good Lab, specializing in computer vision, NLP, and multimodal AI with a focus on low-resource languages. Requesting you to please take your seat. Illango, who’s a World Bank Group Director of Strategy and Operations in the Digital and AI Vice Presidency and also serving as Acting Director for Data and AI. Requesting you to please join the panel. Thank you. Aisha Walcott, who is a senior staff research scientist and head of Google Research Africa, focused on AI development, addressing the continent’s most pressing challenges. She holds a PhD in electrical engineering and computer science and holds leadership roles in the IEEE Robotics and Automation Society.
Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneur, specializing in health innovation and crisis management and an anesthesiologist at the Georges Pompidou European Hospital. He co-founded ILEMENTS, coordinated France’s national COVID response, and since 2021 has served as director of Paris-Saint-Denis Campus. Thank you so much for being here. Requesting you to join the panel. And Dr. Alpan Rawal, who’s chief AI/ML scientist at Wadhwani AI, will be moderating today’s session. Alpan, requesting you. Thank you, handing it over to you.
Yes, thank you everyone for coming. Requesting those at the back, if you could close the door so that we can reduce the noise a little bit. It’s full? OK, great. Well, if you could just calm down a bit and settle down. Thank you. Welcome to all our esteemed panelists and our panel. The topic of our panel, as you know, is small AI for big social impact. I’d like to deeply thank our panelists for making it all the way for the summit and making it to this panel. So what do we mean by small AI? I think different people have different definitions, and we are sort of open to how each panelist chooses to interpret small AI. When we at Wadhwani AI brainstormed about this panel, we thought it would reflect in some ways the ethos of our own work: making models that are data efficient, that are cheap to run, that sit on the edge, and most importantly are meaningful to the communities that we serve, which are underserved communities, mostly in rural India.
But it’s increasingly clear that small AI means a lot more. And I see a lot of people talking about small AI in the summit. More generally, I think it encapsulates any AI that meaningfully impacts individuals while taking into account and respecting their very local context, rather than providing generic outputs. So anything like that could rightly be called small AI, and we’re going to hear from our panelists about their experiences with AI models like that. So with that small introduction, let’s now, without further ado, speak to our panelists. So, can you hear me at the back? Yeah, okay. So we can start. I have a common question for every panelist. Each of you represents a different and important aspect of AI work that’s happening outside of the mainstream excitement that focuses on large foundation models for a primarily global north audience.
Can you tell us briefly about your organization’s work and perhaps your thoughts on non-foundation AI models in general? Maybe we can start with Zameer.
Thanks, Alpan. Thank you. We really see the opportunity for AI to reduce inequality, and our starting point with AI tools is really: does this work, for whom, where, and at what scale? So those are some of the departing points for us. So really looking beyond the model against a benchmark, but how is this going to work in a district hospital in Telangana, or for a smallholder farmer in Zambia, or a classroom in rural Senegal? And, you know, part of what we’ve in some ways got caught up with is the performance of the model on its own. And we’ve forgotten how does this fit into the lives and the context that it operates in.
And in doing so, part of what we need to think about is who’s designing the model and what’s it designed for? And I was thinking about the traffic that we’ve been experiencing the last few days in Delhi. And I thought to myself, would anyone, given the traffic here, design something so big as an aeroplane to try and get across the city? No. Yeah. I think we would design something that’s a lot smaller, faster, sharper, cost-effective, and gets us from point A to point B.
I think that’s a great analogy. Aisha, can you tell us a bit about your work at Google Africa?
Aisha Walcott-Bryant:
Yes. Thank you. So I lead our Google Research Africa team. We have two sites, one in Ghana and one in Kenya, so representing East and West. But the work that we do is essentially from Africa, for Africa and the world. Much of our work is scaling from the uniqueness of the continent. Turns out that a lot of the challenges are similar, definitely across the global south and generally worldwide. Our work, so kind of leaning into the next part of your question and thinking about how we approach this type of work, it’s very much problem first. I always say, if there’s a red button that you can press and it’s a one or a zero, just build the red button.
We don’t need to bring AI or technology. So it’s really important to be very thoughtful about the type of problem. Coming from Google Research, we want to leverage our compute, our AI expertise and capabilities, and then our mandate, which is societal impact at scale, to think about the types of problems that we work on. I’ll give two good examples of those problems. One is around weather nowcasting, which we launched last year across the continent of Africa. So to have much more accurate weather forecasts is absolutely essential, given that much of the continent, as well as India, relies on agriculture for labor. And we are rain-fed primarily, 95% in Africa.
So having much more accurate weather forecasts is essential in that case. And at the same time, on the technical challenges side, we know in North America and in Europe, there are about 300 or so weather radar stations. And in Africa, there are only 37, I believe. You know you can fit both North America and Europe in Africa. So when you think about that, you have to innovate. And so those constraints of the environment that you were alluding to in the intro are part of the motivation of having a research team in the continent. And so that was one way that we innovated and made solutions that were available to the continent. And then the other one is a complementary side, which is working with the ecosystem, working with partners in Africa, including Makerere University and Digital Umuganda, around African languages. And we just released a dataset of, now, 27 voice languages, given that Africa has 2,000 or so languages. This is the start. Most importantly, it’s partnership-led and driven, and this is because it’s voice; it is about accessibility and about reaching those rural villages as well. So, and enabling the ecosystem to build the solutions from there, whether they’re smaller models or larger models. So making that type of data open and available is another way that we are leveraging this notion of smaller AI.
Thank you. Thank you. Great, great insights. Zamir, can you tell us about your work at Microsoft?
Yeah. Thank you. Sorry, yeah, Wassim. Thank you for the invitation, and it’s a great pleasure to be here today. So, first, what is AI for Good? The AI for Good Lab is the philanthropic research lab of Microsoft. We are employing advanced AI technology to solve real-world problems with real societal impact. This is very important. And how our team and the researchers work: we closely collaborate with NGOs, governments, nonprofit organizations, and local communities around the world. And together we are building AI solutions in multiple domains. We are interested in agriculture, food security, healthcare, education, culture, and so on. So this is about the AI for Good Lab at Microsoft. Now I am a scientist, so I would like to give you two concrete examples where we use small AI, and also they are two global solutions to tackle global challenges.
So they are valid for both the global north and the global south. So the first project, in biodiversity, is called SPARO. SPARO stands for Solar Powered Acoustic and Remote recording Observation. It is an AI-powered open-source solution designed to track and monitor biodiversity in the most remote and hard-to-reach regions in the world. So SPARO is camera traps with AI models that enable detection of animal species, and these observations are then transmitted using wireless connectivity and satellite where we don’t have infrastructure to transmit this information. And this SPARO solution is already deployed around the world in many countries; I can cite Colombia, Peru, the United States, Tanzania. And it really enables practitioners and researchers to understand species present in the ecosystem
at scale, supporting more timely and informed decisions to protect biodiversity. The second project focuses on wildfires. As you know, wildfires have become a real global threat, devastating and impacting lives, communities, ecosystems, and even economies. And around the world, wildfires are increasing in both frequency and intensity, making early detection and rapid response more critical than ever. So through Alert California, we are addressing this challenge using AI. So what is Alert California? Alert California is a network of 1,300 cameras operating 24/7. And we are developing AI tools that run on top of this infrastructure, enabling early fire detection, and this will enable emergency responders to act quickly and stop fires before they spread.
So SPARO and Alert California, as I said, are two global solutions for global problems that can be deployed anywhere around the world, and we are providing them open source so that anyone can embrace them and deploy them. Thank you.
Thank you. Antoine, I think you’re the only member of this group that doesn’t work in the global south. So if you could tell us a bit about the work in Paris and how you’re using maybe non-foundation models. Right. Well,
thank you, Alpan, for this invitation, and I’m happy to be the outsider of the panel. I’m working actually in healthcare, and I’m leading a new kind of innovation ecosystem for healthcare where we gather researchers, doctors, patients, startups, and industrials together, as well as institutions. So the idea is to really create a whole community of innovation and engage into the use of data and artificial intelligence. And healthcare is probably one of the fields where AI has a long-standing history. The world has discovered AI with the rise of Gen AI, but there were a number of small AI models designed for a long time, and this is why we already have a number of validated tools that we can use in healthcare, answering the question not only does it work, but is it reliable, which is very important for our patients. So before we have the proof of efficiency of LLMs in the medical field, which is not fully clear yet, we use machine learning tools, which are actually small AI models in very specific areas. What actually works really nicely today is image analysis or pattern analysis. So you can think of radiology, for example: chest X-rays or fractures in the emergency room are fully analyzed by small AI models and small AI tools that are easily deployable on small computers. You can also think of picture analysis in dermatology, in ophthalmology, etc. So these are very concrete examples of already validated small AI models in healthcare that are used on a daily basis, at least in France and Europe. We’ll get back in the discussions on how we need data efficiency on this topic, but it’s really important to understand that these models are already deployable, and some of them can actually work offline, which is really important in some environments. Thank you.
Illango, from the World Bank perspective, what is your view on these types of models?
Thank you very much for the opportunity to be here. Coming right at the end, I don’t know what new things I can say, but to basically reinforce the messages that have been said. For us at the World Bank, we see AI as a means to an end, and very much our AI agenda is shaped by the mission of the World Bank, which is to reduce poverty and grow prosperity in the world. And when you take that lens and you apply it, we have to keep it simple. Not all countries have the ability to have the compute power, the electricity, the talent, and the data. So therefore, taking tested small AI applications to scale and replicating them around the world is something that we see as a mission priority.
So in that respect, what Wadhwani AI is doing here is pioneering. And what I’ve heard this morning from Dr. Sunil Wadhwani himself about what you’re doing in TB, what you’re doing in out-of-school children, this is all tremendous, and it has great potential for application. And often what happens is we focus a lot on pilots, and then, once the sheen wears off, people forget the pilot. I think what we need to do, and what we are doing at the World Bank, is to see those pilots, whether it’s in health, education, or agriculture, in the small AI setting: it works in rural communities, offline, where data is not that rich, where talent is not readily available; it does not require a lot of electricity, it’s plug and play. Then how do we get the right KPIs, which then allows us to go from a village or a community of 50 villages to a larger population center, and to see how best we can help them, say in agriculture, to improve productivity with better inputs. We are now working in UP in partnership with Google, and we are doing the same thing in Maharashtra.
Household incomes improve, the inputs get better, and we see how they can access the markets and agriculture credit. Similarly in health and education, there are great practices that we are seeing in Africa, in Ghana, in Kenya. So how do we take these models and replicate them? So I’d like to assure everybody, and this is something people think: small AI is inferior. No. It’s not second class. No. Small AI can solve problems. It’s a means to an end. And if it can actually fast-track development outcomes, you know, we’ve known the problems with the Millennium Development Goals. We’ve known the problems with the Sustainable Development Goals, and many countries are lagging behind. And this is an opportunity where this development technology, if it can be put to use in the right context in the right way, I think we can achieve faster
Thank you. That’s really interesting. Aisha, I’m going to come back to you and ask more specifically: how does the work that Google Research does in Africa impact rural communities? How does one bring the benefits of technologies like these big foundation models to devices that may only have patchy Internet and very little data?
Aisha Walcott-Bryant:
Thanks, that was a loaded question; two parts there. So first and foremost, in general, it’s about approaching these challenges with humility and relating. I always start with: I’m a scientist, but I’m also a mother, right? And that’s a thread that I’ve been following for a long time, a thread that binds so many of us. When you think of that, you realize that a lot of the solutions that we’re building are not for “them”, they’re for us. I’m using the same health systems that you all are developing interesting tools and models for, and we have many of the challenges around weather as well. So I think the first thing is to have that base human layer as we think about our work, and to connect with those communities, whether they’re rural or urban, right?
A lot of the work that we do is looking at these large populations. If you think about agriculture, for example, which holds a large part of the labor force, there are many different ways that people are part of that value chain, whether they’re actually doing the growing, providing the inputs, or making the decisions and carrying the risks along the way. So that relationship of getting out in the community is a very important part of the work that we do: to connect with those communities and then really think about, coming home as Google Research, where is our unique value proposition? We’re not necessarily going to solve this whole problem alone; usually it requires behavior change, policy, and many pieces of the puzzle. How do we best fit our role? We do this in co-creation with partnerships, so that’s the second layer of fabric in how we reach these rural communities. And then on the other side…
Do you have an example of that?
Aisha Walcott-Bryant:
Oh yeah, absolutely. I’ll give two examples. For instance, the languages work that I was talking about, Waxal. Waxal is a word, a Senegalese word from Wolof, that means to speak. And the way we wanted to create this dataset was with the community.
So if you have partners across the continent, let them be a part of the process of collecting the data and of understanding their language and their local context, to get these high-quality datasets. I think being partnership-driven and knowing our role and our place was what made that very successful. And then the last point, on the second question you threw in there: our open models, our open-weight models, Gemma, are made for a lot of these solutions that are closer to the edge. So we have nano models that can run on your laptop, on tablets, and so forth,
Do you actually use them in Africa?
Aisha Walcott-Bryant:
Oh, yes. Yes, yes, yes, yes.
Great. Thank you. The next question is for you. Much of your work at the foundation is about reducing inequities through promoting safe and responsible use of AI. So what role, in your view, do small and custom AI models have to play in this? And if you can provide examples, that would be great.
Sure, Alpan. I do want to touch a little on the issue of reliability, because my colleague over here spoke about it, and I think it’s a critical issue. I’m sorry if I repeat this example from one of my previous panels, and since I’m going to be on a plane later it’s probably a bad idea, but I’ll ask the audience anyway. If I said to you, this plane has a high probability of leaving Delhi and landing safely wherever it’s going, and that probability was 90%, would you get on that flight? 95%? 99%? No? No. I did have one guy who thought about it, and he said no.
And the point there is that I do think we’ve got to work towards models that have zero error, right? So much so that what we are trying to wrap our heads around is: is there a concept of verifiable AI, where it shifts the narrative from a black box to a glass box? It actually exposes the logic. So for a particular set of inputs, you can follow the logic chain, and it gives you a set of outputs that you can really track. You can audit it. You can see that it’s repeatable. And you can prevent some of the fundamental errors that we are starting to see. And I want to go back to a very real example, Alpan, because when I think about small models, I’m coming back to the user, the community health worker who tried to help a mother.
One of our grantees shared this very personal story of a first-time mother who presented at six months pregnant, saying her hands and her feet had started to swell. The community health worker looked and said, you’re pregnant, this is normal. Four weeks later, she started having a headache and blurred vision. I think colleagues will know where the story goes. Unfortunately, that mother had severe gestational proteinuric hypertension. It was missed, and the mother and the baby didn’t make it. But what inspired our grantee was this: if the community health worker had had a small model that worked on her device, which was a low-cost smartphone with patchy internet, but built small enough to help her make good decisions at that point of care.
Today, we would be sitting with a very different outcome. And so I think small models present us those opportunities.
Very interesting, very good points. Wasim, you spoke in a general sense about the research done at the AI for Good Lab at Microsoft. Are there specific examples from your work where you see the benefits of building domain-specific models to realize impact? And are there research lessons that we can take away from this? I think it would be good for the audience and for us to understand what research directions for the future can come out of this work.
models from 4 billion to 15 billion parameters. And once we select the best LLM for one target language, we apply all these recipes to boost the performance for these low-resource languages. But I wanted to get back to the challenges we are facing for these low-resource languages. When we train these foundation models, we train them on internet data, and more than 60% of internet data is English, followed by some high-resource languages like French, Mandarin, Portuguese, and so on. So these low-resource languages, even though there are more than 7,000 of them, represent only a tiny portion of internet data. This is the first challenge. The second one is the benchmarks.
When we build LLMs, we evaluate their performance on benchmarks. And we have seen that there is at least one benchmark for only about 300 languages; the remaining 6,000 to 7,000 languages don’t have even one benchmark. Even among these 300, most benchmarks are just translations from English into the low-resource language; they have nothing to do with the culture and the context of these languages. The third challenge is the performance gap. There is, of course, a performance gap for these LLMs, even the frontier models, between high-resource and low-resource languages.
The fourth one is safety. When we build LLMs, we usually do safety alignment with reinforcement learning, but this alignment is mainly done in English and some high-resource languages. When we build LLMs for low-resource languages, this raises other safety issues: we have to evaluate these LLMs for safety in the target language, and do the alignment, the reinforcement learning, in the target language as well. In this paper, we addressed some of these issues. We have been targeting three pilot languages: Inuktitut, an indigenous language spoken in the north of Canada; Chichewa, spoken in Malawi in Africa; and Maori in New Zealand. Why did we select these three languages? Because we have access to local communities to help us get data. So we gathered data from these communities, then used continual pre-training and instruction fine-tuning to boost the performance of open-weight LLMs, and we were able to gain 12% in performance, closing the gap with English.
So what’s next? We are trying to expand this to more languages. We have a collaboration, for example, in South America with Paraguay to develop an LLM for Guarani, and we want to extend this to other languages. But most importantly, we have launched an initiative to help communities get the best out of their languages: a project called Lengua Europe, to fund data collection for 10 languages in Europe. It was launched last September and was very successful; we received many applications, 10 were selected, and now we will start working with them. It was so successful that we are now extending this initiative to Africa through Lingua Africa.
It was announced just today at the AI Summit, and we will be allocating 5.5 million to support data collection for African languages. This is in partnership with the Gates Foundation, Microsoft AI for Good, and FCDO, and the initiative will be led by the Masakhane African Languages Hub.
Sorry, just to follow up. For people who are working on these small language models, or domain-specific language models, say for the healthcare domain or some other domain: are there strategies that they should pursue that you can recommend?
Yes, this is very important, and it is also related to the call for Lingua Africa, because many efforts have been made in the past to collect general-purpose data. I think we now have enough general-purpose data, but when we evaluate the performance of these AI tools on specific applications, for example healthcare, education, or agriculture, they don’t work as expected. So what we want today, instead of focusing on general data collection, is to focus on domain-specific, application-specific, use-case-specific data collection, and on building AI tools for specific domains. Then, at least for all these reliability issues, we will have a model that performs well in the target low-resource language, in that application, that we can deploy and that can be used by local communities.
This is really a priority for the next…
Thank you. Ilango, let me come to you. You have vast experience in international development. Can you give us a view of the future as it relates to using AI for development goals? Do you think AI will have a meaningful role to play in the transition of emerging economies to advanced economies?
I do think the prospects are good, and our North Star is job creation. We need to support countries so that AI doesn’t automate jobs away, but actually supports the creation and enhancement of jobs. And this is where small AI becomes imperative, unlike the foundation models, which will have implications for jobs. The second question is how we are going to go about it. In some sense, whether it’s large language models or small AI solutions, you need an ecosystem, and that ecosystem needs to be powered by the local private sector. And what we often see, whether the AI revolution is before us or not, is that small enterprises, whether in the SME space or in the larger space, struggle for a variety of reasons.
If countries don’t reform business processes and make things like permitting easier, which AI can help with, AI is not going to play an effective role. So there are some fundamental reforms needed, and this is where some of the foundational investment in DPI, digital public infrastructure, needs to happen: to create that ecosystem, and the ability for the ecosystem to then work with the private sector and the local communities to create those jobs. And this is what we are seeing everywhere, here too: this whole vibrancy around the startup ecosystem. Why? Because the young people see opportunities, and this momentum can build everywhere in the world.
Whether it be in India, the rest of South Asia, Africa, Latin America, or even the Pacific region. So how do you go about it? What we did was join hands with a number of multilateral development banks, and in the last couple of days we launched this small AI use case repository. It’s a good 100 cases, explaining, in health, education, agriculture, and job creation, how AI can be leveraged to the maximum advantage of communities: in service delivery, productivity gains, and household income gains. All this eventually leads to better jobs, better employment, and better income prospects. So we are very upbeat about small AI, but I do take the point about community trust.
Once it fails, the community is not going to believe in it. So it’s very important that whatever we put in place, working with partners including the MDBs, Microsoft, Google, Gates, and everyone, we ensure that whatever we leave behind in small communities is trustworthy and reliable, and doesn’t, at the end of the day, hallucinate and give them something that leaves the farmer struggling with other challenges. Thank you.
So this report you mentioned, is it open access?
At the World Bank, we are hosting it. It’s called the AI Repository; just search for it and you’ll be able to access it. It’s got 100 cases, and we’ll continue to update it. Once we’re able to sort out some legal issues, we’ll also allow anyone to submit their use case into the repository; obviously, we’ll go through a filtering process to ensure that the right ones are there.
Great, thank you. Antoine, coming to you. Your organization uses AI to advance health outcomes through research and commercialization. Are data-efficient and hardware-integrated AI models important for the work that’s happening at Parasante? And do you see these models as potentially being deployed in low- and middle-income countries like India?
Yes, they are clearly very important for us, for different reasons. Of course, we’ll get back to scalability and use in low- and middle-income countries. But first, the reality in healthcare is that data is scarce and siloed, so you need to work with what you have. Sometimes it’s a large set of data, sometimes a very small one, but you need tools that allow you to build relevant algorithms and relevant analyses on small datasets. In the meantime, of course, we’re building larger datasets. Sometimes it’s at the level of one department in one hospital, sometimes one hospital, sometimes a group of hospitals.
In the end, what we are building in Europe is a large European health data space: 450 million citizens joining their health data in a digital public infrastructure organized across 27 countries, which will be a world first. But in the meantime, we need to work with the reality of scarce data. The second thing is that not only is data limited, but when you want to enter the new revolution in medicine, what we call precision medicine, personalized medicine, you need very efficient algorithms, because they need to adapt to one person and not only to a whole population. So you also need to take that into account when building the algorithm.
The last thing is that you also have to work with what exists in healthcare systems, which is often not supercomputers or high calculation power in remote servers. When you’re in a patient’s room or working in a hospital, it’s a very simple computer, and you need efficient algorithms and tools that can run on that kind of machine. And of course, you go all the way down to a smartphone at some point if you go into remote areas. This is why we work on this kind of approach: making sure that, while we have research on LLMs and large computing power, we also do this work on small data and very efficient algorithms.
Can you give examples?
Well, yes. I already gave some examples about radiology; we have a radiology algorithm running on a small machine. And getting back to your example, which I think is really important, it gives me the opportunity to make two very important points. One is that the AI that we use is providing information; it’s not making decisions in healthcare. Of course we target a high level of reliability, but in the end it’s a human decision, and this is very important, I think. The second is that we’ve been comparing the performance of the algorithms we’ve been designing with existing performance. And of course you’re aiming for 99.999%, et cetera.
But what very few people actually know is that the actual performance of what we do at the moment is not 99.999%. Most of the time, and I won’t say the numbers, but most of the time it’s actually better than what we have. And this is really important in your example: is it good enough compared to what we can actually do at the moment? I think it’s particularly important in low- and middle-income countries, because a very simple solution, offline LLMs, et cetera, can solve many, many issues.
Alpan, can I pick up quickly? I think this is really important, and I’m actually going to name the number, if that’s okay. A really important World Bank study from a few years back showed that, on a set of five very simple conditions, diagnostic accuracy was 50% across eight countries. 50%. What illnesses are we talking about? Acute diarrhea, upper respiratory tract infection, maternal hypertension. And the point is that I don’t think any of us would be happy with 50%, the equivalent of tossing a coin and saying that’s okay. So I completely understand that today there’s a big gap in what the models can offer, but the question of whether the models are performing better than the average clinician, that’s settled.
Sorry, I can’t resist the follow-up question. Often you find that the average accuracy of models is far better, but models seem to fail more unpredictably than humans; at least that’s the understanding in healthcare. Do you agree with that, or do you think it’s not true? Anyone who wants to answer this.
Well, I think we need another hour to discuss this. What you say is absolutely true, but then you need to look at every pathology and every symptom you’re considering, because the performance of diagnosis can be a little higher in certain places and situations, and a little lower in others, et cetera. But we come right back to the same point, which is that what we are building is actually better than what we are able to do at the moment. And what we show in the scientific literature is that the combination of the algorithm and natural intelligence, I would say the doctor, is actually the best tool so far. So, getting back to your question about how we deploy this in low- and middle-income countries, I think it’s really important.
We need models that are able to run on small devices and able to run offline, sometimes with a very limited set of data and a very limited set of algorithms. We were actually discussing in Paris examples of an LLM in remote areas providing answers to the 10 most important questions for healthcare in low- and middle-income countries. That doesn’t need online LLMs with super calculation power. So that’s the first point: edge-native AI. We also need data-efficient learning systems, because most of the time in low- and middle-income countries we have a limited amount of data available. This is what I discussed earlier.
We have a lot of data in India, but it tends to be noisy.
Yes, but we need to take the time to actually bring the data together, clean it, and prepare it for robust analysis. I know you are leapfrogging and going very fast, but by the time you scale, this will create real analytical power. And then we also need to understand how we can couple hardware with software and algorithms designed to reduce costs, so that they can scale very easily. Thank you.
Great, that was fantastic insight. I’d now like to give some time to the audience to ask questions of our panelists. Yes, please.
Thank you very much. I’m Irish Kumar from the CSC Winnie Ocean Center, working on solar energy, and I belong to Rajasthan. A question to the World Bank: in Rajasthan, 60% of the population is in rural areas and depends entirely on agriculture, and 40% of the population is youth. How is the Bank increasing the capacity for AI applications among the youth and in the agricultural domain, so that there are economic changes, more productivity, more economic activity, and more inclusion of youth in the climate change and renewable energy domains?
Thank you for that question, which is a foundational question for any policymaker in terms of what kind of AI strategy or implementation you want in any geography in the world. Obviously, the first thing is digital literacy. Second, you need to skill up, so that everybody is upskilled and reskilled on AI-related capabilities. Third is improving STEM capability in schools and universities, so you create a future cadre of people who can work on these topics. And then there are the sectors you mentioned, which are our priorities: agriculture, health, and education; this is where we see the greatest potential for small AI. On Rajasthan specifically, I don’t have any information right now, but I’m happy to share that with you.
But certainly we are working across different states in India, as we are doing elsewhere in the world, and we prioritize literacy, skilling, STEM, and applications in priority sectors like agriculture, health, and education.
Having said that, I also want to make one point about devices that can do computing: devices are expensive for the bottom 40%.
Yeah, hi, my name is Selena. I’m the CEO and co-founder of Zindi; we run competitions to develop models, especially in Africa. I actually had a question for Wasim about the technical implications, the size implications, and the practicality of using open-source, open-weight large language models to train very specific, domain-specific, under-resourced language models. How have you seen that play out?
Yeah, I think what we have seen is that the selection of the base model is very important. What is real is that we cannot train an LLM from scratch, whether a small or large language model, for low-resource languages, because we don’t have the 15 trillion tokens to train on. So it is very important to select the best multilingual model, with the right tokenizer, that can be adapted to many low-resource languages. Then get the data that you need. And what we have also seen is that monolingual data helps, but bilingual data can help too, and translating English into the low-resource language can also help boost performance.
In our paper, we provide all these recipes to follow to get the best boost in performance. What I would like to add is that, with all these low-resource languages, text cannot solve them all; many of these languages will be served through speech. It’s very important: ASR models, speech-to-text, and text-to-speech will play a very large role in unlocking all these low-resource languages, in addition to LLMs that can operate in the low-resource language or in English.
I think we have time for one really short question.
Hi, this is Dr. Ravi Singh. I’m from Miami, and it was a great panel with a lot of great insights. This is for Google, Microsoft, and the World Bank. Here’s the scenario: with competition across all of these platforms, which platform will win the AI wars?
That’s a loaded question. Anyone want to answer? I’m not. So first of all, I think healthy competition is how we’ve been able to develop incredible technologies over time. Competition is healthy, and this is great; I don’t see it as a zero-sum game. There are too many people on the planet, and too many challenging, unique problems that need to be solved. So if we’re making it useful and bringing joy and happiness for all, which is in the theme here, I just love it, then it’s not necessarily about who wins on whatever platform. It’s about what is relevant to the context of the end user. So taking it back to a more human, personal perspective.
That’s my thinking.
First, three billion people are offline, so there is space for everybody to compete. Second, in the health sector alone, three and a half billion people don’t have access to healthcare, so there is enough scope for all kinds of applications.
I just want to add: many people have been asking me whether all these efforts we are making for languages are enough to make these models as good as they are in English. I would say maybe not, but without all these efforts we would never reach that objective. All these collective efforts together will get us there.
Thank you so much, everyone. I would now like to invite Neha Butts, Associate Director, Human Resources, to hand out the mementos to all our speakers, and we will take one group photo. Requesting the speakers to gather for one group photo, please. Thank you so much, everyone, and thank you for joining.
Alpan Rawal
Speech speed
127 words per minute
Speech length
1158 words
Speech time
546 seconds
Moderator framing of “small AI” as locally relevant
Explanation
The moderator defines small AI as technology that must be meaningful for the end‑user’s local context rather than generic solutions. This sets the tone for the discussion on context‑specific, low‑cost models.
Evidence
“It’s what is relevant to the context of the end user” [6]. “So what role, in your view, do small and custom AI models have to play in this?” [8].
Major discussion point
Definition and Principles of Small AI
Topics
Artificial intelligence | Closing all digital divides
Question on deployment in low‑ and middle‑income countries
Explanation
The moderator asks whether small models can be deployed in LMICs, highlighting the relevance of edge‑native solutions for such settings.
Evidence
“And do you see these models as sort of potentially being deployed in low – and middle -income countries like India?” [4].
Major discussion point
Definition and Principles of Small AI
Topics
Artificial intelligence | Closing all digital divides
Zameer Brey
Speech speed
123 words per minute
Speech length
795 words
Speech time
385 seconds
Context‑specific, low‑cost models
Explanation
Brey stresses that small models must be evaluated for real‑world impact in specific contexts such as district hospitals, farms, and classrooms, rather than only on benchmarks.
Evidence
“And so I think small models present us those” [9]. “we really see the opportunity for AI to reduce inequality and our starting point with AI tools is really does this work for whom where and at what scale so those are some of the departing points for us and so really looking beyond the model against a benchmark but how is this going to work in a district hospital in Telangana or a smallholder farmer in Zambia or a classroom in rural Senegal” [55].
Major discussion point
Definition and Principles of Small AI
Topics
Artificial intelligence | Social and economic development
Need for verifiable, zero‑error models (glass‑box AI)
Explanation
Brey calls for AI that can be audited and verified, moving from black‑box to glass‑box systems to ensure reliability and trust.
Evidence
“so much so that I think that we are trying to wrap our heads around is there a concept of verifiable AI where it shifts the narrative from a black box to a glass box” [129]. “You can audit” [130]. “And the point there is that I do think we’ve got to work towards models that have zero error, right?” [122].
Major discussion point
Reliability, Safety & Verifiability
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Community health worker example
Explanation
Brey illustrates how a low‑cost smartphone with a small model can empower a community health worker to make better decisions despite patchy connectivity.
Evidence
“when I think about small models, I’m coming back to the user, the community health worker that tried to help a mother… if the community health care worker had a small model that worked on her device, which was a low‑cost smartphone, still had patchy internet, but was just built small enough to help her to make good decisions at that point of care” [157][158].
Major discussion point
Deployment in Rural / Low‑Connectivity Settings
Topics
Artificial intelligence | Closing all digital divides
Aisha Walcott‑Bryant
Problem‑first, edge‑ready approach for Africa
Explanation
Walcott‑Bryant emphasizes that solutions must start from the problem and be built to run at the edge, especially for African contexts.
Evidence
“It’s very much problem first” [17]. “So that’s one first point, edge native AI” [16].
Major discussion point
Definition and Principles of Small AI
Topics
Artificial intelligence | Closing all digital divides
Accurate weather forecasting & African language datasets
Explanation
Her team launched continent‑wide weather forecasting and released multilingual voice datasets to support low‑resource African languages.
Evidence
“One is around weather nowcasting, which we launched last year across the continent of Africa” [28]. “We just released a data set of 21 now, 27 voice languages, given that Africa has 2,000 or so languages” [81].
Major discussion point
Organizational Initiatives & Real‑World Applications
Topics
Social and economic development | Data governance
Open‑weight models for edge deployment
Explanation
She notes that open models such as Gemma are designed for low‑power devices, enabling edge AI across Africa.
Evidence
“our open models, our open weight models, Gemma, are made for a lot of these solutions that are more closer to the edge” [126]. “And we have nano models that can run on your laptop and tablets and so forth” [153].
Major discussion point
Challenges for Low‑Resource Languages & Domains
Topics
Artificial intelligence | Enabling environment for digital development
Partnership‑driven approach with African institutions
Explanation
Success is attributed to collaborations with universities and local partners across the continent.
Evidence
“being partnership driven and knowing our role and our place was what was very successful for that” [40]. “working with partners in Africa, including Makerere University, Digital Umuganda, and Uganda around Africa” [172].
Major discussion point
Partnerships, Open Data & Community Involvement
Topics
Enabling environment for digital development | Capacity development
Regional research footprint in Ghana and Kenya
Explanation
By maintaining research sites in both Ghana (West Africa) and Kenya (East Africa), the team ensures that small‑AI solutions are informed by diverse linguistic, cultural, and infrastructural contexts across the continent.
Evidence
“We have two sites, one in Ghana and one in Kenya, so representing East and West.” [10].
Major discussion point
Organizational Initiatives & Real‑World Applications
Topics
Artificial intelligence | Closing all digital divides
Scarcity of African language resources
Explanation
Walcott‑Bryant highlights that only a handful of African languages—approximately 37—have sufficient data resources, underscoring the urgent need for targeted data collection and model development for the continent’s linguistic diversity.
Evidence
“And in Africa, there’s only 37, I believe.” [11].
Major discussion point
Challenges for Low‑Resource Languages & Domains
Topics
Data governance | Closing all digital divides
Long‑term commitment to African language work
Explanation
She emphasizes a sustained engagement with African language challenges, noting that this focus has been a continuous thread in her research and collaborations.
Evidence
“And that’s a thread that I’ve been following for a long time.” [9]. “And that’s a thread that binds so many of us.” [15].
Major discussion point
Partnerships, Open Data & Community Involvement
Topics
Data governance | Capacity development
Humility‑first collaborative approach
Explanation
Walcott‑Bryant advocates tackling AI challenges with humility and a collaborative mindset, positioning partnership and local insight as central to responsible model development.
Evidence
“So I think, so first and foremost, in general, just approaching these challenges with humility and relating.” [14].
Major discussion point
Capacity development & Ethical approach
Topics
Capacity development | Human rights and the ethical dimensions of the information society
Local ownership of AI solutions
Explanation
Walcott‑Bryant stresses that the AI initiatives are built for African communities themselves, underscoring a sense of ownership and relevance that drives the design of small‑AI models.
Major discussion point
Definition and Principles of Small AI
Topics
Closing all digital divides | Capacity development
Wassim Hamidouche
Speech speed
152 words per minute
Speech length
1552 words
Speech time
609 seconds
Data‑efficient, community‑focused ethos
Explanation
Hamidouche stresses the need for data‑efficient learning because LMICs often have limited data, and highlights community involvement in data collection.
Evidence
“We also need to have data‑efficient learning systems because most of the time in low- and middle-income countries, we have a limited amount of data available” [7]. “Just because we have access to local community to help us to get data” [30]. “So we gathered data from this community” [31].
Major discussion point
Definition and Principles of Small AI
Topics
Artificial intelligence | Capacity development
SPARO biodiversity monitoring & Alert California wildfire detection
Explanation
He presents two global, open‑source solutions—SPARO for biodiversity monitoring and Alert California for wildfire detection—that run on low‑power devices in remote areas.
Evidence
“So the first project in biodiversity is called SPARO” [86]. “SPARO for solar powered acoustic and remote recording observation” [87]. “So through Alert California, we are addressing this challenge using AI” [88]. “Alert California is the network of 1300 cameras operating 24‑7” [91]. “It is an AI powered open source solution designed to track and monitor biodiversity in the most remote and hard to reach region in the world” [94].
Major discussion point
Organizational Initiatives & Real‑World Applications
Topics
Environmental impacts | Artificial intelligence
Challenges: scarcity of data, benchmarks, safety alignment
Explanation
He points out that low‑resource languages lack benchmarks and that safety alignment work is largely limited to high‑resource languages like English.
Evidence
“They don’t have that many benchmarks” [111]. “It raises some other issues with safety” [114]. “When we build LLM, we benchmark them…” [121]. “safety alignments … these safety are mainly done in English and some of high‑resource languages” [144].
Major discussion point
Challenges for Low‑Resource Languages & Domains
Topics
Artificial intelligence | Data governance | Building confidence and security in the use of ICTs
Lingua Europe/Africa initiatives and funding for language data
Explanation
He describes initiatives to collect and fund data for African languages, supporting open‑source LLM development.
Evidence
“We have an initiative called Lingua Europe” [24]. “allocating $5.5 million to support data collection for African languages” [79]. “we are extending this initiative to Africa through Lingua Africa” [174].
Major discussion point
Partnerships, Open Data & Community Involvement
Topics
Data governance | Enabling environment for digital development
Antoine Tesniere
Speech speed
151 words per minute
Speech length
1246 words
Speech time
492 seconds
Validated, offline health models
Explanation
Tesniere outlines that many small AI tools for radiology, dermatology, and other diagnostics are already validated, deployable on small computers, and can operate offline.
Evidence
“these are very concrete example of already validated small AI models in healthcare that are used on a daily basis at least in France and Europe… some of them can actually work offline which is really important in some environments” [43]. “That are able to run offline” [41].
Major discussion point
Definition and Principles of Small AI
Topics
Artificial intelligence | Social and economic development
Small, noisy datasets require efficient algorithms
Explanation
He notes that work on small data sets demands highly efficient algorithms to achieve good performance.
Evidence
“on small data sets” [2]. “We also have this work on small data, very efficient algorithm” [33]. “Sometimes it’s a very small set of data” [66]. “And sometimes it’s a very limited set of data, very limited set of algorithm” [117].
Major discussion point
Challenges for Low‑Resource Languages & Domains
Topics
Artificial intelligence | Data governance
Human‑in‑the‑loop safety in healthcare
Explanation
Tesniere stresses that final decisions remain with clinicians, ensuring AI assists rather than replaces human judgment.
Evidence
“but at the end it’s a human decision and this is very important I think” [141]. “It’s not making decisions in healthcare” [142].
Major discussion point
Reliability, Safety & Verifiability
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Offline LLMs for LMICs
Explanation
He argues that simple offline LLM solutions can address many challenges in low‑ and middle‑income countries where connectivity is limited.
Evidence
“a very simple solution, offline LLMs, et cetera, can solve many, many issues” [11]. “We need to have a model that are able to run on small devices” [3]. “So this is why we actually work on this kind of approach, making sure, of course, we have research on LLMs and large computing power” [156].
Major discussion point
Deployment in Rural / Low‑Connectivity Settings
Topics
Artificial intelligence | Closing all digital divides
Illango Patchamuthu
Speech speed
159 words per minute
Speech length
1217 words
Speech time
459 seconds
AI as a means to reduce poverty; simplicity matters
Explanation
He frames AI as a tool to achieve the World Bank’s mission of poverty reduction, emphasizing that small, simple solutions can be effective.
Evidence
“For us at the World Bank, we see AI as a means to an end, and very much of an AI agenda is shaped by the mission of the World Bank, which is to reduce poverty and grow prosperity in the world” [49]. “Small AI can solve problems” [50].
Major discussion point
AI’s Role in Development Goals & Economic Impact
Topics
Social and economic development | Financial mechanisms
World Bank pilots in agriculture, health, education
Explanation
He describes how pilots in various sectors are being scaled from villages to larger populations, focusing on offline, plug‑and‑play solutions.
Evidence
“…pilots tend to, once the sheen wears off, people forget the pilot… we see those pilots, whether it’s in health, education, agriculture, on the small AI setting; it works in rural communities, offline, where data is not that rich… plug and play… how do we get the right KPIs, which then allows us to go from a village, a community of 50 villages, to a larger population center” [73].
Major discussion point
Organizational Initiatives & Real‑World Applications
Topics
Social and economic development | Capacity development
Job creation and digital public infrastructure
Explanation
He links small AI to job creation and stresses the need for digital public infrastructure to sustain a skilled workforce.
Evidence
“And this is where some of the foundational investment in BPI, the digital public infrastructure, needs to happen to create that ecosystem and the ability for the ecosystem to then work with the private sector, the local communities, to be able to create those jobs” [160]. “Our North Star is job creation” [162].
Major discussion point
AI’s Role in Development Goals & Economic Impact
Topics
Capacity development | Financial mechanisms
World Bank AI Repository of use cases
Explanation
He mentions the creation of an AI Repository to catalog small AI use cases for scaling and replication.
Evidence
“It’s called AI Repository” [59]. “we launched a small AI use case repository” [181].
Major discussion point
Partnerships, Open Data & Community Involvement
Topics
Data governance | Enabling environment for digital development
Audience
Speech speed
106 words per minute
Speech length
216 words
Speech time
121 seconds
Competitions to develop models in Africa
Explanation
An audience member notes that competitions are used to spur model development for African contexts.
Evidence
“We run competitions to develop models, especially in Africa” [5].
Major discussion point
Organizational Initiatives & Real‑World Applications
Topics
Artificial intelligence | Capacity development
Question on open‑source, open‑weight models for low‑resource languages
Explanation
The audience asks about practicality of using open‑weight LLMs for domain‑specific, under‑resourced language models.
Evidence
“And I actually had a question for Wassim about kind of the technical implications, the size implications, the practicality of using… open source, open weight models, large language models, to train very specific, domain‑specific… under‑resourced language models” [14].
Major discussion point
Challenges for Low‑Resource Languages & Domains
Topics
Artificial intelligence | Data governance
Announcer
Speech speed
84 words per minute
Speech length
224 words
Speech time
158 seconds
Opening framing of small AI for social impact
Explanation
The announcer introduces the panel, describing small AI as data‑efficient, cheap to run, edge‑centric models that serve underserved communities.
Evidence
“When we at Wadhwani AI brainstormed about this panel, we thought it would reflect in some ways the ethos of our own work, making models that are data efficient, that are cheap to run, that sit on the edge, and most importantly are meaningful to the communities that we serve, which are underserved communities, mostly in rural India” [64].
Major discussion point
Definition and Principles of Small AI
Topics
Artificial intelligence | Closing all digital divides
Agreements
Agreement points
Problem-first approach and contextual relevance over generic solutions
Speakers
– Alpan Rawal
– Aisha Walcott-Bryant
– Zameer Brey
Arguments
Small AI should be data efficient, cheap to run, edge-deployable, and meaningful to underserved communities rather than providing generic outputs
Problem-first approach is essential – if a simple solution exists, don’t overcomplicate with AI; leverage compute and expertise for problems requiring innovation
AI tools must be evaluated based on whether they work for whom, where, and at what scale, considering real-world contexts like district hospitals and rural classrooms
Summary
All three speakers emphasize that AI solutions should be designed with specific contexts and problems in mind, prioritizing meaningful impact over technological sophistication
Topics
Artificial intelligence | Social and economic development | Closing all digital divides
Partnership-driven and community-centered approaches
Speakers
– Aisha Walcott-Bryant
– Wassim Hamidouche
– Antoine Tesniere
Arguments
Partnership-driven approaches with local communities are essential for creating high-quality datasets that respect cultural context
Open-source approach and partnership with NGOs, governments, and local communities enables global deployment of AI solutions
Innovation ecosystems bringing together researchers, doctors, patients, startups, and institutions create comprehensive communities for AI development
Summary
All speakers advocate for collaborative approaches that involve local communities, governments, and diverse stakeholders in AI development and deployment
Topics
The enabling environment for digital development | Human rights and the ethical dimensions of the information society | Social and economic development
Small AI is not inferior but contextually appropriate
Speakers
– Illango Patchamuthu
– Antoine Tesniere
– Wassim Hamidouche
Arguments
Small AI is not inferior or second-class but represents a means to an end that can solve problems and fast-track development outcomes
Small AI models in healthcare for image analysis, radiology, and dermatology are already validated and deployable on small computers, often working offline
SPARO system for biodiversity monitoring and Alert California for wildfire detection demonstrate global applicability of small AI solutions
Summary
These speakers collectively argue that small AI models are proven, effective solutions that can address real-world problems and should not be viewed as inferior alternatives
Topics
Artificial intelligence | Social and economic development | Information and communication technologies for development
Resource constraints drive innovation and appropriate solutions
Speakers
– Aisha Walcott-Bryant
– Antoine Tesniere
– Illango Patchamuthu
Arguments
Weather nowcasting across Africa addresses critical agricultural needs given 95% rain-fed agriculture and limited weather radar infrastructure
Data-efficient learning systems and edge-native AI are crucial for deployment in resource-constrained environments
Small AI applications should focus on job creation and enhancement rather than automation, requiring supportive ecosystems powered by local private sector
Summary
All three speakers recognize that resource constraints in developing regions necessitate innovative, efficient AI solutions that work within existing infrastructure limitations
Topics
Social and economic development | Closing all digital divides | The enabling environment for digital development
Similar viewpoints
Both speakers emphasize the critical importance of reliability and transparency in AI systems, particularly in high-stakes applications like healthcare, and advocate for human-AI collaboration rather than full automation
Speakers
– Zameer Brey
– Antoine Tesniere
Arguments
AI systems must work toward zero error rates and verifiable AI that provides transparent, auditable logic chains rather than black box solutions
AI provides information rather than making decisions in healthcare, and combination of algorithms with human intelligence produces the best outcomes
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Both speakers recognize the systematic challenges facing low-resource language AI development and emphasize the need for community partnerships to address cultural and linguistic representation gaps
Speakers
– Wassim Hamidouche
– Aisha Walcott-Bryant
Arguments
Low-resource languages face challenges including limited internet data representation, lack of benchmarks, performance gaps, and safety alignment issues
Partnership-driven approaches with local communities are essential for creating high-quality datasets that respect cultural context
Topics
Closing all digital divides | Artificial intelligence | Human rights and the ethical dimensions of the information society
Both speakers advocate for systemic approaches that combine technological solutions with institutional reforms and collaborative partnerships to enable effective AI deployment at scale
Speakers
– Illango Patchamuthu
– Wassim Hamidouche
Arguments
Digital public infrastructure and business process reforms are fundamental prerequisites for effective AI deployment in emerging economies
Open-source approach and partnership with NGOs, governments, and local communities enables global deployment of AI solutions
Topics
The enabling environment for digital development | Information and communication technologies for development | Social and economic development
Unexpected consensus
Current healthcare performance baselines justify AI deployment despite imperfections
Speakers
– Zameer Brey
– Antoine Tesniere
Arguments
Current diagnostic accuracy of 50% across eight countries for simple conditions shows that AI models often perform better than existing standards
AI provides information rather than making decisions in healthcare, and combination of algorithms with human intelligence produces the best outcomes
Explanation
Despite coming from different organizational backgrounds, both speakers unexpectedly agreed that current healthcare delivery standards are so poor (50% diagnostic accuracy) that even imperfect AI systems represent significant improvements, challenging common assumptions about AI needing to be perfect before deployment
Topics
Social and economic development | Artificial intelligence | Monitoring and measurement
Platform competition should focus on collective impact rather than market dominance
Speakers
– Aisha Walcott-Bryant
– Illango Patchamuthu
– Wassim Hamidouche
Arguments
Humility and relating to communities on human level, recognizing shared challenges, improves solution development and adoption
Small AI applications should focus on job creation and enhancement rather than automation, requiring supportive ecosystems powered by local private sector
Open-source approach and partnership with NGOs, governments, and local communities enables global deployment of AI solutions
Explanation
Representatives from major competing tech companies (Google, Microsoft) and the World Bank unexpectedly showed consensus that competition should be collaborative rather than zero-sum, emphasizing shared human challenges and collective problem-solving over market dominance
Topics
The digital economy | Artificial intelligence | The enabling environment for digital development
Overall assessment
Summary
The panel demonstrated remarkable consensus across several key areas: the importance of problem-first approaches over technology-first thinking, the value of community partnerships and local context, the legitimacy of small AI as effective rather than inferior solutions, and the need for collaborative rather than competitive approaches to global challenges. There was also unexpected agreement on accepting current performance baselines as justification for AI deployment and on prioritizing collective impact over market competition.
Consensus level
High level of consensus with significant implications for AI development strategy. The agreement suggests a maturing field that prioritizes practical impact over technological sophistication, emphasizes inclusive development approaches, and recognizes the importance of contextual appropriateness. This consensus could influence funding priorities, development methodologies, and policy frameworks for AI deployment in developing regions.
Differences
Different viewpoints
AI reliability standards and acceptable error rates in healthcare
Speakers
– Zameer Brey
– Antoine Tesniere
Arguments
AI systems must work toward zero error rates and verifiable AI that provides transparent, auditable logic chains rather than black box solutions
AI provides information rather than making decisions in healthcare, and combination of algorithms with human intelligence produces the best outcomes
Summary
Zameer Brey advocates for near-zero error rates and verifiable AI systems, using airplane safety analogies to argue that 90-99% accuracy is insufficient. Antoine Tesniere takes a more pragmatic approach, emphasizing that AI should provide information to support human decision-making rather than achieve perfect accuracy, and that the combination of AI and human intelligence produces the best results.
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Social and economic development
Platform competition versus collaboration in AI development
Speakers
– Aisha Walcott-Bryant
– Illango Patchamuthu
– Wassim Hamidouche
Arguments
Humility and relating to communities on human level, recognizing shared challenges, improves solution development and adoption
Small AI applications should focus on job creation and enhancement rather than automation, requiring supportive ecosystems powered by local private sector
Open-source approach and partnership with NGOs, governments, and local communities enables global deployment of AI solutions
Summary
When asked about platform competition, the speakers showed different perspectives. Aisha emphasized that it’s not a zero-sum game and there are enough problems for everyone to solve. Illango focused on the scale of unmet needs (3 billion offline, 3.5 billion without healthcare access). Wassim emphasized collective efforts and open-source collaboration. While not directly contradictory, they represent different philosophies about competition versus collaboration.
Topics
The digital economy | Artificial intelligence | The enabling environment for digital development
Unexpected differences
Acceptable performance thresholds for AI deployment
Speakers
– Zameer Brey
– Antoine Tesniere
Arguments
AI systems must work toward zero error rates and verifiable AI that provides transparent, auditable logic chains rather than black box solutions
AI provides information rather than making decisions in healthcare, and combination of algorithms with human intelligence produces the best outcomes
Explanation
This disagreement was unexpected because both speakers work in healthcare applications where reliability is crucial. However, they have fundamentally different views on what constitutes acceptable AI performance. Zameer’s insistence on near-zero error rates conflicts with Antoine’s more pragmatic acceptance that AI should augment rather than replace human decision-making, even if not perfect.
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Social and economic development
Overall assessment
Summary
The panel showed remarkable consensus on the importance of small AI for social impact, with most disagreements being philosophical rather than fundamental. The main tension was between perfectionist versus pragmatic approaches to AI reliability in healthcare applications.
Disagreement level
Low to moderate disagreement level. The speakers largely agreed on core principles (community-centered approach, partnership-driven development, addressing underserved populations) but differed on implementation details and acceptable performance standards. This level of disagreement is constructive and reflects healthy debate about best practices rather than fundamental conflicts about goals or values.
Partial agreements
Both speakers agree that AI can improve upon current healthcare standards (Zameer cites 50% diagnostic accuracy, Antoine mentions AI often performs better than current capabilities), but they disagree on implementation approach. Zameer pushes for near-perfect reliability and verifiable systems, while Antoine advocates for AI as a decision-support tool combined with human judgment.
Speakers
– Zameer Brey
– Antoine Tesniere
Arguments
Current diagnostic accuracy of 50% across eight countries for simple conditions shows that AI models often perform better than existing standards
AI provides information rather than making decisions in healthcare, and combination of algorithms with human intelligence produces the best outcomes
Topics
Artificial intelligence | Social and economic development | Building confidence and security in the use of ICTs
Both speakers agree on the importance of community partnerships and moving beyond general data collection, but they emphasize different aspects. Aisha focuses on cultural context and community co-creation, while Wassim emphasizes technical performance improvements through domain-specific data collection.
Speakers
– Aisha Walcott-Bryant
– Wassim Hamidouche
Arguments
Partnership-driven approaches with local communities are essential for creating high-quality datasets that respect cultural context
Domain-specific and application-specific data collection is now prioritized over general data collection for better performance in targeted use cases
Topics
Artificial intelligence | Closing all digital divides | Human rights and the ethical dimensions of the information society
Takeaways
Key takeaways
Small AI should prioritize real-world impact over technical benchmarks, focusing on data efficiency, edge deployment, and meaningful outcomes for underserved communities rather than generic solutions
Partnership-driven approaches with local communities are essential for successful AI deployment, requiring co-creation, cultural sensitivity, and understanding of local contexts
Small AI is not inferior to large foundation models but represents a strategic choice for solving specific problems efficiently and cost-effectively in resource-constrained environments
A current diagnostic accuracy of just 50% for simple conditions across eight countries shows that even imperfect AI models can outperform existing standards of care, strengthening the case for AI adoption
Low-resource languages face significant challenges including limited data representation, lack of benchmarks, and safety alignment issues, requiring targeted solutions and community involvement
Successful AI deployment requires supporting ecosystems including digital public infrastructure, business process reforms, and local private sector engagement to create jobs rather than eliminate them
Domain-specific and application-specific data collection should be prioritized over general data collection for better performance in targeted use cases
AI systems must work toward zero error rates and verifiable, transparent logic chains rather than black box solutions to build community trust
Resolutions and action items
World Bank launched a Small AI Use Case Repository with 100 cases covering health, education, agriculture, and job creation applications, hosted publicly and continuously updated
Microsoft announced the Lingua Africa initiative, allocating $5.5 million to support data collection for African languages in partnership with the Gates Foundation and FCDO
Lingua Africa will be led by the Masakhane African Languages Hub and focus on domain-specific, application-specific data collection rather than general data
World Bank committed to expanding small AI applications across different states in India and other countries, prioritizing digital literacy, STEM education, and reskilling programs
Google Research Africa will continue expanding their voice language dataset beyond the current 27 African languages through partnership-driven data collection
Unresolved issues
The challenge of device affordability for the bottom 40% of populations who need access to AI-enabled computing devices
How to achieve truly zero-error AI systems and implement verifiable AI with transparent logic chains across different domains
Scaling successful pilot projects to larger populations while maintaining effectiveness and community trust
Addressing the unpredictable failure patterns of AI models compared to human decision-making in healthcare contexts
Managing the tension between AI performance optimization and the need for human oversight in critical decision-making
Ensuring long-term sustainability and maintenance of AI systems deployed in remote, resource-constrained environments
Bridging the gap between the 7,000+ global languages and the roughly 300 languages that have even basic AI benchmarks
Suggested compromises
AI should provide information and support rather than make final decisions, particularly in healthcare, with human professionals maintaining ultimate decision-making authority
Focus on achieving better performance than current standards rather than perfect performance, recognizing that incremental improvements can save lives
Combine algorithmic intelligence with human intelligence to achieve optimal outcomes rather than replacing human expertise entirely
Use multilingual base models with continual pre-training and fine-tuning rather than training from scratch for low-resource languages due to data limitations
Employ both text and speech-based approaches to address low-resource languages, recognizing that many languages may be better served through speech technologies
Balance competition between major tech platforms while recognizing that the scale of global challenges requires collaborative rather than zero-sum approaches
Thought provoking comments
I was thinking about the traffic that we’ve been experiencing the last few days in Delhi. And I thought to myself, would anyone, given the traffic here, design something so big as an aeroplane to try and get across the city? No. I think we would design something that’s a lot smaller, faster, sharper, cost-effective, and gets us from point A to point B.
Speaker
Zameer Brey
Reason
This analogy brilliantly reframes the entire AI development paradigm by using a relatable, everyday experience to illustrate why context-appropriate solutions matter more than raw capability. It challenges the assumption that bigger is always better in AI.
Impact
This comment established the conceptual foundation for the entire panel discussion, providing a memorable framework that other panelists could reference. It shifted the conversation from technical specifications to practical utility and appropriateness.
If I said to you, the plane has a high probability of leaving Delhi and landing safely wherever that’s going to be, and that probability was 90%. 95%. Would you get on that flight? 99%? … And the point there is that I do think we’ve got to work towards models that have zero error, right? So much so that I think that we are trying to wrap our heads around is there a concept of verifiable AI where it shifts the narrative from a black box to a glass box.
Speaker
Zameer Brey
Reason
This comment introduces a critical paradigm shift from probabilistic AI to deterministic, verifiable AI. It challenges the field’s acceptance of statistical accuracy and pushes toward transparency and accountability, especially crucial for healthcare applications.
Impact
This fundamentally changed the discussion from performance metrics to reliability standards, leading Antoine to elaborate on how AI assists rather than replaces human decision-making in healthcare, and sparked a deeper conversation about the real-world implications of AI errors.
A really important World Bank study from a few years back showed that on a set of five very simple conditions, the diagnostic accuracy was 50% across eight countries. 50%. What illnesses are we talking about? Acute diarrhea, upper respiratory tract infection, maternal hypertension.
Speaker
Zameer Brey
Reason
This statistic provides crucial context that reframes the entire AI-in-healthcare debate. It reveals that the baseline human performance in many healthcare settings is far lower than assumed, making even imperfect AI potentially transformative.
Impact
This data point shifted the conversation from theoretical AI performance to practical comparative advantage, leading Antoine to elaborate on how AI+human combinations outperform either alone, and grounding the discussion in real-world healthcare realities.
Small AI is not second class. No. Small AI can solve problems. It’s a means to an end. And if it can actually fast-track development outcomes… this is an opportunity where this development technology, if it can be put to use in the right context in the right way, I think we can achieve faster [development goals].
Speaker
Illango Patchamuthu
Reason
This comment directly addresses a critical perception problem in the field – the assumption that ‘small’ means ‘inferior.’ It reframes small AI as a strategic choice rather than a compromise, emphasizing outcome-focused thinking over technology-focused thinking.
Impact
This validation helped legitimize the entire panel’s premise and shifted the discussion toward practical development applications, reinforcing that the goal is solving problems, not showcasing technological sophistication.
I always start with, you know, I’m a scientist, but I’m also a mother, right? And that’s a thread that I’ve been following for a long time. And that’s a thread that binds so many of us. When you think of that, you also think a lot of the solutions that we’re building, it’s not for them, it’s for us.
Speaker
Aisha Walcott-Bryant
Reason
This comment introduces a powerful perspective shift from ‘us vs. them’ to ‘us for us’ in technology development. It emphasizes the importance of lived experience and personal stake in the communities being served, challenging the traditional outsider-helper dynamic.
Impact
This humanized the technical discussion and established the importance of community connection and co-creation, influencing how other panelists framed their work in terms of partnership rather than service delivery.
What very few people actually know is that the actual performance of what we do at the moment is not 99.999%. So most of the time… it’s actually better than what we have. And this is really important in your example. Is it good enough compared to what we can actually do at the moment?
Speaker
Antoine Tesniere
Reason
This comment challenges perfectionist thinking by introducing the concept of comparative improvement rather than absolute performance. It’s a pragmatic reframing that considers real-world baselines rather than theoretical ideals.
Impact
This practical perspective helped ground the reliability discussion in reality, supporting the case for deploying imperfect but improved AI solutions, especially in resource-constrained settings where current alternatives may be significantly worse.
Three billion people are offline, so there is space for everybody to compete. Second, in health sector alone, three and a half billion people don’t have access to healthcare, so there is enough scope for all kinds of applications.
Speaker
Illango Patchamuthu
Reason
This response reframes competitive thinking by highlighting the massive scale of unmet need. It shifts from zero-sum competition to abundance thinking, emphasizing that the challenge is so vast that collaboration is more important than competition.
Impact
This comment effectively closed the panel by refocusing on the humanitarian mission rather than commercial competition, reinforcing the theme that small AI is about solving real problems for underserved populations.
Overall assessment
These key comments fundamentally shaped the discussion by establishing several important conceptual frameworks: the appropriateness paradigm (right-sized solutions for context), the reliability imperative (moving beyond statistical accuracy to verifiable outcomes), the baseline reality check (comparing AI to actual current performance rather than theoretical ideals), and the human-centered approach (building ‘with us’ rather than ‘for them’). Together, these insights elevated the conversation from a technical discussion about model efficiency to a philosophical examination of how AI should be developed, deployed, and evaluated in real-world contexts. The comments created a coherent narrative that small AI isn’t a compromise but a strategic choice that prioritizes practical impact over technological impressiveness, ultimately making the case that context-appropriate AI solutions may be more transformative than large foundation models for addressing global development challenges.
Follow-up questions
How to achieve verifiable AI with zero error rates instead of probabilistic models
Speaker
Zameer Brey
Explanation
Critical for healthcare applications where 90-99% accuracy is insufficient; glass-box models that expose their logic chains are needed for auditing and repeatability
How to address the performance gap and safety alignment for low-resource languages in LLMs
Speaker
Wassim Hamidouche
Explanation
Current foundation models are trained primarily on English and high-resource languages, leaving safety vulnerabilities and performance gaps for 6,000+ languages without proper benchmarks
How to scale from pilots to larger population centers with proper KPIs for small AI applications
Speaker
Illango Patchamuthu
Explanation
Need systematic approach to move beyond pilot projects to sustainable deployment across villages and communities in health, education, and agriculture
How to ensure AI creates rather than automates away jobs in emerging economies
Speaker
Illango Patchamuthu
Explanation
Critical for transition of emerging economies to advanced economies, requires ecosystem development and business process reforms
How to address the unpredictable failure patterns of AI models compared to humans in healthcare
Speaker
Alpan Rawal
Explanation
While models may have better average accuracy, their failure modes are less predictable than human errors, requiring better understanding for safe deployment
How to make computing devices affordable for the bottom 40% of populations
Speaker
Alpan Rawal
Explanation
Hardware costs remain a barrier to AI adoption in rural and low-income communities, limiting access to beneficial technologies
How to effectively combine speech technologies with text-based LLMs for low-resource languages
Speaker
Wassim Hamidouche
Explanation
Many low-resource languages may be better served through speech-to-text and text-to-speech models rather than text-only approaches
How to build robust algorithms for precision medicine that work on individual patients rather than populations
Speaker
Antoine Tesniere
Explanation
Personalized medicine requires AI models that can adapt to individual characteristics while working with limited data and computing resources
How to clean and prepare noisy healthcare data at scale in countries like India
Speaker
Antoine Tesniere and Alpan Rawal
Explanation
While data volume exists, quality and preparation remain challenges for robust AI analysis in healthcare applications
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.