How Small AI Solutions Are Creating Big Social Change
20 Feb 2026 15:00h - 16:00h
Summary
The panel, moderated by Alpan Rawal, examined how "small AI" (data-efficient, low-cost models that run at the edge and are tailored to local contexts) can generate large social impact, especially for underserved communities in the Global South [15-19]. Zameer Brey stressed that AI's value lies in its relevance to specific settings such as district hospitals in Telangana, smallholder farmers in Zambia, or classrooms in rural Senegal, warning against designs that ignore users' environments [27-34]. Aisha Walcott-Bryant described Google Research Africa's "Africa-for-Africa" approach, highlighting problem-first projects like continent-wide weather forecasting that compensate for the scarcity of radar stations [50-57] and the creation of open voice datasets for 27 African languages to enable edge-ready models on laptops and tablets [61-65]. Wassim Hamidouche outlined Microsoft's AI for Good Lab, citing two open-source small-AI systems: SPARO, a solar-powered acoustic sensor network for biodiversity monitoring in remote areas [83-88], and Alert California, a 1,300-camera network that detects wildfires early using on-device AI [91-97]. Antoine Tesnière explained that health care already relies on validated small-AI tools for radiology, dermatology and ophthalmology, many of which can operate offline on modest hardware and complement rather than replace clinician judgment [102-109]. Illango Patchamuthu of the World Bank framed AI as a means to reduce poverty, arguing that simple, low-resource models are easier to scale across villages, and that replicating successful pilots requires clear KPIs, trust-building and partnership with multilateral institutions [111-124][243-267].
The discussion also addressed technical challenges: low-resource languages suffer from data scarcity, limited benchmarks, performance gaps and safety-alignment issues, prompting Microsoft to target pilot languages (Inuktitut, Chichewa, Māori) and launch the "Lingua Africa" initiative with $5.5 million in funding for data collection [181-210][220-228]. To improve reliability, speakers advocated verifiable "glass-box" models, domain-specific data collection and continuous pre-training, noting that small models can achieve near-human accuracy in targeted tasks such as community health-worker decision support [153-166][233-238]. Consensus emerged that small AI is not a second-class technology; when designed responsibly and deployed with local ecosystems, it can accelerate development outcomes, create jobs and complement larger foundation models rather than compete with them [120-126][244-250]. Participants highlighted the importance of digital literacy, upskilling and STEM education to build a cadre capable of developing and maintaining small-AI solutions, especially where three billion people remain offline [350-353]. The moderators concluded that competition among platforms should be viewed as healthy and complementary, with the ultimate goal of delivering trustworthy, context-appropriate AI to end users rather than determining a single "winner" [386-393].
Key points
Major discussion points
– What "small AI" means and why it matters – The moderator frames "small AI" as data-efficient, low-cost models that run at the edge and are tailored to local contexts rather than generic, large-scale foundation models [15-19]. Zameer reinforces this with a traffic analogy, arguing that solutions should be "smaller, faster, sharper, cost-effective" for the environments they serve [32-34].
– Concrete small-AI projects across sectors
– Google Research Africa builds continent-specific weather-forecasting tools and releases a multilingual voice dataset, emphasizing open-weight models that can run on laptops or tablets [50-55][60-65][144-145].
– Microsoft AI for Good showcases SPARO (solar-powered acoustic monitoring for biodiversity) and Alert California (camera network for early wildfire detection), both open-source and deployable worldwide [82-98].
– Low-resource language work at Microsoft targets languages such as Inuktitut, Chichewa and Māori, launches the "Lingua Africa" data-collection fund, and partners with the Gates Foundation to support African language models [181-190][210-218][219-228].
– Healthcare small-AI in France uses validated, offline models for radiology, dermatology and ophthalmology, stressing that AI only augments clinician decisions [102-108][300-306].
– The central role of partnerships, community involvement and open resources – Aisha notes that the African voice dataset was co-created with local partners and that open models enable "partnership-led" solutions [60-65]. Illango (World Bank) stresses replicating proven pilots, building an AI use-case repository, and collaborating with NGOs, governments and tech firms to scale impact [111-120][258-262]. Microsoft's language initiatives rely on community data collection through the "Masakani African Languages Hub" [210-218][219-226].
– Key technical and deployment challenges, and proposed strategies – Wassim outlines four hurdles for low-resource languages: data scarcity, lack of benchmarks, performance gaps, and safety/alignment issues [181-200]. Zameer highlights the need for "verifiable AI" that reduces black-box errors, citing a maternal-health case where a small on-device model could have saved lives [153-166][167-174]. Antoine points out limited, siloed health data and the necessity of efficient, offline algorithms that run on modest hardware [280-298]. Across the board, speakers recommend domain-specific data collection, open-weight models, and edge-native deployment to overcome these barriers [233-238][280-298].
– Future outlook and policy implications for development – Illango envisions AI as a catalyst for job creation, stressing digital literacy, upskilling and a robust private-sector ecosystem; he also announces a publicly accessible AI use-case repository [243-262][268-272]. The moderator and panelists caution against a zero-sum "AI wars" narrative, emphasizing healthy competition and context-driven relevance [384-393][394-398].
Overall purpose / goal of the discussion
The panel was convened to explore how "small AI" (data-efficient, locally adapted models) can generate tangible social impact for underserved and rural communities, especially in the Global South. Each speaker shared organizational experiences, highlighted non-foundation-model approaches, and discussed how to scale such solutions responsibly.
Overall tone
The conversation remained professional, collaborative and optimistic, with speakers celebrating successes (e.g., open-source biodiversity tools, multilingual datasets). When addressing reliability, safety and scalability, the tone shifted to a more cautionary, problem-solving stance, underscoring the need for rigorous validation and community trust. Throughout, the dialogue stayed constructive, focusing on partnership-driven pathways rather than competition.
Speakers
– Illango Patchamuthu – World Bank Group Director of Strategy and Operations, Digital and AI Vice Presidency; Acting Director for Data and AI [S1]
– Announcer – Event announcer/moderator who introduced the panelists [S2][S3][S4]
– Alpan Rawal – Chief AI/ML Scientist at Wadwani AI; moderator of the panel [S5]
– Aisha Walcott-Bryant – Senior Staff Research Scientist and Head of Google Research Africa, Google [S7][S8]
– Antoine Tesnière – French professor of medicine, entrepreneur, anesthesiologist at Georges Pompidou European Hospital; co-founder of ILEMENTS; Director of Paris-Saint-Denis Campus [S10]
– Zameer Brey – Panelist (organization not specified in the transcript) [S12]
– Wassim Hamidouche – Principal Research Scientist, AI for Good Lab, Microsoft (specializing in computer vision, NLP, multimodal AI, low-resource languages) [S14]
– Audience – Various participants from the public; no specific titles or roles mentioned
Additional speakers:
– Neha Butts – Associate Director, Human Resources (mentioned at the close of the session)
– Selena – CEO and Co-founder of Zindi, runs competitions to develop AI models in Africa
– Irish Kumar – Representative from the CSC Winnie Ocean Center on solar energy (asked a question during the Q&A)
– Dr. Ravi Singh – Participant from Miami who posed a question about platform competition
The panel, moderated by Dr Alpan Rawal, opened by defining small AI as data-efficient, low-cost, edge-native models that are built for specific local contexts rather than generic, large-scale foundation models. Rawal emphasized that relevance to the end-user's environment is the key criterion for impact [15-19]. This definition set the tone for the discussion, prompting each panelist to illustrate how their work embodies these principles.
Zameer Brey reinforced the need for context-appropriate solutions with a vivid traffic analogy, arguing that, just as Delhi's congestion would never justify an aeroplane for short trips, AI should be "smaller, faster, sharper, cost-effective" and suited to the specific setting [32-34]. He warned that designers often focus on benchmark performance without considering how a model fits into a district hospital in Telangana, a smallholder farm in Zambia, or a rural classroom in Senegal [29-31]. After noting the importance of "verifiable" or "glass-box" AI, Zameer cited a World Bank study showing 50% diagnostic accuracy across five common conditions in eight countries, underscoring the gap that reliable on-device models must close [409-410].
Domain-specific small-AI projects
Google Research Africa highlighted two flagship initiatives. First, the team built a continent-wide weather-forecasting system that compensates for Africa's severe radar shortage: only 37 stations compared with roughly 300 in North America and Europe [55-57]. By innovating around this constraint, they delivered more accurate forecasts for rain-fed agriculture, a critical need for millions of smallholder farmers [50-54]. Second, they released an open multilingual voice dataset covering 27 African languages (out of an estimated 2,000), enabling "partnership-led" development of edge-ready models that run on laptops or tablets [61-65][144-145].
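Google's actual nowcasting models are far more sophisticated than any single formula, but the core constraint described above, estimating conditions where no station exists, can be illustrated with a classic baseline: inverse-distance weighting. The sketch below is not Google's method; station coordinates and rainfall values are invented for illustration.

```python
import math

def idw_estimate(stations, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` from sparse stations.

    stations: list of ((x, y), value) pairs, e.g. rainfall in mm.
    target:   (x, y) location with no station of its own.
    """
    num, den = 0.0, 0.0
    for (x, y), value in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0:                 # target coincides with a station
            return value
        w = 1.0 / d ** power       # nearer stations dominate the estimate
        num += w * value
        den += w
    return num / den

# Three hypothetical stations reporting rainfall (mm)
stations = [((0, 0), 10.0), ((4, 0), 20.0), ((0, 4), 30.0)]
print(round(idw_estimate(stations, (1, 1)), 2))  # → 14.29
```

The estimate at an unobserved point always falls between the minimum and maximum station values, which is exactly why such simple schemes break down when stations are as sparse as 37 across a continent, and why learned, satellite-driven models become necessary.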
Microsoft's AI for Good Lab presented two open-source, globally deployable tools. SPARO (Solar-Powered Acoustic and Remote Observation) combines solar-powered cameras with an HAA model to detect animal species in remote habitats, transmitting data via satellite where infrastructure is lacking [82-88]. Alert California operates a network of 1,300 cameras with on-device AI that detects early wildfire signatures, allowing rapid emergency response [91-97]. Both solutions exemplify small AI that is cheap to run, edge-deployable, and openly shared for reuse worldwide [84-88][96-97].
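Alert California's detection models are of course far beyond this, but the edge-native pattern, scoring each incoming frame locally and raising an alert only when something changes sharply, can be sketched in a few lines. Everything here (frame sizes, the brightness threshold, the pixel count) is invented for illustration and is not the project's actual algorithm.

```python
def frame_alert(prev, curr, threshold=30, min_pixels=3):
    """Flag a frame if enough pixels brightened sharply vs. the previous one.

    prev, curr: equal-sized 2D lists of grayscale values (0-255).
    Returns True when at least `min_pixels` pixels increased by more than
    `threshold`, a crude stand-in for an on-device smoke-plume detector.
    """
    changed = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if c - p > threshold
    )
    return changed >= min_pixels

calm = [[10] * 4 for _ in range(4)]          # a quiet hillside
smoky = [row[:] for row in calm]
for i in (0, 1, 2):
    smoky[0][i] = 200                        # a bright plume appears

print(frame_alert(calm, calm))    # False: nothing changed
print(frame_alert(calm, smoky))   # True: alert raised locally
```

The design point the panel stressed is that this decision happens on the device itself, so only a tiny alert message, not a video stream, needs to cross a constrained network link.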
Antoine Tesnière's health-innovation ecosystem illustrated healthcare applications, noting that validated small-AI tools already support radiology, dermatology and ophthalmology analyses on modest hardware, providing information to clinicians while preserving human decision-making [102-108][300-306]. He stressed that health data are often scarce and siloed, requiring data-efficient algorithms that can operate offline on smartphones or simple computers, especially in low- and middle-income settings [280-298][332-340]. Antoine clarified that these models improve on current practice, and that the combination of algorithm plus human decision-making is the most effective tool, rather than claiming they outperform clinicians outright [311-313][332-340].
Wassim Hamidouche described work on low-resource languages, identifying four systemic challenges: the dominance of English in internet data (over 60%), a dearth of benchmarks (only about 300 languages have any, most limited to translation tasks), a performance gap between high- and low-resource languages, and safety-alignment work that is largely English-centric [184-200]. To address these, Microsoft is piloting models for Inuktitut, Chichewa and Māori, achieving a 12% performance gain through continual pre-training and instruction fine-tuning [210-218]. The "Lingua Africa" initiative, funded with US$5.5 million in partnership with the Gates Foundation and the Masakani African Languages Hub, will support data collection for ten African languages, extending the earlier "Lingua Europe" programme [219-228]. He also emphasized that speech-to-text and text-to-speech are essential for many low-resource languages, where written corpora are limited [416-418].
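One concrete symptom of the data gap described above is how poorly a vocabulary built from high-resource web text covers a low-resource language. As a minimal sketch (whitespace tokenization rather than a real subword tokenizer, and both the vocabulary and the non-English tokens are invented for illustration), an out-of-vocabulary rate makes the gap measurable:

```python
def oov_rate(vocab, text):
    """Fraction of whitespace-separated tokens in `text` missing from `vocab`."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t not in vocab for t in tokens) / len(tokens)

# Hypothetical vocabulary learned almost entirely from English web text
vocab = {"the", "weather", "forecast", "is", "good", "today"}

english = "the weather forecast is good today"
low_resource = "nyengo ikhala bwino lero"   # invented Chichewa-like tokens

print(oov_rate(vocab, english))        # 0.0: full coverage
print(oov_rate(vocab, low_resource))   # 1.0: nothing covered
```

Continual pre-training on monolingual data, as in the pilots above, is essentially an effort to drive this kind of coverage gap down before any task-specific fine-tuning happens.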
Technical and reliability challenges
Zameer argued for "verifiable" AI, insisting that models used in critical health contexts must approach zero error and provide auditable logic chains to prevent catastrophic failures [160-165]. Antoine offered a counterpoint, acknowledging that while current small-AI models are not 99.999% accurate, they already improve over existing practice and are acceptable when combined with human oversight [300-311][332-340]. This tension highlighted a broader disagreement on the acceptable error tolerance for health-focused small AI.
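The "glass-box" idea Zameer raised can be made concrete with a rule-based scorer whose every output is traceable to a named rule. The rules, fields and thresholds below are entirely invented for illustration (not clinical guidance, and not any panelist's actual system); the point is only that the audit trail comes for free.

```python
def triage(vitals):
    """Glass-box triage score: every point comes from a named, auditable rule.

    vitals: dict with 'temp_c' and 'resp_rate'. Returns (score, trail),
    where `trail` lists exactly which rules fired and what they added.
    """
    rules = [
        ("fever",     lambda v: v["temp_c"] >= 38.0,  2),
        ("tachypnea", lambda v: v["resp_rate"] >= 30, 3),
    ]
    score, trail = 0, []
    for name, check, points in rules:
        if check(vitals):
            score += points
            trail.append(f"{name}: +{points}")
    return score, trail

score, trail = triage({"temp_c": 38.5, "resp_rate": 32})
print(score)   # 5
print(trail)   # ['fever: +2', 'tachypnea: +3']
```

Unlike a black-box model, a reviewer can replay the trail line by line, which is the kind of auditable logic chain the panel argued critical health deployments need, whatever model ultimately produces the score.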
Wassim advocated shifting from generic data collection to domain-specific, use-case-driven pipelines, arguing that such focus improves reliability for applications in agriculture, education and health [233-238]. He also noted the importance of speech technologies for low-resource languages [416-418].
Partnerships, co-creation and open resources
All speakers underscored the necessity of multi-stakeholder collaboration. Illango Patchamuthu of the World Bank described AI as a means to reduce poverty, insisting that simple, low-resource models are easier to scale and must be replicated through clear KPIs and trust-building with NGOs, governments and local communities [111-124][115-124]. He announced an open-access AI use-case repository containing about 100 curated examples in health, education, agriculture and job creation, hosted on the World Bank platform [258-262][268-272]. Illango explicitly stated that "small AI is not inferior, it is not second class" [411-413]. He added that once legal issues are resolved, anyone will be able to submit use cases to the repository, subject to filtering [414-415]. Aisha echoed this collaborative ethos, noting that the African voice dataset was collected with partners across the continent and that open-weight models such as Gemma enable edge deployment [60-65][144-145]. Microsoft's language initiatives similarly rely on community-driven data collection through the Masakani African Languages Hub [210-218][219-226].
Scalability and development impact
Illango highlighted the importance of turning pilots into plug-and-play solutions that can expand from a single village to larger regions, citing ongoing projects in Uttar Pradesh and Maharashtra that aim to improve agricultural productivity, market access and credit [117-119]. He positioned AI as a catalyst for job creation, arguing that small AI should augment, not replace, employment, and that building digital literacy, up-skilling and STEM capacity is essential for emerging economies [350-353][244-250].
Audience interactions
During the Q&A, Irish Kumar asked how small AI can support youth, agriculture and renewable energy. Illango responded that digital-literacy programmes, up-skilling initiatives, and sector-specific pilots in India are already being rolled out to empower young people and promote sustainable agriculture and clean-energy solutions [400-402]. Selena questioned the technical feasibility of using open-source, open-weight LLMs for low-resource languages. Wassim explained that selecting a strong multilingual base model, augmenting it with monolingual or bilingual data, and leveraging speech-to-text/text-to-speech pipelines are key to achieving good performance [403-405]. Dr Ravi Singh raised the "AI wars" concern. Alpan affirmed that healthy competition drives innovation, Illango noted that three billion people remain offline, leaving ample space for diverse AI approaches, and Wassim emphasized that collective effort across sectors is essential to avoid fragmented development [406-408].
Future outlook and policy considerations
Rawal concluded that competition among platforms is healthy and not a zero-sum game; the "winner" will be the solution that best fits the user's context [384-393][386-392]. Illango reinforced this view, noting the large offline population as an opportunity for inclusive AI [394-398]. The panel collectively agreed that small AI is not a second-class technology; when responsibly designed, it can fast-track development outcomes and complement larger foundation models [120-124][15-19].
Action items and unresolved issues
– The World Bank will maintain and expand the open-access AI repository, and will open submissions to external contributors once legal clearance is obtained [258-262][414-415].
– Microsoft will roll out the Lingua Africa initiative to fund domain-specific data collection for African languages [219-228].
– Google Research Africa will keep releasing open-weight models and multilingual voice datasets, supporting edge deployment [144-145].
– SPARO and Alert California will remain open-source for global adoption [84-88][96-97].
– All participants committed to prioritising domain-specific data, co-creation with local partners and scaling pilots into reusable modules [233-238][115-124][259-262].
Remaining challenges include achieving near-zero error rates and verifiable audit trails for critical health applications, establishing robust benchmarks and safety evaluations for low-resource languages, ensuring affordable edge hardware for the poorest populations, and defining concrete digital-literacy and up-skilling programmes at scale. These issues were identified as priorities for future collaboration and research.
The discussion demonstrated strong consensus that lightweight, context-aware AI, built through open-source practices and local partnerships, can deliver meaningful social impact while complementing, rather than competing with, large foundation models. The panel's insights chart a clear pathway toward inclusive, trustworthy AI deployment in underserved regions. Alpan invited Neha Butts to hand out mementos and requested a group photo to close the event [419].
Please, I would request you to take your seat on the panel. Wassim Hamidouche, who's a principal research scientist at Microsoft's AI for Good Lab, specializing in computer vision, NLP, and multimodal AI with a focus on low-resource languages. Requesting you to please take your seat. Illango, who's a World Bank Group Director of Strategy and Operations in the Digital and AI Vice Presidency, also serving as Acting Director for Data and AI. Requesting you to please join the panel. Thank you. Aisha Walcott, who is a senior staff research scientist and head of Google Research Africa, focused on AI development addressing the continent's most pressing challenges. She holds a PhD in electrical engineering and computer science and holds leadership roles in the IEEE Robotics and Automation Society.
Requesting you to please join the panel. Antoine Tesnière, who's a French professor of medicine and entrepreneur, specializing in health innovation and crisis management, and an anesthesiologist at the Georges Pompidou European Hospital. He co-founded ILEMENTS, coordinated France's national COVID response, and since 2021 has served as Director of the Paris-Saint-Denis Campus. Thank you so much for being here. Requesting you to join the panel. And Dr. Alpan Rawal, who's Chief AI/ML Scientist at Wadwani AI, will be moderating today's session. Alpan, requesting. Thank you, handing it over to you.
Yes, thank you everyone for coming. Requesting those at the back, if you could close the door so that we can reduce the noise a little bit. It's full, okay, great. Well, if you could just calm down a bit and settle down, thank you. Welcome to all our esteemed panelists. The topic of our panel, as you know, is small AI for big social impact. I'd like to deeply thank our panelists for making it all the way to the summit and to this panel. So what do we mean by small AI? I think different people have different definitions, and we are open to how each panelist chooses to interpret small AI. When we at Wadwani AI brainstormed about this panel, we thought it would reflect in some ways the ethos of our own work: making models that are data-efficient, that are cheap to run, that sit on the edge, and most importantly are meaningful to the communities that we serve, which are underserved communities, mostly in rural India.
But it's increasingly clear that small AI means a lot more, and I see a lot of people talking about small AI at the summit. More generally, I think it encapsulates any AI that meaningfully impacts individuals while taking into account and respecting their very local context, rather than providing generic outputs. So anything like that could rightly be called small AI, and we're going to hear from our panelists about their experiences with AI models like that. So with that small introduction, let's now, without further ado, speak to our panelists. Can you hear me at the back? Yeah, okay, so we can start. I have a common question for every panelist. Each of you represents a different and important aspect of AI work that's happening outside of the mainstream excitement that focuses on large foundation models for a primarily Global North audience.
Can you tell us briefly about your organization's work and perhaps your thoughts on non-foundation AI models in general? Maybe we can start with Zameer.
Thanks, Alpan. We really see the opportunity for AI to reduce inequality, and our starting point with AI tools is really: does this work, for whom, where, and at what scale? Those are some of the departing points for us. So we look beyond the model against a benchmark: how is this going to work in a district hospital in Telangana, or for a smallholder farmer in Zambia, or a classroom in rural Senegal? Part of what we've in some ways got caught up with is the performance of the model on its own, and we've forgotten how this fits into the lives and the context that it operates in.
And in doing so, part of what we need to think about is: who's designing the model, and what's it designed for? I was thinking about the traffic that we've been experiencing the last few days in Delhi, and I thought to myself, would anyone, given the traffic here, design something as big as an aeroplane to try and get across the city? No. I think we would design something that's a lot smaller, faster, sharper, cost-effective, and gets us from point A to point B.
I think that’s a great analogy. Aisha, can you tell us a bit about your work at Google Africa?
Yes, thank you. So I lead our Google Research Africa team. We have two sites, one in Ghana and one in Kenya, representing East and West. But the work that we do is essentially from Africa, for Africa and the world. Much of our work is scaling from the uniqueness of the continent; it turns out that a lot of the challenges are similar, definitely across the Global South and generally worldwide. So, leaning into the next part of your question and thinking about how we approach this type of work: it's very much problem-first. I always say, if there's a red button that you can press and it's a one or a zero, just build the red button.
We don't need to bring in AI or technology. So it's really important to be very thoughtful about the type of problem. Coming from Google Research, we want to leverage our compute, our AI expertise and capabilities, and then our mandate, which is societal impact at scale, to think about the types of problems that we work on. I'll give two good examples of those problems. One is around weather nowcasting, which we launched last year across the continent of Africa. Having much more accurate weather forecasts is absolutely essential, given that much of the continent, as well as India, relies on agriculture for labor, and we are primarily rain-fed, 95% in Africa.
And at the same time, on the technical challenges side: we know that in North America and in Europe there are about 300 or so weather radar stations, and in Africa there are only 37, I believe, even though you can fit both North America and Europe inside Africa. So when you think about that, you have to innovate. Those constraints of the environment that you were alluding to in the intro are part of the motivation for having a research team on the continent. That was one way that we innovated and made solutions available to the continent. The other one is the complementary side, which is working with the ecosystem, working with partners in Africa, including Macquarie University, Digital Umaga, and Uganda, around African languages. And we just released a dataset of 21, now 27, voice languages, given that Africa has 2,000 or so languages; this is the start. Most importantly, it's partnership-led and partnership-driven, and because it's voice, it is about accessibility and about reaching those rural villages as well, and enabling the ecosystem to build solutions from there, whether they're smaller models or larger models. So making that type of data open and available is another way that we are leveraging this notion of smaller AI.
Thank you, great, great insights. Zamir, can you tell us about your work at Microsoft?
Yeah, thank you. Sorry, yes, Wassim. Thank you for the invitation; it's a great pleasure to be here today. So first, what is the AI for Good Lab? The AI for Good Lab is the philanthropic research arm of Microsoft. We are employing advanced AI technology to solve real-world problems with real societal impact. This is very important. And how our team and researchers work: we closely collaborate with NGOs, governments, nonprofit organizations, and local communities around the world, and together we are building AI solutions across multiple domains. We are interested in agriculture, food security, healthcare, education, culture, and so on. So that is the AI for Good Lab at Microsoft. Now, I am a scientist, so I would like to give you two concrete examples where we use small AI; they are also two global solutions to tackle global challenges.
They are valid for both the Global North and the Global South. The first project, in biodiversity, is called SPARO, for Solar-Powered Acoustic and Remote Recording Observation. It is an AI-powered open-source solution designed to track and monitor biodiversity in the most remote and hard-to-reach regions in the world. SPARO uses camera traps with an HAA model that enables detection of animal species, and these observations are then transmitted using wireless connectivity and satellite where we don't have infrastructure to transmit this information. The SPARO solution is already deployed in many countries around the world; I can cite Colombia, Peru, the United States, and Tanzania. It really enables practitioners and researchers to understand the species present in an ecosystem
at scale, supporting more timely and informed decisions to protect biodiversity. The second project focuses on wildfires. As you know, wildfires have become a real global threat, with devastating impact on lives, communities, ecosystems and even economies. Around the world, wildfires are increasing in both frequency and intensity, making early detection and rapid response more critical than ever. Through Alert California, we are addressing this challenge using AI. So what is Alert California? Alert California is a network of 1,300 cameras operating 24/7, and we are developing AI tools that run on top of this infrastructure, enabling early fire detection; this allows emergency responders to act quickly and stop fires before they spread.
So SPARO and Alert California, as I said, are two global solutions for global problems that can be deployed anywhere around the world, and we are providing them open source so that anyone can embrace and deploy them. Thank you.
Thank you. Antoine, I think you're the only member of this group that doesn't work in the Global South. So could you tell us a bit about the work in Paris, and how you're using, maybe, non-foundation models? Right, well,
thank you, Alpan, for this invitation, and I'm happy to be the outsider of the panel. I'm working in health care, and I'm leading a new kind of innovation ecosystem for health care where we gather researchers, doctors, patients, startups and industry together, as well as institutions. The idea is to really create a whole community of innovation and engage in the use of data and artificial intelligence. Healthcare is probably one of the fields where AI has a long-standing history. The world discovered AI with the rise of GenAI, but there have been a number of small AI models designed for a long time, and this is why we already have a number of validated tools that we can use in healthcare, answering the question not only "does it work?" but "is it reliable?", which is very important for our patients. So before we have proof of the efficiency of LLMs in the medical field, which is not fully clear yet, we use machine learning tools, which are actually small AI models, in very specific areas. What actually works really nicely today is image analysis, or pattern analysis: you can think of radiology, for example; chest X-rays or fractures in the emergency room are fully analyzed by small AI models and small AI tools that are easily deployable on small computers. You can also think of picture analysis in dermatology, in ophthalmology, etc. So these are very concrete examples of already-validated small AI models in healthcare that are used on a daily basis, at least in France and Europe. We'll get back in the discussion to how we need data efficacy on this topic, but it's really important to understand that these models are already deployable, and some of them can actually work offline, which is really important in some environments. Thank you.
Illango, from a World Bank perspective, what is your view on these types of models? Thank
you very much for the opportunity to be here. Coming right at the end, I don't know what new things I can say, but I'll basically reinforce the messages that have been shared. For us at the World Bank, we see AI as a means to an end, and very much our AI agenda is shaped by the mission of the World Bank, which is to reduce poverty and grow prosperity in the world. And when you take that lens and apply it, we have to keep it simple. Not all countries have the compute power, the electricity, the talent, and the data. Therefore, taking tested small AI applications to scale and replicating them around the world is something that we see as a mission priority.
So in that respect, what Wadwani AI is doing here is pioneering, and what I've heard this morning from Dr. Sunil Wadwani himself about what you're doing in TB and in out-of-school children is all tremendous; it has great potential for application. Often what happens is we focus a lot on pilots, and then, once the sheen wears off, people forget the pilot. What we need to do, and what we are doing at the World Bank, is to take those pilots, whether in health, education or agriculture, in the small AI setting: it works in rural communities, offline, where data is not that rich and talent is not readily available, and it does not require a lot of electricity; it's plug and play. Then how do we get the right KPIs, which allow us to go from a community of 50 villages to a larger population center, and see how best we can help them, say in agriculture, to improve productivity with better inputs? We are now working in UP in partnership with Google, and we are doing the same thing in Maharashtra.
Household income rises, the inputs get better, and farmers can access markets and agricultural credit. It's similar in health and education, where we are seeing great practices in Africa, in Ghana, in Kenya. So how do we take these models and replicate them? I'd like to assure everybody, because people sometimes think small AI is inferior: no, it is not second class. Small AI can solve problems; it's a means to an end. And it can fast-track development outcomes. We've known the problems with the Millennium Development Goals and with the Sustainable Development Goals, and many countries are lagging behind. This is an opportunity: if this technology can be put to use in the right context in the right way, I think we can achieve development faster.
Thank you. That's really interesting. Ayesha, I'm going to come back to you and ask more specifically: how does the work that Google Research does in Africa impact rural communities? And how does one bring the benefits of technologies like these big foundation models to devices that may only have patchy internet and very little data?
Thanks, that was a loaded question, with two parts there. First and foremost, in general, we approach these challenges with humility and by relating. I always start with this: I'm a scientist, but I'm also a mother, right? That's a thread I've been following for a long time, and a thread that binds so many of us. When you think of that, you realize that a lot of the solutions we're building are not for "them"; they're for us. I'm using the same health systems that you all are developing interesting tools and models for, and we have many of the challenges around weather as well. So the first thing is to have that base human layer as we think about our work, and to connect with those communities, whether they're rural or urban, right?
A lot of the work that we do looks at large populations. If you think about agriculture, for example, which employs a large part of the labor force, people are part of that value chain in many different ways, whether they're actually doing the growing, providing the inputs, or making the decisions and managing the risks along the way. So getting out into the community is a very important part of the work that we do, to connect with those people. And then, coming home as Google Research, we ask: where is our unique value proposition? We're not necessarily going to solve the whole problem alone; usually it requires behavior change, policy, and many pieces of the puzzle. How do we best fit our role? We do this in co-creation with partnerships. So that's the second layer of fabric in how we reach these rural communities. And then on the other side…
Do you have an example of that?
Oh yeah, absolutely. I'll give two examples. Take the languages work I was talking about: the name comes from a Senegalese (Wolof) word that means "to speak". And the way we wanted to create this was with the community.
So if you have partners who are across the continent, let them be part of the process of collecting the data and understanding their language and local context, to get these high-quality data sets. Being partnership-driven, and knowing our role and our place, was what made that very successful. And the last point, on the second question you threw in there: our open-weight models, Gemma, are made for a lot of these solutions that are closer to the edge. We have nano models that can run on your laptops and tablets and so forth.
Do you actually use them in Africa?
Oh, yes. Yes, yes, yes, yes.
Great. Thank you. The next question is for you, Zameer. Much of your work at the foundation is about reducing inequities through promoting safe and responsible use of AI. So what role, in your view, do small and custom AI models have to play in this? And if you can provide examples, that would be great.
Sure, Alpan. I do want to touch a little on the issue of reliability, because my colleague here spoke about it, and I think it's a critical issue. I'm sorry if I repeat this example from one of my previous panels; it's a bad idea because I'm going to be on a plane later, but I asked the audience anyway. If I said to you that the plane has a high probability of leaving Delhi and landing safely wherever it's going, and that probability was 90%, would you get on that flight? 95%? 99%? No? No. I did have one guy who thought about it, and then he said no.
And the point is that I think we've got to work towards models that have zero error. So much so that what we are trying to wrap our heads around is whether there is a concept of verifiable AI, one that shifts the narrative from a black box to a glass box. It actually exposes the logic: for a particular set of inputs, you can follow the logic chain to a set of outputs that you can track, audit, and see is repeatable, and you can prevent some of the fundamental errors that we are starting to see. And I want to go back to a very real example, Alpan, because when I think about small models, I come back to the user: the community health worker who tried to help a mother.
One of our grantees shared the very personal story of a first-time mother who presented at six months pregnant, saying her hands and feet had started to swell. The community health worker looked and said: you're pregnant, this is normal. Four weeks later, she started having a headache and blurred vision. Colleagues will know where the story goes. Unfortunately, that mother had severe gestational proteinuric hypertension. It was missed, and the mother and the baby didn't make it. But what inspired our grantee was the thought that if the community health worker had had a small model that worked on her device, a low-cost smartphone with patchy internet, built small enough to help her make good decisions at that point of care,
today we would be sitting with a very different outcome. And so I think small models present us with exactly those opportunities.
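As a thought experiment, the "glass-box" idea described above can be sketched in a few lines of Python: every rule that fires is written to an auditable trace, so the logic chain from inputs to output can be followed and repeated. The danger signs and the two-sign referral threshold below are illustrative assumptions for the sketch, not any grantee's actual clinical tool.

```python
from dataclasses import dataclass, field

# Illustrative danger signs only; these rules are assumptions for the
# sketch, not a validated clinical protocol.
RULES = [
    ("swelling_hands_feet", "Swelling of hands and feet"),
    ("headache", "Persistent headache"),
    ("blurred_vision", "Blurred vision"),
]

@dataclass
class Assessment:
    findings: dict                                # symptom name -> bool
    trace: list = field(default_factory=list)     # auditable logic chain
    referral: bool = False

def assess(findings):
    """Glass-box check: every fired rule is recorded in the trace."""
    a = Assessment(findings=findings)
    fired = []
    for key, label in RULES:
        if findings.get(key, False):
            fired.append(key)
            a.trace.append(f"rule fired: {label}")
    # Illustrative referral rule: two or more danger signs -> urgent referral.
    if len(fired) >= 2:
        a.referral = True
        a.trace.append(f"referral: {len(fired)} danger signs present")
    else:
        a.trace.append("no referral: fewer than 2 danger signs")
    return a

# The case from the story: swelling alone at first,
# then swelling plus headache and blurred vision.
first_visit = assess({"swelling_hands_feet": True})
follow_up = assess({"swelling_hands_feet": True,
                    "headache": True,
                    "blurred_vision": True})
print(first_visit.referral, follow_up.referral)  # False True
```

Because the logic is explicit, the same inputs always produce the same trace, which is what makes the behavior auditable in a way a black-box model is not.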
Very interesting, very good points. Wassim, you spoke in a general sense about the research done at the AI for Good Lab at Microsoft. Are there specific examples from your work where you see the benefits of building domain-specific models to realize impact? And are there research lessons we can take away from this? I think it would be good for the audience and for us to understand which research directions for the future can come out of this work.
models from 4 billion to 15 billion parameters. Once we select the best LLM for a target language, we apply all these recipes to boost its performance for these low-resource languages. But I want to come back to the challenges we face with low-resource languages. When we train foundation models, we train them on internet data, and more than 60% of internet data is English, followed by a few high-resource languages like French, Mandarin, Portuguese and so on. Low-resource languages, even though there are more than 7,000 of them, represent only a tiny portion of internet data. That's the first challenge. The second is benchmarks: when we build LLMs, we evaluate their performance on benchmarks.
And we have seen that there is at least one benchmark for only about 300 languages, which means more than 6,000 languages don't have even one benchmark. And even among those 300, most are just translations from English into the low-resource language; they have nothing to do with the culture and context of these languages. The third challenge is the performance gap: even for frontier models, there is a gap between high-resource and low-resource languages. The fourth is safety. When we build LLMs, we usually do safety alignment with reinforcement learning, but this is mainly done in English and some high-resource languages.
Now, when we build an LLM that becomes very strong in a low-resource language, it raises other issues with safety: we have to evaluate the model for safety in that language, and do the alignment, including reinforcement learning, in the target language as well. In this work, we addressed some of these issues.
We targeted three pilot languages: Inuktitut, an indigenous language spoken in the north of Canada; Chichewa, in Malawi in Africa; and Māori, in New Zealand. Why these three languages? Because we have access to local communities to help us get data. So we gathered data from these communities, then used continual pre-training and instruction fine-tuning to boost the performance of open-weight LLMs, and we were able to gain 12% in performance, closing the gap with English. So what's next? We are trying to expand this to more languages. We have a collaboration, for example, with Paraguay to develop an LLM for Guaraní, and we want to extend this to other languages.
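To make the two-stage pipeline concrete, here is a minimal configuration sketch of such an adaptation recipe. Every name and value in it (base model, data sources, learning rates) is a hypothetical placeholder for illustration, not Microsoft's actual setup.

```python
# Hypothetical two-stage adaptation recipe for a low-resource language.
# All names and values below are illustrative assumptions.
RECIPE = {
    "base_model": "an-open-weight-multilingual-llm",  # e.g. 4B-15B parameters
    "stage_1": {  # continual pre-training on raw target-language text
        "objective": "causal_lm",
        "data": ["monolingual_target", "bilingual_en_target"],
        "learning_rate": 1e-5,
    },
    "stage_2": {  # instruction fine-tuning on curated/translated prompts
        "objective": "supervised_fine_tuning",
        "data": ["instructions_target"],
        "learning_rate": 5e-6,
    },
    "evaluation": ["target_language_benchmark", "english_regression_check"],
    "safety": "alignment data in the target language, not only English",
}

def stages(recipe):
    """Return the ordered training stages named in the recipe."""
    return [k for k in recipe if k.startswith("stage_")]

print(stages(RECIPE))  # ['stage_1', 'stage_2']
```

The point of the structure is the ordering: raw-text adaptation first, instruction-following second, with safety evaluation carried out in the target language itself.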
But most importantly, we have launched an initiative to help communities get the best out of their languages. We have an initiative called Lingua Europe, which funds data collection for 10 languages in Europe. It was released last September and was very successful: we received many applications, 10 were selected, and we will now start working with them. It was so successful that we are extending the initiative to Africa through Lingua Africa, which was announced just today at the AI Summit. We will be allocating $5.5 million to support data collection for African languages, in partnership with the Gates Foundation, Microsoft AI for Good, and FCDO.
And this initiative will be led by the Masakhane African Languages Hub.
Sorry, just to follow up. For people working on these small language models, or domain-specific language models, say for the healthcare domain or another domain: are there strategies you can recommend that they should pursue?
Yes, this is very important, and it is also related to the call for Lingua Africa. Many efforts have been made in the past to collect general-purpose data, and I think we now have enough of it. But when we evaluate the performance of these AI tools on specific applications, for example healthcare, education or agriculture, they don't work as expected. So instead of focusing on general data collection, we want to focus on domain-specific, application-specific, use-case-specific data collection, and on building AI tools for specific domains. Then, at least for these reliability issues, we will have a model that performs well in the target low-resource language, in the application where it is deployed, and that can be used by local communities.
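One practical consequence of this shift is controlling the training-data mix rather than simply pooling everything. A minimal sketch of a weighted mixture sampler, where the corpus names and weights are illustrative assumptions:

```python
import random

# Illustrative corpora and sampling weights: upweight domain-specific
# data relative to general web text. Names, contents and weights are
# assumptions for the sketch.
CORPORA = {
    "general_web": (["general text sample"], 0.2),
    "health_domain": (["clinic dialogue sample"], 0.5),
    "bilingual_pairs": (["parallel sentence pair"], 0.3),
}

def sample_batch(corpora, batch_size, seed=0):
    """Sample a training batch of (corpus, document) pairs according
    to the mixture weights."""
    rng = random.Random(seed)
    names = list(corpora)
    weights = [corpora[n][1] for n in names]
    batch = []
    for _ in range(batch_size):
        name = rng.choices(names, weights=weights, k=1)[0]
        docs = corpora[name][0]
        batch.append((name, rng.choice(docs)))
    return batch

batch = sample_batch(CORPORA, 1000)
counts = {}
for name, _ in batch:
    counts[name] = counts.get(name, 0) + 1
# Over a large batch the counts approach the 20/50/30 split.
print(counts)
```

In a real pipeline the weights would be tuned so the domain data dominates without the model forgetting general language ability.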
This is really a priority for the next…
Thank you. Ilango, let me come to you. You have vast experience in international development. Can you give us a view of the future as it relates to using AI for development goals? Do you think AI will have a meaningful role to play in the transition of emerging economies to advanced economies?
I do think the prospects are good, and our North Star is job creation. We need to support countries so that AI doesn't automate jobs away, but actually supports the creation and enhancement of jobs. This is where small AI becomes imperative, unlike the foundation models, which will have different implications. The second question is how we are going to go about it. Whether it's large language models or small AI solutions, you need an ecosystem, and that ecosystem needs to be powered by the local private sector. And what we often see, AI revolution or not, is that small enterprises, whether in the SME space or larger, struggle for a variety of reasons.
If countries don't reform business processes and make permitting easier, which AI can help with, AI is not going to play an effective role. So there are some fundamental reforms needed, and this is where foundational investment in DPI, digital public infrastructure, has to happen: to create that ecosystem, and the ability of that ecosystem to work with the private sector and local communities to create jobs. This is what we are seeing everywhere. Where that happens, and here too, you see this whole vibrancy around the startup ecosystem. Why? Because young people see opportunities, and this momentum can build anywhere in the world.
Whether in India, the rest of South Asia, Africa, Latin America, or even the Pacific region. So how do you go about it? What we did was join hands with a number of multilateral development banks, and over the last couple of days we launched this small AI use case repository: a good 100 cases explaining, in health, education, agriculture and job creation, how AI can be leveraged to the maximum advantage of communities, in terms of service delivery, productivity gains and household income gains. All this eventually leads to better jobs, better employment and better income prospects. So we are very upbeat about small AI, but I do take the point about community trust.
Once it fails, the community is not going to believe in it. So it's very important that for whatever we put in place, we work with partners, including the MDBs, Microsoft, Google, Gates and everyone else, to ensure that whatever we leave behind in small communities is trustworthy and reliable, and doesn't end up hallucinating and giving the farmer something that leaves them struggling with other challenges. Thank you.
So this report you mentioned, is it open access?
The World Bank is hosting it; it's called the AI Repository. Just type that in and you'll be able to access it. It has 100 use cases, and we'll continue to update it. Once we sort out some legal issues, we'll also allow anyone to submit their use case to the repository; obviously we'll go through a filtering process to ensure the right ones are there.
Great, thank you. Antoine, coming to you. You have an organization that uses AI to advance health outcomes through research and commercialization. Are data-efficient and hardware-integrated AI models important for the work happening at PariSanté? And do you see these models as potentially being deployed in low- and middle-income countries like India?
Yes, clearly they are very important for us, for different reasons. We'll come back to scalability and use in low- and middle-income countries, but first, the reality in healthcare is that data is scarce and siloed, so you need to work with what you have. Sometimes it's a large data set, sometimes a very small one, but you need tools that allow you to build relevant algorithms and relevant analyses on small data sets. In the meantime, of course, we're building larger data sets: sometimes at the level of one department in one hospital, sometimes one hospital, sometimes a group of hospitals.
In the end, what we are reaching in Europe is the constitution of a large European Health Data Space: 450 million citizens joining their health data in a digital public infrastructure organized across 27 countries, which will be a world first. But in the meantime, we need to work with the reality of scarce data. Second, not only is data limited, but when you enter the new revolution in medicine, what we call precision or personalized medicine, you need very efficient algorithms, because they need to adapt to one person and not only to a whole population; you need to take that into account when building the algorithm. The last thing is that you also have to work with what exists in healthcare systems, which is often not supercomputers or high computing power on remote servers.
When you're in a patient's room or working in a hospital, it's a very simple computer, and you need efficient algorithms and tools that can run on that kind of machine. And of course you go all the way down to a smartphone at some point if you go into remote areas. This is why we work on this kind of approach: we do research on LLMs and large computing power, but we also work on small data and very efficient algorithms.
Can you give examples?
Well, yes. I already gave some examples from radiology: we have a radiology algorithm running on a small computer. And coming back to your example, which I think is really important, it gives me the opportunity to state two very important facts. One is that the AI we use provides information; it does not make decisions in healthcare. Of course we target a high level of reliability, but in the end it's a human decision, and this is very important, I think. The second is that we've been comparing the performance of the algorithms we design with existing performance. Of course you aim for 99.999% and so on, but what very few people actually know is that the current performance of what we do today is not 99.999%. Most of the time, and I won't name the numbers, the algorithm is actually better than what we have.
And this is really important in your example: is it good enough compared to what we can actually do at the moment? I think it's particularly important in low- and middle-income countries, because a very simple solution, offline LLMs and so on, can solve many, many issues.
Alpan, can I pick up quickly? I think it's really important, and I'm actually going to name the number, if that's okay. A really important World Bank study from a few years back showed that, on a set of five very simple conditions, diagnostic accuracy was 50% across eight countries. Fifty percent. And what illnesses are we talking about? Acute diarrhea, upper respiratory tract infection, maternal hypertension. The point is that none of us would be happy with 50%, the equivalent of tossing a coin and saying that's okay. So I completely understand that today there's a big gap between what the models can offer and current practice, and I think the question of whether the models perform better than the average clinician is largely settled.
Sorry, I can't resist the follow-up question. You often find that the average accuracy of models is far better, but models seem to fail more unpredictably than humans; at least that's the understanding in health care. Do you agree with that, or do you think it's not true? Anyone who wants to answer.
Well, I think we need another hour to discuss this. What you say is absolutely true, but you need to look at every pathology and every symptom you're considering, because diagnostic performance can be a little higher in certain places and situations, and a little lower in others. But we come back to the same point: what we are building is actually better than what we are able to do at the moment. And what the scientific literature shows is that the combination of the algorithm and natural intelligence, I would say the doctor, is actually the best tool so far. So, coming back to your question of how we deploy this in low- and middle-income countries, I think it's really important.
We need models that are able to run on small devices and to run offline. Sometimes it's a very limited set of data and a very limited set of algorithms. We were actually discussing in Paris examples of an LLM providing answers to the 10 most important healthcare questions in low- and middle-income countries; that doesn't need an online LLM with massive computing power. So that's the first point: edge-native AI. We also need data-efficient learning systems, because most of the time in low- and middle-income countries we have a limited amount of data available. That's what I discussed earlier.
We have a lot of data in India, but it tends to be noisy.
Yes, but we need the time to actually bring the data together, clean it, and prepare it for robust analysis. I know you are leapfrogging and going very fast, but by the time you scale, this will create real analytical power. And then we also need to understand how to couple hardware with software and algorithms designed for reduced cost, so they can scale very easily. Thank you.
Great, that was fantastic insight. I'd now like to give some time to the audience to ask questions of our panelists. Yes, please.
Thank you very much. I'm Irish Kumar from the CSC Winnie Ocean Center, working on solar energy, and I'm from Rajasthan. My question is for the World Bank. In Rajasthan, 60% of the population is in rural areas and depends entirely on agriculture, and 40% of the population is youth. How is the Bank increasing AI capacity among the youth and in the agricultural domain, so that there is more productivity, a stronger economy, and more youth inclusion in the climate change and renewable energy domains?
Thank you for that question, which I think is a foundational one to ask any policymaker about the kind of AI strategy or implementation you want in any geography. The first thing is that you need digital literacy. Second, you need to upskill and reskill everybody on AI-related capabilities. Third is improving STEM capability in schools and universities, so you create a future cadre of people who can work on these topics. And then the sectors you mentioned, which are our priorities, agriculture, health, and education, are where we see the greatest potential for small AI. On Rajasthan specifically, I don't have information right now, but I'm happy to look into it and share that with you.
But certainly we are working across different states in India, as we do elsewhere in the world, and we prioritize literacy, skilling, STEM, and applications in priority sectors like agriculture, health and education.
But having said that, I also want to make one point about devices that can do computing: devices are expensive for the bottom 40%.
Hi, my name is Selena. I'm the CEO and co-founder of Zindi; we run competitions to develop models, especially in Africa. I had a question for Wassim about the technical implications, the size implications, and the practicality of using open-source, open-weight large language models to train very specific, domain-specific, under-resourced language models. How have you seen that play out?
Yeah, what we have seen is that the selection of the base model is very important. Because the reality is that we cannot train an LLM from scratch, whether a small or a large language model, for low-resource languages: we don't have the 15 trillion tokens to train on. So it is very important to select the best multilingual model, one with the right tokenizer, that can be adapted to many low-resource languages. Then we get the data we need. And what we have also seen is that monolingual data helps, but bilingual data can help too, and translating English into the low-resource language can also help boost performance.
So in our paper we provide all three of these recipes to follow to get the best boost in performance. What I would like to add is that, for all these low-resource languages, text cannot solve them all; many of these languages will be served through speech. ASR models, speech-to-text and text-to-speech will play a very large role in unlocking these low-resource languages, in addition to LLMs that can operate in the low-resource language or in English.
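The "right tokenizer" point can be made concrete with tokenizer fertility: the average number of subword tokens per word in the target language. A tokenizer that never saw the language tends to fall back toward characters, inflating fertility and wasting context. A small sketch, using two toy stand-in tokenizers that are assumptions for illustration:

```python
def fertility(tokenize, sentences):
    """Average subword tokens per whitespace word; lower means the
    tokenizer represents the language more compactly."""
    total_tokens = total_words = 0
    for s in sentences:
        total_words += len(s.split())
        total_tokens += len(tokenize(s))
    return total_tokens / total_words

# Two toy stand-in tokenizers (assumptions for the sketch): a
# "well-matched" one that splits on whitespace, and a "poorly matched"
# one that falls back to characters, as real subword tokenizers tend to
# do for languages absent from their training data.
word_level = lambda s: s.split()
char_fallback = lambda s: [c for c in s if not c.isspace()]

sample = ["moni muli bwanji", "zikomo kwambiri"]  # Chichewa greetings
print(fertility(word_level, sample), fertility(char_fallback, sample))
```

Comparing fertility across candidate base models on a small target-language sample is a cheap first filter before any training is done.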
I think we have time for one really short question.
Hi, this is Dr. Ravi Singh. I’m from Miami, and it was a great panel, so a lot of great insights. It’s for Google, Microsoft, and the World Bank. Here’s the scenario. If there’s compliance across all of these platforms, which platform will win the AI wars?
That's a loaded question. Anyone want to answer? I'm not. So first of all, I think healthy competition is how we've been able to develop incredible technologies over time. The competition is healthy, and this is great; I don't see it as a zero-sum game. There are too many people on the planet and too many challenging, unique problems that need to be solved. So if we're making AI useful and bringing joy and happiness for all, which I just love as the theme here, then it's not necessarily about who wins or which platform. It's about what is relevant to the context of the end user. So take it back to a more human, personal perspective.
That’s my thinking.
First, three billion people are offline, so there is space for everybody to compete. Second, in health sector alone, three and a half billion people don’t have access to healthcare, so there is enough scope for all kinds of applications.
I just want to add: many people have been asking me whether all these efforts we are making for language are enough to make these models as good as they are in English. I would say maybe not, but without all these efforts we would never reach that objective. All these collective efforts together will get us there.
Thank you so much, everyone. I would now like to invite Neha Butts, Associate Director, Human Resources, to hand out the mementos to all our speakers, and we will take one group photo. Requesting the speakers to gather for one group photo, please. Thank you, everyone, for joining.
“It’s what is relevant to the context of the end user”<a href=”https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change?diplo-deep-link-text=Thank+you….
EventIoanna Ntinou: I think that my question will be, as a researcher, if we focus so much on having smaller models, if we actually neglect all the progress that has been done so far with the large languag…
EventAppropriate technology solutions for developing countries Zutt advocates for a focus on ‘small AI’ rather than large-scale AI solutions, emphasizing practical applications that can work in environmen…
Event-Hyperlocal and Multi-Modal Forecasting: There was significant discussion about developing AI systems capable of providing highly localized predictions (down to 1km resolution or finer) by combining m…
Eventtry to invest in the township planning and the implementation. Also, we can have a water supply road project that can be connected to the industrial parks so that private sector can invest in the digi…
Event_reporting– 3.1 Encourage public-private sectors competition, promote entrepreneurship and innovation in the fields of Knowledge Economy development, promote industrial innovations, and set up lega…
Resource‘Foster the use of AI in vital developmental sectors using partnerships with local beneficiaries and local or foreign tech partners to ensure knowledge transfer while addressing Egypt’s development ne…
ResourceAudience: Good evening, everyone. Is it? Okay. My name is Lydia Lamisa Akamvareba from Ghana. I’m looking at the team up there. AI readiness in Africa in a shifting geopolitical landscape. Yes, it’s t…
EventLarge language models can be run on personal laptops
EventâThe moderator defines small AI as technology that must be meaningful for the endâuserâs local context rather than generic solutions.â
The knowledge base states the moderator defines small AI based on relevance to the end-userâs context [S1].
âGoogle Research Africa released an open multilingual voice dataset covering 27 African languages out of an estimated 2âŻ000.â
The knowledge base notes a released dataset of 27 voice languages for Africa, referencing the continentâs roughly 2âŻ000 languages [S6].
âGoogle Research Africa built a continentâwide weatherâforecasting system that compensates for Africaâs severe radar shortage â only 37 stations compared with roughly 300 in North America and Europe.â
The knowledge base discusses hyper-local, multi-modal forecasting that combines satellite, ground sensors and cameras to achieve fine-grained predictions, providing context on the technical approach to address data gaps [S20].
The panel displayed strong consensus that small AI should be lightweight, data-efficient, edge-deployable, and co-created with local stakeholders to address specific community needs. Participants agreed on the importance of domain-specific data, open-source models, transparency, and scaling pilots into reusable solutions. There was also a shared belief that small AI is not inferior but can reliably accelerate development outcomes.
High consensus across technical, ethical, and development dimensions indicates a unified vision: small, context-aware AI, built through partnerships and open practices, can play a pivotal role in achieving inclusive social and economic development.
The panel largely converged on the importance of small, context-aware AI for underserved communities, agreeing on goals such as accessibility, partnership, and scalability. The principal point of contention concerned the acceptable level of reliability for health-focused small AI, with Zameer demanding near-zero error and full auditability, while Antoine argued that current, imperfect models already provide net benefits and are suitable for deployment in low-resource settings.
Overall disagreement was low; the debate centered on a single technical nuance (reliability standards) rather than fundamental strategic differences, suggesting that consensus on the broader vision of small AI is strong, with only modest implications for implementation pathways.
The discussion was anchored by Alpan's definition of small AI, which established a shared lens for all participants. Key turning points (Zameer's traffic analogy, Aisha's radar-station disparity, the gestational-hypertension story, and Wassim's breakdown of low-resource language challenges) each introduced a new dimension: contextual relevance, data scarcity, real-world impact, and multilingual barriers, redirecting the conversation toward concrete technical and policy solutions. Illango's emphasis on scalability, replication, and job creation broadened the scope from technology to development outcomes, while Antoine's examples of existing edge AI in healthcare grounded the debate in current practice. Collectively, these comments moved the panel from abstract notions of "small AI" to actionable strategies, highlighting the necessity of data efficiency, verifiability, local partnership, and ecosystem building to achieve meaningful social impact.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.