How Small AI Solutions Are Creating Big Social Change

20 Feb 2026 15:00h - 16:00h


Session at a glance: summary, key points, and speakers overview

Summary

The panel, moderated by Alpan Rawal, examined how “small AI” (data-efficient, low-cost models that run at the edge and are tailored to local contexts) can generate large social impact, especially for underserved communities in the Global South [15-19]. Zameer Brey stressed that AI’s value lies in its relevance to specific settings such as district hospitals in Telangana, smallholder farmers in Zambia, or classrooms in rural Senegal, warning against designs that ignore users’ environments [27-34]. Aisha Walcott-Bryant described Google Research Africa’s “Africa-for-Africa” approach, highlighting problem-first projects like continent-wide weather forecasting that compensates for the scarcity of radar stations [50-57] and the creation of open voice-data sets for 27 African languages to enable edge-ready models on laptops and tablets [61-65]. Wassim Hamidouche outlined Microsoft’s AI for Good Lab, citing two open-source small-AI systems: SPARO, a solar-powered acoustic sensor network for biodiversity monitoring in remote areas [83-88], and Alert California, a 1,300-camera network that detects wildfires early using on-device AI [91-97]. Antoine Tesnière explained that health care already relies on validated small-AI tools for radiology, dermatology and ophthalmology, many of which can operate offline on modest hardware and complement rather than replace clinician judgment [102-109]. Illango Patchamuthu of the World Bank framed AI as a means to reduce poverty, arguing that simple, low-resource models are easier to scale across villages, and that replicating successful pilots requires clear KPIs, trust-building and partnership with multilateral institutions [111-124][243-267].
The discussion also addressed technical challenges: low-resource languages suffer from data scarcity, limited benchmarks, performance gaps and safety-alignment issues, prompting Microsoft to target pilot languages (Inuktitut, Chichewa, Māori) and launch the “Lingua Africa” initiative with $5.5 million funding for data collection [181-210][220-228]. To improve reliability, speakers advocated for verifiable “glass-box” models, domain-specific data collection and continuous pre-training, noting that small models can achieve near-human accuracy in targeted tasks such as community health-worker decision support [153-166][233-238]. Consensus emerged that small AI is not a second-class technology; when designed responsibly and deployed with local ecosystems, it can accelerate development outcomes, create jobs and complement larger foundation models rather than compete with them [120-126][244-250]. Participants highlighted the importance of digital literacy, upskilling and STEM education to build a cadre capable of developing and maintaining small-AI solutions, especially where three billion people remain offline [350-353]. The moderator concluded that competition among platforms should be viewed as healthy and complementary, with the ultimate goal of delivering trustworthy, context-appropriate AI to end users rather than determining a single “winner” [386-393].


Key points


Major discussion points


What “small AI” means and why it matters – The moderator frames “small AI” as data-efficient, low-cost models that run at the edge and are tailored to local contexts rather than generic, large-scale foundation models [15-19]. Zameer reinforces this with a traffic analogy, arguing that solutions should be “smaller, faster, sharper, cost-effective” for the environments they serve [32-34].


Concrete small-AI projects across sectors


Google Research Africa builds continent-specific weather-forecasting tools and releases a multilingual voice dataset, emphasizing open-weight models that can run on laptops or tablets [50-55][60-65][144-145].


Microsoft AI for Good showcases SPARO (solar-powered acoustic monitoring for biodiversity) and Alert California (camera network for early wildfire detection), both open-source and deployable worldwide [82-98].


Low-resource language work at Microsoft targets languages such as Inuktitut, Chichewa and Māori, launches the “Lingua Africa” data-collection fund, and partners with the Gates Foundation to support African language models [181-190][210-218][219-228].


Healthcare small-AI in France uses validated, offline models for radiology, dermatology and ophthalmology, stressing that AI only augments clinician decisions [102-108][300-306].


The central role of partnerships, community involvement and open resources – Aisha notes that the African voice dataset was co-created with local partners and that open models enable “partnership-led” solutions [60-65]. Illango (World Bank) stresses replicating proven pilots, building an AI use-case repository, and collaborating with NGOs, governments and tech firms to scale impact [111-120][258-262]. Microsoft’s language initiatives rely on community data collection through the “Masakani African Languages Hub” [210-218][219-226].


Key technical and deployment challenges, and proposed strategies – Wassim outlines four hurdles for low-resource languages: data scarcity, lack of benchmarks, performance gaps, and safety/alignment issues [181-200]. Zameer highlights the need for “verifiable AI” that reduces black-box errors, citing a maternal-health case where a small on-device model could have saved lives [153-166][167-174]. Antoine points out limited, siloed health data and the necessity of efficient, offline algorithms that run on modest hardware [280-298]. Across the board, speakers recommend domain-specific data collection, open-weight models, and edge-native deployment to overcome these barriers [233-238][280-298].


Future outlook and policy implications for development – Illango envisions AI as a catalyst for job creation, stressing digital-literacy, up-skilling and a robust private-sector ecosystem; he also announces a publicly accessible AI use-case repository [243-262][268-272]. The moderator and panelists caution against a zero-sum “AI wars” narrative, emphasizing healthy competition and context-driven relevance [384-393][394-398].


Overall purpose / goal of the discussion


The panel was convened to explore how “small AI” (data-efficient, locally adapted models) can generate tangible social impact for underserved and rural communities, especially in the Global South. Each speaker shared organizational experiences, highlighted non-foundation-model approaches, and discussed how to scale such solutions responsibly.


Overall tone


The conversation remained professional, collaborative and optimistic, with speakers celebrating successes (e.g., open-source biodiversity tools, multilingual datasets). When addressing reliability, safety and scalability, the tone shifted to a more cautionary, problem-solving stance, underscoring the need for rigorous validation and community trust. Throughout, the dialogue stayed constructive, focusing on partnership-driven pathways rather than competition.


Speakers

Illango Patchamuthu – World Bank Group Director of Strategy and Operations, Digital and AI Vice Presidency; Acting Director for Data and AI [ S1 ]


Announcer – Event announcer/moderator who introduced the panelists [ S2 ][ S3 ][ S4 ]


Alpan Rawal – Chief AI/ML Scientist at Wadwani AI; moderator of the panel [ S5 ]


Aisha Walcott-Bryant – Senior Staff Research Scientist and Head of Google Research Africa, Google [ S7 ][ S8 ]


Antoine Tesnière – French Professor of Medicine, entrepreneur, anesthesiologist at Georges Pompidou European Hospital; co-founder of ILEMENTS; Director of Paris-Saint-Denis Campus [ S10 ]


Zameer Brey – Panelist (organization not specified in the transcript) [ S12 ]


Wassim Hamidouche – Principal Research Scientist, AI for Good Lab, Microsoft (specializing in computer vision, NLP, multimodal AI, low-resource languages) [ S14 ]


Audience – Various participants from the public; no specific titles or roles mentioned


Additional speakers:


Neha Butts – Associate Director, Human Resources (mentioned at the close of the session)


Selena – CEO and Co-founder of Zindi, runs competitions to develop AI models in Africa


Irish Kumar – Representative from the CSC Winnie Ocean Center on solar energy (asked a question during the Q&A)


Dr. Ravi Singh – Participant from Miami who posed a question about platform competition


Full session report: comprehensive analysis and detailed insights

The panel, moderated by Dr Alpan Rawal, opened by defining small AI as data-efficient, low-cost, edge-native models that are built for specific local contexts rather than generic, large-scale foundation models. Rawal emphasized that relevance to the end-user’s environment is the key criterion for impact [15-19]. This definition set the tone for the discussion, prompting each panelist to illustrate how their work embodies these principles.


Zameer Brey reinforced the need for context-appropriate solutions with a vivid traffic analogy, arguing that, just as Delhi’s congestion would never justify an aeroplane for short trips, AI should be “smaller, faster, sharper, cost-effective” and suited to the specific setting [32-34]. He warned that designers often focus on benchmark performance without considering how a model fits into a district hospital in Telangana, a smallholder farm in Zambia, or a rural classroom in Senegal [29-31]. After noting the importance of “verifiable” or “glass-box” AI, Zameer cited a World Bank study showing 50% diagnostic accuracy across five common conditions in eight countries, underscoring the gap that reliable on-device models must close [409-410].


Domain-specific small-AI projects

Google Research Africa highlighted two flagship initiatives. First, the team built a continent-wide weather-forecasting system that compensates for Africa’s severe radar shortage – only 37 stations compared with roughly 300 in North America and Europe [55-57]. By innovating around this constraint, they delivered more accurate forecasts for rain-fed agriculture, a critical need for millions of smallholder farmers [50-54]. Second, they released an open multilingual voice dataset covering 27 African languages (out of an estimated 2,000), enabling “partnership-led” development of edge-ready models that run on laptops or tablets [61-65][144-145].


Microsoft’s AI for Good Lab presented two open-source, globally deployable tools. SPARO (Solar-Powered Acoustic and Remote Recording Observation) combines solar-powered camera traps with an AI model to detect animal species in remote habitats, transmitting data via satellite where infrastructure is lacking [82-88]. Alert California operates a network of 1,300 cameras with on-device AI that detects early wildfire signatures, allowing rapid emergency response [91-97]. Both solutions exemplify small AI that is cheap to run, edge-deployable, and openly shared for reuse worldwide [84-88][96-97].


Antoine Tesnière’s health-innovation ecosystem illustrated healthcare applications, noting that validated small-AI tools already support radiology, dermatology and ophthalmology analyses on modest hardware, providing information to clinicians while preserving human decision-making [102-108][300-306]. He stressed that data in health is often scarce and siloed, requiring data-efficient algorithms that can operate offline on smartphones or simple computers, especially in low- and middle-income settings [280-298][332-340]. Antoine clarified that these models are better than current practice, and that the combination of algorithm + human decision-making is the most effective tool, rather than claiming they outperform clinicians outright [311-313][332-340].


Wassim Hamidouche described work on low-resource languages, identifying four systemic challenges: dominance of English in internet data (>60%), a dearth of benchmarks (only ~300 languages have any, most limited to translation tasks), a performance gap between high- and low-resource languages, and safety-alignment work that is largely English-centric [184-200]. To address these, Microsoft is piloting models for Inuktitut, Chichewa and Māori, achieving a 12% performance gain through continual pre-training and instruction fine-tuning [210-218]. The “Lingua Africa” initiative, funded with US$5.5 million in partnership with the Gates Foundation and the Masakani African Languages Hub, will support data collection for ten African languages, extending the earlier “Lingua Europe” programme [219-228]. He also emphasized that speech-to-text and text-to-speech are essential for many low-resource languages, where written corpora are limited [416-418].


Technical and reliability challenges

Zameer argued for “verifiable” AI, insisting that models used in critical health contexts must approach zero error and provide auditable logic chains to prevent catastrophic failures [160-165]. Antoine offered a counterpoint, acknowledging that while current small-AI models are not 99.999 % accurate, they already improve over existing practice and are acceptable when combined with human oversight [300-311][332-340]. This tension highlighted a broader disagreement on the acceptable error tolerance for health-focused small AI.


Wassim advocated shifting from generic data collection to domain-specific, use-case-driven pipelines, arguing that such focus improves reliability for applications in agriculture, education and health [233-238]. He also noted the importance of speech technologies for low-resource languages [416-418].


Partnerships, co-creation and open resources

All speakers underscored the necessity of multi-stakeholder collaboration. Illango Patchamuthu of the World Bank described AI as a means to reduce poverty, insisting that simple, low-resource models are easier to scale and must be replicated through clear KPIs and trust-building with NGOs, governments and local communities [111-124]. He announced an open-access AI use-case repository containing about 100 curated examples in health, education, agriculture and job creation, hosted on the World Bank platform [258-262][268-272]. Illango explicitly stated that “small AI is not inferior, it is not second class.” [411-413] He added that once legal issues are resolved, anyone will be able to submit use-cases to the repository, subject to filtering [414-415]. Aisha echoed this collaborative ethos, noting that the African voice dataset was collected with partners across the continent and that open-weight models such as Gemma enable edge deployment [60-65][144-145]. Microsoft’s language initiatives similarly rely on community-driven data collection through the Masakani African Languages Hub [210-218][219-226].


Scalability and development impact

Illango highlighted the importance of turning pilots into plug-and-play solutions that can expand from a single village to larger regions, citing ongoing projects in Uttar Pradesh and Maharashtra that aim to improve agricultural productivity, market access and credit [117-119]. He positioned AI as a catalyst for job creation, arguing that small AI should augment, not replace, employment, and that building digital literacy, up-skilling and STEM capacity is essential for emerging economies [350-353][244-250].


Audience interactions

During the Q&A, Irish Kumar asked how small AI can support youth, agriculture and renewable energy. Illango responded that digital-literacy programmes, up-skilling initiatives, and sector-specific pilots in India are already being rolled out to empower young people and promote sustainable agriculture and clean-energy solutions [400-402]. Selena questioned the technical feasibility of using open-source, open-weight LLMs for low-resource languages. Wassim explained that selecting a strong multilingual base model, augmenting it with monolingual or bilingual data, and leveraging speech-to-text/text-to-speech pipelines are key to achieving good performance [403-405]. Dr Ravi Singh raised the “AI wars” concern. Alpan affirmed that healthy competition drives innovation, Illango reminded that three billion people remain offline-leaving ample space for diverse AI approaches-and Wassim emphasized that collective effort across sectors is essential to avoid fragmented development [406-408].


Future outlook and policy considerations

Rawal concluded that competition among platforms is healthy and not a zero-sum game; the “winner” will be the solution that best fits the user’s context [384-393]. Illango reinforced this view, noting the large offline population as an opportunity for inclusive AI [394-398]. The panel collectively agreed that small AI is not a second-class technology; when responsibly designed, it can fast-track development outcomes and complement larger foundation models [120-124][15-19].


Action items and unresolved issues

– The World Bank will maintain and expand the open-access AI repository, and will open submissions to external contributors once legal clearance is obtained [258-262][414-415].


– Microsoft will roll out the Lingua Africa initiative to fund domain-specific data collection for African languages [219-228].


– Google Research Africa will keep releasing open-weight models and multilingual voice datasets, supporting edge deployment [144-145].


– SPARO and Alert California will remain open-source for global adoption [84-88][96-97].


– All participants committed to prioritising domain-specific data, co-creation with local partners and scaling pilots into reusable modules [233-238][115-124][259-262].


Remaining challenges include achieving near-zero error rates and verifiable audit trails for critical health applications, establishing robust benchmarks and safety evaluations for low-resource languages, ensuring affordable edge hardware for the poorest populations, and defining concrete digital-literacy and up-skilling programmes at scale. These issues were identified as priorities for future collaboration and research.


The discussion demonstrated strong consensus that lightweight, context-aware AI, built through open-source practices and local partnerships, can deliver meaningful social impact while complementing, rather than competing with, large foundation models. The panel’s insights chart a clear pathway toward inclusive, trustworthy AI deployment in underserved regions. Alpan invited Neha Butts to hand out mementos and requested a group photo to close the event [419].


Session transcript: complete transcript of the session
Announcer

Please, I would request you to take your seat on the panel. Wassim Hamidouche, who’s a principal research scientist at Microsoft’s AI for Good Lab, specializing in computer vision, NLP, and multimodal AI with a focus on low-resource languages. Requesting you to please take your seat. Illango, who’s a World Bank Group Director of Strategy and Operations in the Digital and AI Vice Presidency and also serving as Acting Director for Data and AI. Requesting you to please join the panel. Thank you. Aisha Walcott, who is a senior staff research scientist and head of Google Research Africa, focused on AI development, addressing the continent’s most pressing challenges. She holds a PhD in electrical engineering and computer science and holds leadership roles in the IEEE Robotics and Automation Society.

Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneur, specializing in health innovation and crisis management and an anesthesiologist at the Georges Pompidou European Hospital. He co-founded ILEMENTS, coordinated France’s national COVID response, and since 2021 has served as director of Paris-Saint-Denis Campus. Thank you so much for being here. Requesting you to join the panel. And Dr. Alpan Rawal, who’s chief AI ML scientist at Wadwani AI, will be moderating today’s session. Alpan, requesting. Thank you, handing it over to you.

Alpan Rawal

Yes, thank you everyone for coming. Requesting those at the back, if you could close the door so that we can reduce the noise a little bit. It’s full. OK, great. Well, if you could just calm down a bit and settle down. Thank you. Welcome to all our esteemed panelists and our panel. The topic of our panel, as you know, is on small AI for big social impact. I’d like to deeply thank our panelists for making it all the way for the summit and making it to this panel. So what do we mean by small AI? I think different people have different definitions, and we are sort of open to how each panelist chooses to interpret small AI. When we at Wadwani AI brainstormed about this panel, we thought it would reflect in some ways the ethos of our own work, making models that are data efficient, that are cheap to run, that sit on the edge, and most importantly are meaningful to the communities that we serve, which are underserved communities, mostly in rural India.

But it’s increasingly clear that small AI means a lot more. And I see a lot of people talking about small AI in the summit. More generally, I think it encapsulates any AI that meaningfully impacts individuals while taking into account and respecting their very local context, rather than providing generic outputs. So anything like that could rightly be called small AI, and we’re going to hear from our panelists about their experiences with AI models like that. So with that small introduction, let’s now, without further ado, speak to our panelists. So, can you hear me at the back? Yeah, okay. So we can start. I have a common question for every panelist. Each of you represents a different and important aspect of AI work that’s happening outside of the mainstream excitement that focuses on large foundation models for a primarily global north audience.

Can you tell us briefly about your organization’s work and perhaps your thoughts on non -foundation AI models in general? Maybe we can start with Zamir.

Zameer Brey

Thanks, Alpan. Thank you. We really see the opportunity for AI to reduce inequality, and our starting point with AI tools is really: does this work, for whom, where, and at what scale? So those are some of the departing points for us. So really looking beyond the model against a benchmark, but how is this going to work in a district hospital in Telangana, or for a smallholder farmer in Zambia, or a classroom in rural Senegal. And, you know, part of what we’ve in some ways got caught up with is the performance of the model on its own. And we’ve forgotten how does this fit into the lives and the context that it operates in.

And in doing so, part of what we need to think about is who’s designing the model and what’s it designed for? And I was thinking about the traffic that we’ve been experiencing the last few days in Delhi. And I thought to myself, would anyone, given the traffic here, design something so big as an aeroplane to try and get across the city? No. Yeah. I think we would design something that’s a lot smaller, faster, sharper, cost-effective, and gets us from point A to point B. Without a first-class airbender.

Alpan Rawal

I think that’s a great analogy. Aisha, can you tell us a bit about your work at Google Africa?

Aisha Walcott-Bryant

Yes. Thank you. So I lead our Google Research Africa team. We have two sites, one in Ghana and one in Kenya, so representing East and West. But the work that we do is essentially from Africa, for Africa and the world. Much of our work is scaling from the uniqueness of the continent. Turns out that a lot of the challenges are similar, definitely across the global south and generally worldwide. Our work, so kind of leaning into the next part of your question and thinking about how we approach this type of work, it’s very much interesting. It’s very much problem first. I always say if there’s a red button that you can press and it’s a one or a zero, just build the red button.

We don’t need to bring AI or technology. So it’s really important to be very thoughtful about the type of problem. Coming from Google Research, we want to leverage our compute, our AI expertise and capabilities, and then our mandate, which is societal impact at scale, to think about the types of problems that we work on. I’ll give two good examples of those problems. One is around weather nowcasting, which we launched last year across the continent of Africa. To have much more accurate weather forecasts is absolutely essential, given that much of the continent, as well as India, relies on agriculture for labor. And we are rain-fed primarily, 95% in Africa. So having much more accurate weather forecasts is essential in that case.

And at the same time, on the technical challenges side, we know in North America and in Europe, there’s about 300 or so weather radar stations. And in Africa, there’s only 37, I believe. You know, you can fit both North America and Europe in Africa. So when you think about that, you have to innovate. And so those constraints of the environment that you were alluding to in the intro are part of the motivation of having a research team in the continent. And so that was one way that we innovated and made solutions that were available to the continent. And then the other one is a complementary side, which is working with the ecosystem, working with partners in Africa, including Macquarie University, Digital Umaga, and Uganda, around African languages.

And we just released a data set of 21, now 27, voice languages, given that Africa has 2,000 or so languages. This is the start. Most importantly, it’s partnership-led and driven, and this is because it’s voice: it is about accessibility and about reaching those rural villages as well. So, enabling the ecosystem to build the solutions from there, whether they’re smaller models or larger models. So, making that type of data open and available is another way that we are leveraging this notion of smaller AI.

Alpan Rawal

Thank you. Thank you. Great, great insights. Zamir, can you tell us about your work at Microsoft?

Wassim Hamidouche

Yeah. Thank you. Sorry. Yeah, Wassim. Thank you for the invitation, and it’s a great pleasure to be here today. So, first, what is AI for Good? The AI for Good Lab is the philanthropic research lab of Microsoft. We are employing advanced AI technology to solve real-world problems with real societal impact. This is very important. And how our team and the researchers work: we closely collaborate with NGOs, governments, nonprofit organizations, and local communities around the world. And together we are building AI solutions in multiple domains. We are interested in agriculture, food security, healthcare, education, culture, and so on. So this is about the AI for Good Lab at Microsoft. Now, I am a scientist, so I would like to give you two concrete examples where we use small AI, and they are also two global solutions to tackle global challenges.

So they are valid for both the global north and the global south. The first project, in biodiversity, is called SPARO. SPARO stands for Solar-Powered Acoustic and Remote Recording Observation. It is an AI-powered open-source solution designed to track and monitor biodiversity in the most remote and hard-to-reach regions in the world. So SPARO is camera traps with an AI model that enables detecting animal species, and these observations are then transmitted using wireless connectivity and satellite where we don’t have infrastructure to transmit this information. And this SPARO solution is already deployed around the world in many countries. I can cite Colombia, Peru, the United States, Tanzania. And it really enables practitioners and researchers to understand species present and the ecosystem

at scale, supporting more timely and informed decisions to protect biodiversity. The second project focuses on wildfires. As you know, wildfires have become a real global threat, with devastating impacts on lives, communities, ecosystems, and even economies. And around the world, wildfires are increasing in both frequency and intensity, making early detection and rapid response more critical than ever. So through Alert California, we are addressing this challenge using AI. So what is Alert California? Alert California is a network of 1,300 cameras operating 24/7. And we are developing AI tools that run on top of this infrastructure, enabling early fire detection, and this will enable emergency responders to act quickly and stop fires before they spread.

So SPARO and Alert California, as I said, are two global solutions for global problems that can be deployed anywhere around the world, and we are providing them open source so that anyone can embrace them and deploy them. Thank you.

Alpan Rawal

Thank you. And Antoine, I think you’re the only member of this group that doesn’t work in the global south. So if you could tell us a bit about the work in Paris and how you’re using maybe non-foundation models. Right. Well

Antoine Tesniere

Thank you, Alpan, for this invitation, and I’m happy to be the outsider of the panel. I’m working actually in health care, and I’m leading a new kind of innovation ecosystem for health care where we gather researchers, doctors, patients, startups, and industrials together, as well as institutions. So the idea is to really create a whole community of innovation and engage into the use of data and artificial intelligence. And healthcare is probably one of the fields where AI has a long-standing history. The world has discovered AI with the rise of Gen AI, but there were a number of small AI models designed for a long time, and this is why we already have a number of validated tools that we can use in healthcare, answering the question not only “does it work” but “is it reliable”, which is very important for our patients. So before we have the proof of efficiency of LLMs in the medical field, which is not fully clear yet, we use machine learning tools, which are actually small AI models. A very specific area that actually works really nicely today is image analysis or pattern analysis. So you can think of radiology, for example: chest X-rays or fractures in the emergency room are fully analyzed by small AI models and small AI tools that are easily deployable on small computers. You can also think of picture analysis in dermatology, in ophthalmology, etc. So these are very concrete examples of already validated small AI models in healthcare that are used on a daily basis, at least in France and Europe. We’ll get back in the discussions on how we need data efficacy on this topic, but it’s really important to understand that these models are already deployable, and some of them can actually work offline, which is really important in some environments. Thank you.

Alpan Rawal

Illango, from the World Bank perspective, what is your view on these types of models?

Illango Patchamuthu

Thank you very much for the opportunity to be here. Coming right at the end, I don’t know what new things I can say, but to basically reinforce the messages that have been said. For us at the World Bank, we see AI as a means to an end, and very much our AI agenda is shaped by the mission of the World Bank, which is to reduce poverty and grow prosperity in the world. And when you take that lens and you apply it, we have to keep it simple. Not all countries have the ability to have the compute power, the electricity, the talent, and the data. So therefore, taking on tested small AI applications to scale and replicating them around the world is something that we see as a mission priority.

So in that respect, what Wadwani AI is doing here is pioneering. And what I’ve heard this morning from Dr. Sunil Wadwani himself about what you’re doing in TB, what you’re doing in out-of-school children, this is all tremendous, and it has great potential for application. And often what happens is we focus a lot on pilots, and then, once the sheen wears off, people forget the pilot. I think what we need to do, and what we are doing at the World Bank, is to see those pilots, whether it’s in health, education, or agriculture, in the small AI setting: it works in rural communities, offline, where data is not that rich and talent is not readily available, and it also does not require a lot of electricity; it’s plug and play. Then how do we get the right KPIs, which then allows us to go from a village or a community of 50 villages to a larger population center, and to see how best we can help them, say in agriculture, to improve productivity with better inputs. We are now working in UP in partnership with Google, and we are doing the same thing in Maharashtra.

Household income improves as the inputs get better, and we see how farmers can access markets and agricultural credit. It's similar in health and education, with great practices that we are seeing in Africa, in Ghana, in Kenya. So how do we take these models and replicate them? I'd like to assure everybody, because this is something people think: small AI is not inferior. It's not second class. Small AI can solve problems. It's a means to an end. And it can fast-track development outcomes. We've known the problems with the Millennium Development Goals and with the Sustainable Development Goals, and many countries are lagging behind. This is an opportunity: if this technology can be put to use in the right context in the right way, I think we can achieve development faster.

Alpan Rawal

Thank you. That's really interesting. Aisha, I'm going to come back to you and ask more specifically: how does the work that Google Research does in Africa impact rural communities? How does one bring the benefits of technologies like these big foundation models to devices that may have only patchy Internet and very little data?

Aisha Walcott-Bryant

Thanks, that was a loaded question, two parts there. So first and foremost, in general, we approach these challenges with humility and by relating. I always start with: I'm a scientist, but I'm also a mother, right? And that's a thread that I've been following for a long time, and a thread that binds so many of us. When you think of that, you also realize that a lot of the solutions we're building are not for them, they're for us. I'm using the same health systems that you all are developing interesting tools and models for, and we have many of the challenges around weather as well. So I think the first thing is to have that base human layer as we think about our work, and to connect with those communities, whether they're rural or urban, right?

A lot of the work that we do looks at these large populations. If you think about agriculture, for example, which employs a large part of the labor force, there are many different ways people are part of that value chain, whether they're actually doing the growing, providing the inputs, or making the decisions and carrying the risks along the way. So that relationship of getting out in the community is a very important part of the work that we do, to connect with those communities. And then really think about, coming home as Google Research, where is our unique value proposition? We're not necessarily going to solve this whole problem alone; usually it requires behavior change, policy, and many pieces of the puzzle. How do we best fit our role? And we do this in co-creation with partnerships. So that's the second layer of fabric on how we reach these rural communities. And then on the other side

Alpan Rawal

Do you have an example of that?

Aisha Walcott-Bryant

Oh yeah, absolutely. I'll give two. So for example, the languages work that I was talking about, Waxal. Waxal is a word in Wolof, a Senegalese language, that means to speak. And the way we wanted to create this data set was together with the community.

So if you have partners who are across the continent, let them be part of the process of collecting the data, of understanding their language and their local context, to get these high-quality data sets. So I think being partnership-driven and knowing our role and our place was what made that very successful. And then the last point I'll make, on the second question that you threw in there, is really our open models, our open-weight models, Gemma, which are made for a lot of these solutions closer to the edge. So we have nano models that can run on your laptop and tablets and so forth,

Alpan Rawal

Do you actually use them in Africa?

Aisha Walcott-Bryant

Oh, yes. Yes, yes, yes, yes.

Alpan Rawal

Great. Thank you. Zameer, the next question is for you. Much of your work at the foundation is about reducing inequities by promoting safe and responsible use of AI. So what role, in your view, do small and custom AI models have to play in this? And if you can provide examples, that would be great.

Zameer Brey

Sure, Alpan. You know, I do want to touch a little on the issue of reliability, because my colleague over here spoke about it, and I think it's a critical issue. I'm sorry if I'm going to repeat this example from one of my previous panels, and since I'm going to be on the plane later it's a bad idea, but I'll ask the audience anyway. If I said to you, the plane has a high probability of leaving Delhi and landing safely wherever it's going, and that probability was 90%, would you get on that flight? 95%? 99%? No? No. I did have one guy who thought about it, and then he said no.

And the point there is that I do think we've got to work towards models that have zero error, right? So much so that what we are trying to wrap our heads around is: is there a concept of verifiable AI, where it shifts the narrative from a black box to a glass box? It actually exposes the logic. So for a particular set of inputs, you can follow the logic chain, and it gives you a set of outputs that you can really track. You can audit. You can see that it's repeatable. And you can prevent some of the fundamental errors that we start to see. And I want to go back to a very real example, Alpan, because when I think about small models, I come back to the user, the community health worker who tried to help a mother.

One of our grantees shared this very personal story of a first-time mother who presented at six months pregnant, saying her hands and her feet had started to swell. And the community health worker looked and said, you're pregnant, this is normal. Four weeks later, she started having a headache and blurred vision. I think colleagues will know where the story goes. Unfortunately, that mother had severe gestational proteinuric hypertension. It was missed, and the mother and the baby didn't make it. But in that moment, what inspired our grantee was this: if the community health worker had had a small model that worked on her device, a low-cost smartphone with patchy internet, built small enough to help her make good decisions at that point of care,

we would be sitting with a very different outcome today. And so I think small models present us with those opportunities.
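Zameer's glass-box idea can be illustrated with a minimal sketch: a rule-based triage helper whose every output carries the chain of rules that fired, so identical inputs always yield the same, auditable answer. The rule names, conditions and messages below are hypothetical illustrations, not clinical guidance.

```python
# Minimal sketch of "glass box" decision support: each recommendation
# is returned together with the exact rules that fired, so the logic
# chain can be audited and is repeatable for a given set of inputs.
# Rules here are hypothetical, NOT clinical guidance.

RULES = [
    # (rule id, predicate over the reported signs, message)
    ("R1", lambda s: s["pregnant"] and s["swelling"],
     "swelling in pregnancy: monitor blood pressure"),
    ("R2", lambda s: s["pregnant"] and s["headache"] and s["blurred_vision"],
     "possible pre-eclampsia: refer urgently"),
]

def triage(signs):
    """Return (recommendation, audit trail of fired rule ids)."""
    fired = [(rid, msg) for rid, pred, msg in RULES if pred(signs)]
    if not fired:
        return "no rule fired: routine follow-up", []
    # In this toy ordering, the last matching rule is the most specific.
    return fired[-1][1], [rid for rid, _ in fired]

visit = {"pregnant": True, "swelling": True,
         "headache": True, "blurred_vision": True}
advice, trail = triage(visit)
print(advice)  # possible pre-eclampsia: refer urgently
print(trail)   # ['R1', 'R2'] -- the full, repeatable logic chain
```

A real system would encode validated clinical protocols and run on-device, but the property to preserve is the same one Zameer names: the logic is exposed, repeatable, and auditable rather than a black box.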

Alpan Rawal

Very interesting, very good points. Wassim, you spoke in a general sense about the research done at the AI for Good Lab at Microsoft. Are there specific examples from your work where you see the benefits of building domain-specific models to realize impact? And are there research lessons that we can take away from this? I think it would be good for the audience and for us to understand what research directions for the future can come out of this work.

Wassim Hamidouche

models from 4 billion to 15 billion parameters. And once we select the best LLM for one target language, we apply all these recipes to boost the performance for these low-resource languages. But I wanted to get back to all the challenges we are facing for these low-resource languages. When we train these foundation models, we train them on internet data, and more than 60% of internet data is English, followed by some high-resource languages like French, Mandarin, Portuguese, and so on. So low-resource languages, even though there are more than 7,000 of them, represent only a tiny portion of internet data. That is the first challenge. The second one is benchmarks. When we build LLMs, we evaluate their performance on benchmarks.

And we have seen that there is at least one benchmark for only about 300 languages; more than 6,000 languages don't have even one benchmark. And even among those 300, most benchmarks are just translations from English into the low-resource language; they have nothing to do with the culture and the context of these languages. The third challenge is the performance gap: there is a performance gap for these LLMs, even the frontier models, between high-resource and low-resource languages. The fourth one is safety. When we build LLMs, we usually do safety alignment with reinforcement learning, but this alignment is mainly done in English and some high-resource languages.

Now, when we build LLMs for low-resource languages and they become very strong in those languages, that raises other safety issues: we have to evaluate these LLMs for safety in that language, and do the alignment, the reinforcement learning, in the target language as well. In our paper, we addressed some of these issues.

We have been targeting three pilot languages: Inuktitut, an indigenous language spoken in the north of Canada; Chichewa, in Malawi in Africa; and Māori, in New Zealand. Why did we select these three languages? Because we have access to local communities who can help us get data. So we gathered data from these communities, then used continual pre-training and instruction fine-tuning to boost the performance of open-weight LLMs, and we were able to gain 12% in performance, closing the gap with English. So what's next? We are trying to expand this to more languages. We have a collaboration, for example, in South America with Paraguay to develop an LLM for Guarani, and we want to extend this to other languages.

But most importantly, we have launched an initiative to help the community get the best out of their languages, a project called Lingua Europa, to fund data collection for 10 languages in Europe. It was released last September and was very successful: we received many applications, and 10 have been selected, and now we will start working with them. It was so successful that we are now extending this initiative to Africa through Lingua Africa, which was announced just today at the AI Summit. We will be allocating $5.5 million to support data collection for African languages, in partnership with the Gates Foundation, Microsoft AI for Good, and the FCDO.

And this initiative will be led by the Masakhane African Languages Hub.

Alpan Rawal

Sorry, just to follow up. For people who are working on these small language models, or domain-specific language models, say for the healthcare domain or some other domain: are there strategies you can recommend that they should pursue?

Wassim Hamidouche

Yeah, this is very important, and it is also related to the call for Lingua Africa, because many efforts have been made in the past to collect general-purpose data. I think we now have enough general-purpose data, but when we evaluate the performance of these AI tools on specific applications, for example healthcare, education, agriculture, they don't work as we want, as expected. So what we want today, instead of focusing on general data collection, is to focus on domain-specific, application-specific, use-case-specific data collection, and on building AI tools for specific domains. Then, at least for all these reliability issues, we will have a model that performs well in that target low-resource language, in that application, that we can deploy and that can be used by local communities.

This is really a priority for the next…

Alpan Rawal

Thank you. Illango, let me come to you. You have vast experience in international development. Can you give us a view of the future as it relates to using AI for development goals? Do you think AI will have a meaningful role to play in the transition of emerging economies to advanced economies?

Illango Patchamuthu

So I do think the prospects are good, and our North Star is job creation. We need to support countries so that AI doesn't automate jobs away but actually supports the creation and enhancement of jobs. And this is where small AI becomes imperative, unlike the foundation models, which will have implications for jobs. The second question is how we are going to go about it. In some sense, whether it's large language models or small AI solutions, you need an ecosystem, and that ecosystem needs to be powered by the local private sector. And what we often see, even now, AI revolution or not, is that small enterprises, whether in the SME space or larger, struggle for a variety of reasons.

And if countries don't reform business processes and make permitting easier, which AI can help with, AI is not going to play an effective role. So there are some fundamental reforms needed, and this is where foundational investment in DPI, digital public infrastructure, needs to happen: to create that ecosystem, and the ability for the ecosystem to work with the private sector and local communities to create those jobs. And this is what we are seeing everywhere. When that happens, and you see it here too, there is a whole vibrancy around the startup ecosystem. Why? Because young people see opportunities, and this momentum can build everywhere in the world.

Whether it be in India, the rest of South Asia, Africa, Latin America, or even the Pacific region. So how do you go about it? What we did was join hands with a number of multilateral development banks, and in the last couple of days we launched this small AI use-case repository. It's a good 100 cases. It explains, in health, education, agriculture and job creation, how AI can be leveraged to the maximum advantage of communities, in terms of service delivery, productivity gains, and household income gains. All this eventually leads to better jobs, better employment and better income prospects. So we are very upbeat about small AI, but I do take the point about community trust.

Once it fails, the community is not going to believe in it. So it's very important that whatever we put in place, working with partners including the MDBs, Microsoft, Google, Gates and everyone, we ensure that what we leave behind in small communities is trustworthy and reliable, and doesn't, at the end of the day, hallucinate and give the farmer something that leaves him struggling with other challenges. Thank you.

Alpan Rawal

So this repository you mentioned, is it open access?

Illango Patchamuthu

The World Bank is hosting it. It's called the AI Repository; just search for it and you'll be able to access it. It's got 100 cases, and we'll continue to update it. Once we're able to sort out some legal issues, we'll also allow anyone to submit their use case to the repository; obviously we'll go through a filtering process to ensure that the right ones are there.

Alpan Rawal

Great, thank you. Antoine, coming to you. Your organization uses AI to advance health outcomes through research and commercialization. Are data-efficient and hardware-integrated AI models important for the work happening at PariSanté? And do you see these models potentially being deployed in low- and middle-income countries like India?

Antoine Tesniere

Yes, they are clearly very important for us, for different reasons, and of course we'll get back to scalability and use in low- and middle-income countries. But first, the reality in healthcare is that data is scarce and siloed, so you need to work with what you have. Sometimes it's a large data set, sometimes a very small one, but you need tools that allow you to build relevant algorithms and relevant analysis on small data sets. In the meantime, of course, we're building larger data sets: sometimes at the level of one department in one hospital, sometimes one hospital, sometimes a group of hospitals.

In the end, what we are reaching in Europe is the constitution of a large European health data space: 450 million citizens joining their health data in a digital public infrastructure organized across 27 countries, which will be a world first. But in the meantime, we need to work with that reality of scarce data. The second thing is that not only is data limited, but when you want to enter the new revolution in medicine, what we call precision medicine, personalized medicine, you need to work with very efficient algorithms, because they need to adapt to one person and not only to a whole population. So you also need to take that into account in building the algorithm. The last thing is that you have to work with what exists in healthcare systems, which is often not supercomputers or high computing power sitting in remote servers.

When you're in a patient's room or working in a hospital, it's a very simple computer, and you need efficient algorithms and tools that can run on that kind of machine. And of course you go all the way down to a smartphone at some point, if you go into remote areas. So this is why we work on this kind of approach: making sure that we have research on LLMs and large computing power, but also this work on small data and very efficient algorithms.

Alpan Rawal

Can you give examples?

Antoine Tesniere

Well, yes. I already gave some examples about radiology: we have a radiology algorithm running on a small computer. And getting back to your example, which I think is really important, it gives me the opportunity to state two very important facts. One is that the AI we use provides information; it does not make decisions in healthcare. Of course we target a high level of reliability, but in the end it is a human decision, and this is very important, I think. The second is that we have been comparing the performance of the algorithms we design with existing performance. Of course you aim for 99.999% and so on, but what very few people actually know is that the actual performance of what we do at the moment is not 99.999%. Most of the time, and I won't say the numbers, the algorithm is actually better than what we have.

And this is really important in your example: is it good enough compared to what we can actually do at the moment? I think it's particularly important in low- and middle-income countries, because a very simple solution, offline LLMs, et cetera, can solve many, many issues.

Zameer Brey

Alpan, can I pick up quickly? I think it's really important, and actually I'm going to name the number, if that's okay. A really important World Bank study from a few years back showed that on a set of five very simple conditions, diagnostic accuracy was 50% across eight countries. 50%. What illnesses are we talking about? Acute diarrhea, upper respiratory tract infection, maternal hypertension. And the point is that none of us would be happy with 50%, the equivalent of tossing a coin and saying that's okay. So I completely understand that today there's a big gap between that baseline and what the models can offer. And the question of whether the models perform better than the average clinician: that's settled.

Alpan Rawal

Sorry, I can't resist a follow-up question. You often find that the average accuracy of models is far better, but models seem to fail more unpredictably than humans; at least that's the understanding in health care. Do you agree with that, or do you think it's not true? Anyone who wants to answer.

Antoine Tesniere

Well, I think we would need another hour to discuss this. What you say is absolutely true, but then you need to look at every pathology and every symptom you're examining, because diagnostic performance can be a little higher in certain places and situations, and a little lower in others. But we come back to the same point: what we are building is actually better than what we are able to do at the moment. And what we show in the scientific literature is that the combination of the algorithm and natural intelligence, I would say the doctor, is actually the best tool so far. So, getting back to your question of how we deploy this in low- and middle-income countries, I think it's really important.

We need models that are able to run on small devices, able to run offline, sometimes with a very limited set of data and a very limited set of algorithms. We were actually discussing in Paris examples of an offline LLM in remote areas providing answers to the ten most important questions for healthcare in low- and middle-income countries. That doesn't need online LLMs with massive computing power. So that's the first point: edge-native AI. We also need data-efficient learning systems, because most of the time in low- and middle-income countries we have a limited amount of data available, which is what I discussed earlier.

Alpan Rawal

We have a lot of data in India, but it tends to be noisy.

Antoine Tesniere

Yes, but you need to take the time to actually bring the data together, clean it, and prepare it for robust analysis. I know you are leapfrogging and going very fast, but by the time you scale, this will create real analytical power. And then we also need to understand how to couple hardware with software and algorithms designed to reduce costs, so that they can scale very easily. Thank you.

Alpan Rawal

Great, that was fantastic insight. I'd now like to give some time to the audience to ask questions of our panelists. Yes, please.

Audience

Thank you very much. I'm Irish Kumar from the CSC Winnie Ocean Center, working on solar energy. I belong to Rajasthan. A question to the World Bank: in Rajasthan, 60% of the population is in rural areas and depends entirely on the agricultural domain, and 40% of the population is youth. How is the Bank increasing AI capability for the youth as well as for the agricultural domain, so that there is economic change, more productivity, more inclusion, in climate change and the renewable energy domain?

Illango Patchamuthu

Thank you for that question, which I think is a foundational question for any policymaker deciding what kind of AI strategy or implementation to have in any geography in the world. Obviously, the first thing you need is digital literacy. Second, you need to skill up, so that everybody is upskilled and reskilled on AI-related capabilities. Third is improving STEM capability in schools and universities, so you create a future cadre of people who can work on these topics. And then the sectors you mentioned, which are our priorities, agriculture, health, and education: this is where we see the greatest potential for small AI. On Rajasthan specifically, I don't have any information right now, but I'm happy to share it with you.

But certainly we are working across different states in India, as we do elsewhere in the world, and we prioritize literacy, skilling, STEM, and applications in priority sectors like agriculture, health and education.

Alpan Rawal

But having said that, I also want to make one point, just to respond on devices that can do computing: devices are expensive for the bottom 40%.

Audience

Yeah, hi, my name is Selena. I'm the CEO and co-founder of Zindi. We run competitions to develop models, especially in Africa. I actually had a question for Wassim about the technical implications, the size implications, the practicality of using open-source, open-weight large language models to train very specific, domain-specific, under-resourced language models. How have you seen that play out?

Wassim Hamidouche

Yeah, I think what we have seen is that the selection of the base model is very important. Because what is true, what is real, is that we cannot train an LLM from scratch, whether a small or large language model, for low-resource languages: we don't have the 15 trillion tokens to train on. So it is very important to select the best multilingual model, one with the right tokenizer that can be adapted to many low-resource languages. This is very important. And then get the data that we need. What we have also seen is that monolingual data helps, but bilingual data can help too, and translating English into the low-resource language can also help to boost the performance.

In our paper, we provide all these recipes to follow to get the best boost in performance. What I would like to add is that, with all these low-resource languages, text cannot solve them all. Many of these languages will be served by speech. It's very important: ASR models, speech-to-text and text-to-speech, will play a very large role in unlocking all these low-resource languages, in addition to LLMs that can operate in the low-resource language or in English.
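One concrete, low-cost way to compare candidate base models, in the spirit of Wassim's point about picking a model with the right tokenizer, is tokenizer fertility: the average number of tokens produced per word of target-language text (lower is better, since high fertility inflates sequence length and cost). The sketch below demonstrates the metric with two toy tokenizers and made-up corpus strings; a real comparison would plug in each candidate model's actual pretrained tokenizer.

```python
def fertility(tokenize, sentences):
    """Average tokens per whitespace-separated word; lower means the
    tokenizer represents the target language more compactly."""
    tokens = sum(len(tokenize(s)) for s in sentences)
    words = sum(len(s.split()) for s in sentences)
    return tokens / words

# Toy stand-ins for two candidate models' tokenizers. A real comparison
# would call each model's own subword tokenizer instead.
word_level = lambda s: s.split()                   # 1 token per word
char_level = lambda s: [c for c in s if c != " "]  # character fallback

corpus = ["moni onse", "zikomo kwambiri"]  # illustrative strings only
print(fertility(word_level, corpus))  # 1.0
print(fertility(char_level, corpus))  # 5.5 -> poor fit for this text
```

The same loop run over a genuine target-language corpus with each candidate's tokenizer gives a quick, benchmark-free first filter before investing in continual pre-training.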

Alpan Rawal

I think we have time for one really short question.

Audience

Hi, this is Dr. Ravi Singh, I'm from Miami. It was a great panel, a lot of great insights. This is for Google, Microsoft, and the World Bank. Here's the scenario: if there's compliance across all of these platforms, which platform will win the AI wars?

Alpan Rawal

That's a loaded question. Anyone want to answer? I'm not.

Aisha Walcott-Bryant

So first of all, I think healthy competition is how we've been able to develop incredible technologies over time. So the competition is healthy, and this is great. I don't see it as a zero-sum game. There are too many people on the planet, and too many challenging, unique problems that need to be solved. So if we're making it useful and bringing joy and happiness for all, and I just love that that's in the theme here, then it's not about which platform wins. It's about what is relevant to the context of the end user. So taking it back to a more human, personal perspective: that's my thinking.

Illango Patchamuthu

First, three billion people are offline, so there is space for everybody to compete. Second, in the health sector alone, three and a half billion people don't have access to healthcare, so there is enough scope for all kinds of applications.

Wassim Hamidouche

I just want to add: many people have been asking me whether all these efforts we are making for languages are enough to make these models as good as they are in English. I would say maybe not, but without all these efforts we would never reach that objective. All these collective efforts will get us there.

Alpan Rawal

Thank you so much, everyone. I would now like to invite Neha Butts, Associate Director, Human Resources, to hand out the mementos to all our speakers, and we will take one group photo. Thank you so much, everyone, and thank you for joining.

Related Resources: Knowledge base sources related to the discussion topics (26)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“The moderator defines small AI as technology that must be meaningful for the end‑user’s local context rather than generic solutions.”

The knowledge base states the moderator defines small AI based on relevance to the end-user’s context [S1].

Confirmed (high)

“Google Research Africa released an open multilingual voice dataset covering 27 African languages out of an estimated 2 000.”

The knowledge base notes a released dataset of 27 voice languages for Africa, referencing the continent’s roughly 2 000 languages [S6].

Additional Context (medium)

“Google Research Africa built a continent‑wide weather‑forecasting system that compensates for Africa’s severe radar shortage – only 37 stations compared with roughly 300 in North America and Europe.”

The knowledge base discusses hyper-local, multi-modal forecasting that combines satellite, ground sensors and cameras to achieve fine-grained predictions, providing context on the technical approach to address data gaps [S20].

External Sources (114)
S1
How Small AI Solutions Are Creating Big Social Change — – Aisha Walcott-Bryant- Antoine Tesniere- Illango Patchamuthu – Illango Patchamuthu- Antoine Tesniere- Wassim Hamidouch…
S2
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S5
How Small AI Solutions Are Creating Big Social Change — Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneu…
S6
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneu…
S7
How Small AI Solutions Are Creating Big Social Change — – Aisha Walcott-Bryant- Antoine Tesniere- Illango Patchamuthu – Aisha Walcott-Bryant- Wassim Hamidouche- Antoine Tesnie…
S8
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Do you actually use them in Africa? Aisha Walcott-Bryant: Oh, yes. Yes, yes, yes, yes. I think that’s a great analogy….
S9
THE IMPACT OF RAPID TECHNOLOGICAL CHANGE ON SUSTAINABLE DEVELOPMENT — – Abdus Salam International Centre for Theoretical Physics (2018). New Internet of things Doctoral Programme: ICTP suppo…
S10
How Small AI Solutions Are Creating Big Social Change — -Antoine Tesniere- French professor of medicine and entrepreneur, specializing in health innovation and crisis managemen…
S11
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneu…
S12
How Small AI Solutions Are Creating Big Social Change — – Zameer Brey- Antoine Tesniere
S13
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Ken Ichiro Natsume- Prokar Dasgupta- Zameer Brey- Alain Labrique – Zameer Brey- Alain Labrique – Zameer Brey- Payden…
S14
How Small AI Solutions Are Creating Big Social Change — – Aisha Walcott-Bryant- Wassim Hamidouche- Antoine Tesniere – Illango Patchamuthu- Antoine Tesniere- Wassim Hamidouche
S15
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S16
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S17
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S18
Why smaller AI models may be the smarter choice — Most everyday jobs do not actually need the most powerful, cutting-edge AI models, argues Jovan Kurbalija in his blog po…
S19
Digital democracy and future realities | IGF 2023 WS #476 — Current regulations may not fully consider the practices and needs of these platforms, which can impede their ability to…
S20
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Hyperlocal and Multi-Modal Forecasting: There was significant discussion about developing AI systems capable of providi…
S21
Toward Collective Action_ Roundtable on Safe & Trusted AI — Cool. So I think we just have to be very, very careful here of the sort of, you know, the Silicon Valley approach of mov…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — “The black box of data must become a glass box.”[11]. “the commander taking a decision based on an AI-enabled system bu…
S23
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — But you can figure it out. But basically what we need… to do is essentially teach the kid learning to learn using AI, …
S24
AI that serves communities, not the other way round — At the WSIS+20 High-Level Event in Geneva, a vivid discussion unfolded around how countries in the Global South can build …
S25
How AI Drives Innovation and Economic Growth — Appropriate technology solutions for developing countries Zutt advocates for a focus on ‘small AI’ rather than large-sc…
S26
AI for agriculture Scaling Intelligence for food and climate resilience — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S27
Open Forum #54 Advancing Lesothos Digital Transformation Policies — Funding constraints limit many initiatives to pilot phases, with the digital skills training programme initially reachin…
S28
Global Perspectives on Openness and Trust in AI — And then exclusive partnerships and the systems being opaque. So those were the things identified in the market study. A…
S29
Shaping the Future AI Strategies for Jobs and Economic Development — So I think they should start small and have a few small scales. quick impact projects so that they can build on proven s…
S30
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — Jobs within the agri-food value chain, such as advisory services, should be maintained to promote decent work and econom…
S31
Redrawing the Geography of Jobs / Davos 2025 — Using technology to supplement rather than replace existing jobs and skills, especially in informal economies
S32
A Digital Future for All (afternoon sessions) — There is a need to build AI capacity in developing countries to ensure they can participate in and benefit from AI advan…
S33
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — In summary, digital health has achieved technical maturity but lacks organizational maturity. Comprehensive understandin…
S34
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Ioanna Ntinou: I think that my question will be, as a researcher, if we focus so much on having smaller models, if we ac…
S35
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Development | Sociocultural Emphasis on building use cases in key sectors and creating shareable repositories across ge…
S36
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Large language models can be run on personal laptops
S37
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — Moderator: Thank you very much for that. Let me go, because we are running out of time now. Africa is a specific cont…
S38
Stronger digital voices from Africa — 330 German Agency for International Cooperation [GIZ]. (2019). Background paper on Open Forum to present Ethical Polic…
S39
AI for Good – food and agriculture — – Use of remote sensing and geospatial platforms for analyzing drought, water stress, and crop management Dongyu Qu: Ex…
S40
WS #219 Generative AI LLMs in Content Moderation Rights Risks — The Low-Resource Language Crisis: Dhanaraj Thakur provided extensive analysis of how language inequities create syst…
S41
Democratizing AI Building Trustworthy Systems for Everyone — “So we’re pleased to announce a Lingua Africa initiative where we are working with local communities in partnership with…
S42
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21, now 27, voice languages, given that Africa has 2,000 or so lan…
S43
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Portugal offers something increasingly rare, agility with stability. A country large enough to scale, yet compact enough…
S44
Small states, big ambitions: How startups and nations are shaping the future of AI — At the Internet Governance Forum 2025 in Lillestrøm, Norway, a dynamic discussion unfolded on how small states and startup…
S45
[Parliamentary Session 4] Fostering Inclusive Digital Innovation and Transformation — Gong Ke: Thank you. Thank you so much. I think this year’s IGF is one of the important international events after the Un…
S46
AI that serves communities, not the other way round — At the WSIS+20 High-Level Event in Geneva, a vivid discussion unfolded around how countries in the Global South can build …
S47
Elections and the Internet: free, fair and open? | IGF 2023 Town Hall #39 — Data needed for policy making needs to reflect their specific local contexts
S48
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Sofiya Zahova: Thank you, Davide. I’m honored and delighted to join you today on this important panel, but even more ple…
S49
Main Session 1: Global Access, Global Progress: Managing the Challenges of Global Digital Adoption — Shivnath Thukra: Thanks to you and thanks for inviting me, Meta from India on this panel. I will, in the spirit of bein…
S50
WS #219 Generative AI LLMs in Content Moderation Rights Risks — The Low-Resource Language Crisis: Dhanaraj Thakur provided extensive analysis of how language inequities create syst…
S51
AI as a tech ally in saving endangered languages — Funding community-led data collection and annotation projects Supporting open evaluation benchmarks for low-resource la…
S52
Earth’s Wisdom Keepers — Creating trust between communities and policymakers and valuing indigenous knowledge is crucial for successful collabora…
S53
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — International collaboration is essential for developing countries, requiring customization, learning, and evidence-based…
S54
Informal Stakeholder Consultation Session — Digital transformation affects every sector, so coordinated policymaking helps ensure coherence and better outcomes for …
S55
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Jigar Halani articulated the complexity of trust requirements across different user groups: while IT professionals might…
S56
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Introduction and Context Setting ## Sectoral Applications: Healthcare Insights However, Flanagan highlighted a fund…
S57
NRIs MAIN SESSION: DATA GOVERNANCE — Furthermore, it is noted that support for data systems should not be limited to the private sector. The analysis suggest…
S58
Data Policy in the Fourth Industrial Revolution: Insights on personal data — Assessing risk requires those setting policies to consider the context in which data is collected and processed.
S59
AI and Data Driving India’s Energy Transformation for Climate Solutions — Data ecosystem challenges and need for granular, interoperable data Data governance | Capacity development | Monitoring…
S60
Conversational AI in low income & resource settings | IGF 2023 — Additionally, the potential of AI and chatbots in low-resource settings is acknowledged. The analysis suggests that thes…
S61
Empowering communities through bottom-up AI: The example of ThutoHealth — Community trust: Ensuring AI tools are culturally relevant, i.e. available in local languages and aligned with tradition…
S62
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Development | Economic | Future of work Sectoral Applications and Global Development World Bank president’s presentati…
S63
AI for agriculture Scaling Intelligence for food and climate resilience — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S64
WSIS Action Line C7: E-Agriculture — Both speakers recognize that while pilot projects are valuable for testing solutions, they often fail to scale without p…
S65
Building Climate-Resilient Systems with AI — The time for action is immediate – moving from research and pilots to deployment and impact is essential
S66
The Future of Digital Agriculture: Process for Progress — Technologies must be easily accessible, economically viable for the lowest-income groups, relevant to the context, and s…
S67
How Small AI Solutions Are Creating Big Social Change — “It’s what is relevant to the context of the end user”[6]. “So what role, in your view, do small and custom AI models ha…
S68
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Ioanna Ntinou: I think that my question will be, as a researcher, if we focus so much on having smaller models, if we ac…
S69
How AI Drives Innovation and Economic Growth — Appropriate technology solutions for developing countries Zutt advocates for a focus on ‘small AI’ rather than large-sc…
S70
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Hyperlocal and Multi-Modal Forecasting: There was significant discussion about developing AI systems capable of providi…
S71
https://dig.watch/event/india-ai-impact-summit-2026/regional-leaders-discuss-ai-ready-digital-infrastructure — try to invest in the township planning and the implementation. Also, we can have a water supply road project that can be…
S72
Strategy outline — – 3.1 Encourage public-private sectors competition, promote entrepreneurship and innovation in the fields of…
S73
Strategy — ‘Foster the use of AI in vital developmental sectors using partnerships with local beneficiaries and local or foreign te…
S74
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Audience: Good evening, everyone. Is it? Okay. My name is Lydia Lamisa Akamvareba from Ghana. I’m looking at the team up…
S75
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Large language models can be run on personal laptops
S76
Webinar :Using current and emerging cyber tools for disaster management in Africa — Alphonso Wilson:Yeah, shortly, I think I’ll be very brief. Yeah, in response to whatever, in the issues of the climate c…
S77
AI for Good – food and agriculture — – Use of remote sensing and geospatial platforms for analyzing drought, water stress, and crop management Dongyu Qu: Ex…
S78
AI for Good Impact Awards — Development | Sustainable development In the pilot program, rangers intercepted two logging crews before the first tree…
S79
UK researchers test robotic dogs and AI for early wildfire detection — Researchers at the University of Bradford are preparing to pilot an AI-enabled wildfire detection system that uses robotic…
S80
Democratizing AI Building Trustworthy Systems for Everyone — Lingua Africa initiative launched to collect local data with communities for spoken languages in partnership with Gates …
S81
WS #219 Generative AI LLMs in Content Moderation Rights Risks — The Low-Resource Language Crisis: Dhanaraj Thakur: Yeah, great. Thank you, Marlena. And thanks for the invitation to…
S82
AI Innovation in India — “The solution is a system or a framework that reasons across modalities and refers to previous conclusions, contradicts …
S83
Ateliers : rapports restitution et séance de clôture (Workshops: report-back and closing session) — AI governance in health: artificial intelligence is already present in the field of heal…
S84
WS #323 New Data Governance Models for African Nlp Ecosystems — Deshni Govender: Sure. I think it’s important also to point out that when we mention the concept of extractive practices…
S85
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Canada’s AI for Development projects in Africa and Latin America have been highly appreciated for their positive impact….
S86
Leveraging AI4All_ Pathways to Inclusion — Language and Low‑Resource Context Challenges
S87
AI as a tech ally in saving endangered languages — Supporting open evaluation benchmarks for low-resource languages
S88
Transforming Health Systems with AI From Lab to Last Mile — Vikalp Sahni identified key technical challenges including building systems that work across multiple languages and gene…
S89
Panel Discussion Inclusion Innovation & the Future of AI — The tension between Ball’s emphasis on frontier AI capabilities and Ramos’s focus on addressing market concentration rep…
S90
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Panellists offered different outlooks on employment implications. Rees-Jones maintained optimism about AI tutoring enhan…
S91
Scaling AI for Billions_ Building Digital Public Infrastructure — Talent development and future outlook
S92
Open Forum #67 Open-source AI as a Catalyst for Africa’s Digital Economy — Moderate disagreement with significant implications. While speakers agree on the fundamental opportunity that open sourc…
S93
Developing capacities for bottom-up AI in the Global South: What role for the international community? — The discussion explored alternatives to mainstream Western AI approaches. Gurumurthy highlighted the BRICS AI declaratio…
S94
Day 0 Event #261 Navigating Ethical Dilemmas in AI-Generated Content — Hamleh creates unique AI models designed specifically for their regional context, built on localized words, terms, defin…
S95
Building an Enabling Environment for Indigenous, Rural and Remote Connectivity — A key point of agreement among speakers was the necessity of making connectivity affordable and accessible. The cost of …
S96
Panel Discussion: 01 — “You know, when you think about the journey that we’ve had till now, the global community has had till now with AI, a lo…
S97
Main Session on Sustainability & Environment | IGF 2023 — Citizens need access to information that enables them to make environmentally responsible choices. It is important for i…
S98
GermanAsian AI Partnerships Driving Talent Innovation the Future — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers demonstrated mutual resp…
S99
Main Session | Policy Network on Meaningful Access — The session began with Vint Cerf emphasising that the definition of meaningful access changes over time and depends on n…
S100
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkab…
S101
Responsible AI in India Leadership Ethics & Global Impact part1_2 — This set the foundational tone for the entire panel discussion, moving away from abstract principles to practical implem…
S102
Heathrow explores AI to ease air traffic congestion — Heathrow Airport, one of the world’s busiest, is trialling an advanced AI system named ‘Amy’ to assist air traffic contr…
S103
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — However, Prince expressed optimism about this transition, arguing that it presents an opportunity to correct longstandin…
S104
Responsible AI for Shared Prosperity — This comment provided empirical validation for the urgency of the initiatives being discussed and introduced the concept…
S105
State of play of major global AI Governance processes — These regulations are context-sensitive, harmonised to varying degrees as needed; traffic regulations in the UK, for exa…
S106
From Technical Safety to Societal Impact Rethinking AI Governanc — impact. Across global AI discussion, safety is too often being framed in technical terms. Model alignment, red teaming, …
S107
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-bharats-health_-addressing-a-billion-clinical-realities — which can be developed by IKAK and other health startups. Where ABDM created the federated architecture, where the model…
S108
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — The session emphasised the benefits of these collaborative approaches, which enable regulators to stay updated on the la…
S109
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S110
Resilient and Responsible AI | IGF 2023 Town Hall #105 — Audience: I have three interventions and I’m going to do it in two minutes. One is at national level, one is at continent…
S111
Google and Cassava expand Gemini access in Africa — Google announced a partnership with Cassava Technologies to widen access to Gemini across Africa. The deal includes data-f…
S112
Google boosts AI and connectivity in Africa — Google has announced new investments to expand connectivity, AI access and skills training across Africa, aiming to accel…
S113
DIGITAL DIVIDENDS — and US$36 billion a year. Data on river flows are essential for disaster risk planning and for planning and…
S114
High Level Leaders Session 3 | IGF 2023 — Audience: Honorable Ministers, Excellencies, distinguished panelists, ladies and gentlemen, it is a great honor to join y…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Alpan Rawal
2 arguments · 127 words per minute · 1158 words · 546 seconds
Argument 1
Small AI is data‑efficient, cheap to run, edge‑deployable and context‑aware (Alpan Rawal)
EXPLANATION
Alpan described small AI as models that require minimal data, are inexpensive to operate, can run on edge devices, and are tailored to the specific local contexts of the communities they serve. He emphasized that such AI should be meaningful for underserved populations, especially in rural India.
EVIDENCE
Alpan explained that small AI should be data-efficient, inexpensive to operate, capable of running on edge devices, and most importantly produce outcomes that are meaningful for the specific communities they serve, particularly underserved rural populations in India [15-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The definition and benefits of small AI, including data efficiency, low cost, edge deployment and local context awareness, are discussed in [S1] and the advantages of smaller models for everyday tasks are highlighted in [S18].
MAJOR DISCUSSION POINT
Defining small AI
AGREED WITH
Zameer Brey, Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche, Antoine Tesniere
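The properties Alpan lists can be made concrete with a toy example. The sketch below is a hypothetical illustration (not anything shown at the session): a tiny Naive Bayes text classifier that learns from a handful of examples and runs entirely on low-power hardware, using only the Python standard library.

```python
# "Small AI" sketch: data-efficient, cheap to run, edge-deployable.
# All labels and example sentences are invented for illustration.
from collections import Counter
import math

def tokens(text):
    return text.lower().split()

class TinyNaiveBayes:
    """Counts word frequencies per label; no GPU, no network needed."""
    def __init__(self):
        self.word_counts = {}      # label -> Counter of words
        self.label_counts = Counter()

    def fit(self, examples):
        for text, label in examples:
            self.label_counts[label] += 1
            self.word_counts.setdefault(label, Counter()).update(tokens(text))

    def predict(self, text):
        total = sum(self.label_counts.values())
        best, best_score = None, float("-inf")
        for label, n in self.label_counts.items():
            counts = self.word_counts[label]
            vocab = sum(counts.values()) + len(counts) + 1
            # log prior + log likelihoods with simple add-one smoothing
            score = math.log(n / total)
            for w in tokens(text):
                score += math.log((counts[w] + 1) / vocab)
            if score > best_score:
                best, best_score = label, score
        return best

# Hypothetical triage examples for a community health-worker assistant.
clf = TinyNaiveBayes()
clf.fit([
    ("fever and cough for three days", "refer"),
    ("severe chest pain shortness of breath", "refer"),
    ("mild headache after field work", "home care"),
    ("small cut on finger cleaned", "home care"),
])
print(clf.predict("patient has fever and chest pain"))  # -> refer
```

A few dozen labeled examples suffice for a narrow, well-scoped task like this, which is the sense in which small AI is "data-efficient".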
Argument 2
Healthy competition among platforms drives innovation; the winner is the solution that fits the user’s context (Alpan Rawal)
EXPLANATION
Alpan argued that competition between AI platforms is beneficial and not a zero‑sum game; the most successful platform will be the one that best addresses the specific needs and context of end users. He highlighted that many problems remain unsolved, providing space for multiple solutions.
EVIDENCE
Alpan stated that healthy competition has driven incredible technological advances, that there are many challenges to solve, and that the platform that best fits the user’s context will be the most relevant, rather than a single winner [386-392].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of competition and multiple platform choices for fostering innovation is emphasized in [S19], while the need for solutions that fit local contexts is underscored in [S24].
MAJOR DISCUSSION POINT
Future outlook and competition among platforms
Zameer Brey
2 arguments · 123 words per minute · 795 words · 385 seconds
Argument 1
Design small AI for specific local problems rather than generic large models (Zameer Brey)
EXPLANATION
Zameer used a traffic analogy to argue that AI solutions should be small, fast, cost‑effective, and suited to local constraints rather than large, generic models that are ill‑suited to specific environments. He emphasized designing AI that fits the lived context of users.
EVIDENCE
He compared Delhi traffic to airplane design, concluding that a smaller, faster, cheaper solution would be appropriate for local transport needs, illustrating the need for locally-tailored AI rather than large generic models [32-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Focusing AI on local problems and community relevance is supported by [S1] and [S24], and the call for appropriate technology in developing settings is echoed in [S25].
MAJOR DISCUSSION POINT
Design small AI for local problems
AGREED WITH
Alpan Rawal, Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche, Antoine Tesniere
Argument 2
Move from black‑box to “glass‑box” verifiable AI with audit trails to ensure repeatability (Zameer Brey)
EXPLANATION
Zameer called for AI systems that are transparent and auditable, allowing users to trace the logic behind outputs. Such “glass‑box” AI would reduce errors and increase trust by making the decision process repeatable and verifiable.
EVIDENCE
He described the need for verifiable AI that shifts from a black-box to a glass-box, exposing the logic chain for each input, enabling audits, repeatability, and prevention of fundamental errors [160-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for transparent, auditable AI systems are made in [S22] (“black box must become a glass box”) and reinforced by the cautionary stance on rapid deployment in [S21].
MAJOR DISCUSSION POINT
Strategies for building reliable, trustworthy small AI
AGREED WITH
Illango Patchamuthu, Antoine Tesniere
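One minimal way to realize the "glass-box" idea is to record, for every output, the input and the exact rule that produced it, so any decision can be audited and replayed. The sketch below is a hypothetical illustration; the rule names and decisions are invented, not from the panel.

```python
# Glass-box sketch: every prediction carries an auditable, repeatable trace.
import json
from datetime import datetime, timezone

class GlassBoxTriage:
    # Ordered, human-readable rules: (name, keyword, decision).
    # The empty-keyword default always matches last.
    RULES = [
        ("danger-sign", "chest pain", "refer immediately"),
        ("fever", "fever", "refer within 24h"),
        ("default", "", "home care advice"),
    ]

    def __init__(self):
        self.audit_log = []  # append-only trail of every decision

    def decide(self, symptoms):
        text = symptoms.lower()
        for name, keyword, decision in self.RULES:
            if keyword in text:
                # Log input, rule fired, output, and time: the "logic chain".
                self.audit_log.append({
                    "input": symptoms,
                    "rule": name,
                    "decision": decision,
                    "at": datetime.now(timezone.utc).isoformat(),
                })
                return decision

triage = GlassBoxTriage()
print(triage.decide("Fever and cough for two days"))  # -> refer within 24h
print(json.dumps(triage.audit_log[-1]["rule"]))       # -> "fever"
```

Because the rules are explicit and ordered, the same input always fires the same rule, which gives the repeatability and auditability Zameer asks for.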
Illango Patchamuthu
7 arguments · 159 words per minute · 1217 words · 459 seconds
Argument 1
Small AI can deliver meaningful outcomes for underserved communities without being “second class” (Illango Patchamuthu)
EXPLANATION
Illango asserted that small AI is not inferior to larger models; it can solve real problems efficiently and accelerate development outcomes in low‑resource settings. He emphasized that small AI should be seen as a means to an end, not a second‑class solution.
EVIDENCE
He explicitly stated that small AI is not second class, can solve problems, and can fast-track development outcomes, countering the perception that it is inferior [120-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The impact of small, context-aware AI for underserved populations is described in [S1]; its non-inferior status and community focus are highlighted in [S24] and [S18].
MAJOR DISCUSSION POINT
Defining small AI and its relevance for local impact
AGREED WITH
Zameer Brey, Antoine Tesniere
Argument 2
Transform pilots into plug‑and‑play solutions that can scale from a single village to larger regions (Illango Patchamuthu)
EXPLANATION
Illango highlighted the importance of moving beyond pilot projects to scalable, replicable solutions that can be deployed across many villages and larger populations. He described the need for clear KPIs and plug‑and‑play models to enable this scaling.
EVIDENCE
He discussed the challenge of pilots losing momentum, and the World Bank’s approach of scaling tested small-AI applications from a single community to larger regions using plug-and-play solutions and appropriate KPIs [115-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Scaling from pilots to plug-and-play deployments is covered in [S1]; the transition from pilots to platforms is detailed in [S26]; challenges of scaling pilots are noted in [S27].
MAJOR DISCUSSION POINT
Deployment, scalability, and ecosystem considerations
AGREED WITH
Aisha Walcott‑Bryant, Wassim Hamidouche
Argument 3
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption (Illango Patchamuthu)
EXPLANATION
Illango emphasized that collaborating with a broad ecosystem of NGOs, governments, academic institutions, and local stakeholders is essential for designing AI that fits local needs and gains community trust. Such partnerships help in data collection, contextual understanding, and implementation.
EVIDENCE
He noted the need to work with partners such as NGOs, governments, and local communities to ensure solutions are trustworthy, reliable, and do not hallucinate, reinforcing the importance of co-creation [115-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Community-driven co-creation and partnership models are highlighted in [S24]; issues of access and openness are discussed in [S28]; capacity-building collaborations are noted in [S32].
MAJOR DISCUSSION POINT
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption
Argument 4
World Bank’s AI Repository of 100 curated use cases provides an open‑access knowledge base for replication (Illango Patchamuthu)
EXPLANATION
Illango described the World Bank’s AI Repository, an openly accessible collection of around 100 use cases across health, education, and agriculture, intended to help other actors replicate successful small‑AI implementations.
EVIDENCE
He explained that the AI Repository is hosted by the World Bank, contains about 100 use cases, and will be openly accessible for others to view and submit vetted use cases [269-272].
MAJOR DISCUSSION POINT
Deployment, scalability, and ecosystem considerations
AGREED WITH
Wassim Hamidouche, Antoine Tesniere
Argument 5
Small AI should augment, not replace, jobs; it must create new employment opportunities in emerging economies (Illango Patchamuthu)
EXPLANATION
Illango argued that AI should support job creation rather than automation that eliminates jobs. Small AI can be leveraged to generate new employment opportunities in emerging economies.
EVIDENCE
He stated that the North Star is job creation and that AI should support the creation and enhancement of jobs rather than automate them away [243-246].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The job-augmenting role of AI is advocated in [S29] (start small, create impact), [S30] (AI as assistive tool), and [S31] (technology to supplement rather than replace jobs).
MAJOR DISCUSSION POINT
Role of AI in development goals and job creation
Argument 6
Building digital literacy, STEM education and up‑skilling are prerequisites for effective AI deployment (Illango Patchamuthu)
EXPLANATION
Illango highlighted that digital literacy, STEM education, and continuous up‑skilling are essential foundations for any AI strategy, ensuring that populations can develop, maintain, and benefit from AI solutions.
EVIDENCE
He listed three prerequisites: digital literacy, up-skilling on AI-related capabilities, and improving STEM capacity in schools and universities [350-354].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for digital literacy, STEM capacity and up-skilling is identified in [S32]; similar emphasis on digital literacy appears in [S6]; and capacity building for digital health is noted in [S33].
MAJOR DISCUSSION POINT
Role of AI in development goals and job creation
Argument 7
AI‑driven improvements in agriculture, health and education can raise household incomes and accelerate inclusive growth (Illango Patchamuthu)
EXPLANATION
Illango explained that deploying small AI in key sectors such as agriculture, health, and education can increase productivity, improve service delivery, and consequently raise household incomes, contributing to inclusive economic growth.
EVIDENCE
He referenced the AI Repository’s 100 use cases that demonstrate how AI can improve service delivery, productivity, and household income, leading to better jobs and inclusive growth [260-263].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s contribution to agriculture productivity and inclusive growth is documented in [S26]; impacts on health and education are discussed in [S33]; broader economic benefits of AI are highlighted in [S25] and household-income gains in [S6].
MAJOR DISCUSSION POINT
Role of AI in development goals and job creation
Aisha Walcott‑Bryant
3 arguments · 0 words per minute · 0 words · 1 second
Argument 1
Accurate weather‑forecasting for rain‑fed agriculture using limited radar infrastructure in Africa (Aisha Walcott‑Bryant)
EXPLANATION
Aisha described Google Research Africa’s effort to improve weather forecasts for rain‑fed agriculture, addressing the scarcity of radar stations on the continent. Better forecasts help farmers plan planting and mitigate climate risks.
EVIDENCE
She noted that Africa has only about 37 weather radar stations compared with roughly 300 in North America and Europe, and that Google launched a continent-wide weather-forecasting service to provide more accurate predictions for rain-fed agriculture [50-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hyperlocal, multi-modal weather forecasting for climate-extreme management is described in [S20]; the African continent-wide weather service initiative is mentioned in [S1].
MAJOR DISCUSSION POINT
Domain‑specific applications of small AI
Argument 2
Creation of multilingual voice datasets for African languages to improve accessibility (Aisha Walcott‑Bryant)
EXPLANATION
Aisha highlighted the development of a voice dataset covering 27 African languages, aiming to improve accessibility and enable voice‑based AI services in rural villages where literacy may be low.
EVIDENCE
She mentioned releasing a dataset of 27 voice languages out of roughly 2,000 African languages, emphasizing that the partnership-led effort focuses on accessibility and reaching rural villages [60-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The release of a 27-language African voice dataset is reported in [S1].
MAJOR DISCUSSION POINT
Domain‑specific applications of small AI
Argument 3
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption (Wassim Hamidouche; Aisha Walcott‑Bryant)
EXPLANATION
Aisha stressed that Google collaborates with local partners, NGOs, and academic institutions to co‑create AI solutions that fit local contexts, ensuring that technology is appropriate and adopted by communities.
EVIDENCE
She described partnership-led work with entities such as Macquarie University and Digital Umaga, emphasizing co-creation and local involvement in data collection and solution design [61-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Community-driven co-creation and partnership models are highlighted in [S24]; issues of access and openness are discussed in [S28]; capacity-building collaborations are noted in [S32].
MAJOR DISCUSSION POINT
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption
Wassim Hamidouche
6 arguments · 152 words per minute · 1552 words · 609 seconds
Argument 1
Open‑source SPARO biodiversity monitoring and Alert California wildfire detection as small‑AI solutions (Wassim Hamidouche)
EXPLANATION
Wassim presented two open‑source, edge‑deployable AI projects: SPARO for acoustic biodiversity monitoring in remote areas, and Alert California, a network of cameras with AI for early wildfire detection. Both are designed to run on low‑resource infrastructure.
EVIDENCE
He described SPARO as a solar-powered acoustic and remote observation system that uses AI to detect animal species and transmits data via satellite in remote regions, already deployed in countries including Colombia, Peru, the United States and Tanzania [82-88]; and he explained Alert California as a 1,300-camera network operating 24/7 with AI tools that detect early fires to enable rapid response [89-97].
MAJOR DISCUSSION POINT
Domain‑specific applications of small AI
AGREED WITH
Alpan Rawal, Zameer Brey, Illango Patchamuthu, Aisha Walcott‑Bryant, Antoine Tesniere
Argument 2
Internet data is >60 % English, leaving low‑resource languages under‑represented (Wassim Hamidouche)
EXPLANATION
Wassim highlighted that the majority of internet text is in English, causing low‑resource languages to be severely under‑represented in training data for large language models.
EVIDENCE
He noted that more than 60 % of internet data is in English, with other high-resource languages such as French and Portuguese following, while low-resource languages, which account for the vast majority of the world's more than 7,000 languages, constitute only a tiny fraction of the data [185-186].
MAJOR DISCUSSION POINT
Challenges of low‑resource languages and data scarcity
Argument 3
Few or no evaluation benchmarks and limited safety alignment for many languages (Wassim Hamidouche)
EXPLANATION
Wassim explained that most low‑resource languages lack benchmark datasets for model evaluation, and safety alignment work is primarily done for English and other high‑resource languages, leaving gaps in reliability and ethical safeguards.
EVIDENCE
He reported that only about 300 languages have at least one benchmark, many have none, and existing benchmarks focus mainly on English-to-language translation without cultural context; safety alignment is also largely limited to English and high-resource languages [187-196].
MAJOR DISCUSSION POINT
Challenges of low‑resource languages and data scarcity
Argument 4
Need to shift from generic data collection to domain‑specific, use‑case driven data gathering (Wassim Hamidouche)
EXPLANATION
Wassim argued that instead of collecting broad, general‑purpose data, efforts should focus on gathering data specific to particular domains, applications, and use‑cases to improve model performance where it matters most.
EVIDENCE
He stated that after sufficient general data collection, the next priority is domain-specific, application-specific data collection to build reliable AI tools for sectors such as healthcare, education, and agriculture [233-237].
MAJOR DISCUSSION POINT
Challenges of low‑resource languages and data scarcity
AGREED WITH
Illango Patchamuthu, Antoine Tesniere
Argument 5
Release open‑weight models and foster community‑driven data pipelines to increase transparency and trust (Wassim Hamidouche)
EXPLANATION
Wassim highlighted that making model weights open and encouraging community participation in data collection enhances transparency, trust, and enables broader deployment of small AI solutions.
EVIDENCE
He pointed to open-weight models such as Google’s Gemma, which can run on laptops and tablets, and argued that open models and community-driven pipelines are key to successful deployment [144-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-weight models and community data pipelines are advocated in [S28]; the need for transparent “glass-box” AI is discussed in [S22]; trust and safety concerns are raised in [S21].
MAJOR DISCUSSION POINT
Strategies for building reliable, trustworthy small AI
Argument 6
Large language models and small, open‑source models will coexist; collective open‑source efforts are essential to reach parity for low‑resource languages (Wassim Hamidouche)
EXPLANATION
Wassim asserted that both large foundation models and smaller open‑source models will have roles, and that collaborative open‑source initiatives are crucial to bring low‑resource language performance closer to that of English.
EVIDENCE
He noted that current efforts may not yet make low-resource models as capable as English-language ones, but that collective open-source work is necessary to eventually achieve that objective [396-398].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The coexistence of large and small models and the necessity of collective open-source work for low-resource languages are mentioned in [S24]; the suitability of smaller models for many tasks is argued in [S18]; and appropriate technology for developing contexts is highlighted in [S25].
MAJOR DISCUSSION POINT
Future outlook and competition among platforms
Antoine Tesniere
2 arguments · 151 words per minute · 1,246 words · 492 seconds
Argument 1
Radiology, dermatology and ophthalmology AI tools that run on low‑cost hardware for point‑of‑care diagnostics (Antoine Tesniere)
EXPLANATION
Antoine described existing small AI models used in healthcare for image analysis in radiology, dermatology, and ophthalmology that can operate on inexpensive hardware at the point of care, providing reliable diagnostics.
EVIDENCE
He explained that small AI models are already validated for tasks such as chest X-ray analysis, fracture detection, dermatology and ophthalmology image analysis, and can be deployed on low-cost computers for point-of-care use [102-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Point-of-care AI for medical imaging on inexpensive hardware is discussed in [S33]; the assistive role of AI in health settings is reinforced in [S30].
MAJOR DISCUSSION POINT
Domain‑specific applications of small AI
Argument 2
Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
EXPLANATION
Antoine emphasized the need for AI algorithms that can run offline on modest hardware, reducing dependence on high‑performance computing and limiting unpredictable failures, especially in low‑resource settings.
EVIDENCE
He highlighted that offline-capable, edge-native AI and data-efficient learning systems are essential for low- and middle-income countries, noting that models must run on simple computers or smartphones and be robust without constant connectivity [332-340].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for robust, offline-capable AI algorithms is emphasized in [S21] and [S22]; digital health implementations requiring offline reliability are noted in [S33].
MAJOR DISCUSSION POINT
Strategies for building reliable, trustworthy small AI
AGREED WITH
Illango Patchamuthu, Zameer Brey
Aisha Walcott-Bryant
3 arguments · 167 words per minute · 1,139 words · 408 seconds
Argument 1
Adopt a problem‑first approach: build simple, non‑AI solutions when they suffice and only apply AI when there is a clear, unmet need.
EXPLANATION
Aisha emphasizes that the team should start by identifying the concrete problem and consider the simplest solution, such as a literal red button, before introducing AI or complex technology.
EVIDENCE
She states, “It’s very much problem first” and illustrates the mindset by saying, “if there’s a red button that… you can press and it’s a one-or-zero, just build the red button. We don’t need to bring AI or technology.” [44-46]
MAJOR DISCUSSION POINT
Problem‑first design of AI solutions
Argument 2
Deploy open‑weight, nano‑scale models (e.g., Gemma) that can run on laptops and tablets, enabling edge AI for African communities.
EXPLANATION
Aisha highlights that Google’s open‑weight models are intentionally lightweight so they can operate on low‑cost devices, bringing AI capabilities directly to users in remote or low‑resource settings.
EVIDENCE
She notes, “our open models, our open weight models, Gemma, are made for a lot of these solutions that are more closer to the edge… we have nano models that can run on your laptop and tablets and so forth.” [144-145]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The push for open-weight, edge-compatible models and community openness is covered in [S28]; transparency and verifiability of such models are emphasized in [S22].
MAJOR DISCUSSION POINT
Edge‑deployable open‑weight AI models
Argument 3
Leverage Google’s massive compute, AI expertise, and societal‑impact mandate to address African challenges at scale.
EXPLANATION
Aisha explains that Google Research Africa uses its global computing resources and AI know‑how, aligned with a mandate for large‑scale societal impact, to tackle problems that are unique to the continent.
EVIDENCE
She says, “Coming from Google Research, we want to leverage our compute, our AI expertise and capabilities, and then our mandate, which is the societal impact at scale, to think about the types of problems that we work on.” [48-49]
MAJOR DISCUSSION POINT
Using corporate resources for societal impact
Agreements
Agreement Points
Small AI should be lightweight, data‑efficient, cheap to run, edge‑deployable and tailored to local contexts and underserved communities
Speakers: Alpan Rawal, Zameer Brey, Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche, Antoine Tesniere
Small AI is data‑efficient, cheap to run, edge‑deployable and context‑aware (Alpan Rawal)
Design small AI for specific local problems rather than generic large models (Zameer Brey)
Small AI can deliver meaningful outcomes for underserved communities without being “second class” (Illango Patchamuthu)
Deploy open‑weight, nano‑scale models (e.g., Gemma) that can run on laptops and tablets, enabling edge AI for African communities (Aisha Walcott‑Bryant)
Open‑source SPARO biodiversity monitoring and Alert California wildfire detection as small‑AI solutions (Wassim Hamidouche)
Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
All panelists described small AI as models that require minimal data, are inexpensive, can run on edge devices, and are designed for the specific local contexts of underserved populations, whether in rural India, African villages, or low-resource health settings [15-19][32-34][120-124][144-145][84-88][332-340].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on lightweight, data-efficient, edge-deployable AI for underserved communities mirrors the partnership-led African languages initiative that stresses accessibility and local relevance [S42] and the World Bank’s promotion of edge AI models for low-income farmers in Africa [S62].
Co‑creation with local partners (NGOs, governments, academia, communities) is essential for relevance, adoption and trust
Speakers: Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption (Illango Patchamuthu)
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption (Aisha Walcott‑Bryant)
Collaboration with NGOs, governments, nonprofit organizations, and local communities to build AI solutions (Wassim Hamidouche)
Each speaker highlighted that working together with NGOs, governments, academic institutions and community members is crucial to design AI that fits local needs and gains trust [115-124][61-65][76-78].
POLICY CONTEXT (KNOWLEDGE BASE)
Co-creation with NGOs, governments and communities is highlighted as a core principle in multiple inclusive AI forums, including the partnership-driven approach in African language projects [S42], the community-centric AI capacity building discussed at the WSIS+20 event [S46], and OECD-style guidance on earning trust through local co-creation [S53].
Shift from generic data collection to domain‑specific, use‑case driven data gathering to improve model performance where it matters
Speakers: Wassim Hamidouche, Illango Patchamuthu, Antoine Tesniere
Need to shift from generic data collection to domain‑specific, use‑case driven data gathering (Wassim Hamidouche)
World Bank’s AI Repository of 100 curated use cases provides an open‑access knowledge base for replication (Illango Patchamuthu)
Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
The panelists agreed that after gathering general data, the priority should be collecting domain-specific data (e.g., health, agriculture, education) to build reliable small AI tools, as reflected in the AI Repository and the emphasis on data-efficient learning [233-237][258-262][332-340].
POLICY CONTEXT (KNOWLEDGE BASE)
Moving from generic to domain-specific data collection is advocated in analyses of low-resource language inequities, which call for targeted community-led data pipelines [S50] and funded annotation projects for endangered languages [S51]; similar needs for granular, policy-relevant data are noted in energy transformation discussions [S59].
Small AI is not inferior; it can reliably solve problems and accelerate development outcomes
Speakers: Illango Patchamuthu, Zameer Brey, Antoine Tesniere
Small AI can deliver meaningful outcomes for underserved communities without being “second class” (Illango Patchamuthu)
Move from black‑box to “glass‑box” verifiable AI with audit trails to ensure repeatability (Zameer Brey)
Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
All three emphasized that small AI should be seen as a first-class solution, with transparency and reliability, rather than a lesser alternative [120-124][160-165][332-340].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence that Small AI can match or exceed larger models appears in the African small-AI case study showing social impact [S42] and the World Bank’s report on edge AI delivering tangible outcomes for smallholder agriculture [S62]; scaling platforms further demonstrate its effectiveness [S63].
Scaling pilots into plug‑and‑play, replicable solutions is essential for broader impact
Speakers: Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche
Transform pilots into plug‑and‑play solutions that can scale from a single village to larger regions (Illango Patchamuthu)
Our work is scaling from the uniqueness of the continent (Aisha Walcott‑Bryant)
SPARO and Alert California are global solutions that can be deployed anywhere (Wassim Hamidouche)
Panelists highlighted the need to move beyond proof-of-concept pilots toward scalable, reusable small AI deployments that can be replicated across regions and countries [115-124][50-58][84-88].
POLICY CONTEXT (KNOWLEDGE BASE)
The necessity of moving from pilots to plug-and-play, replicable solutions is echoed in several sectoral roadmaps, such as the agriculture scaling framework that stresses platform-level deployment [S63], the pilot-to-scale guidance from WSIS Action Line C7 [S64], and calls for rapid transition from research to impact in climate-resilient AI [S65] and digital agriculture [S66].
Similar Viewpoints
Both see competition and diversity of solutions as beneficial, provided they are context‑appropriate and serve underserved users rather than seeking a single dominant platform [386-392][120-124].
Speakers: Alpan Rawal, Illango Patchamuthu
Healthy competition among platforms drives innovation; the winner is the solution that fits the user’s context (Alpan Rawal)
Small AI can deliver meaningful outcomes for underserved communities without being “second class” (Illango Patchamuthu)
Both stress the importance of open‑weight/open‑source models that can run on low‑cost edge devices to broaden access in low‑resource settings [84-88][144-145].
Speakers: Wassim Hamidouche, Aisha Walcott‑Bryant
Open‑source SPARO biodiversity monitoring and Alert California wildfire detection as small‑AI solutions (Wassim Hamidouche)
Deploy open‑weight, nano‑scale models (e.g., Gemma) that can run on laptops and tablets, enabling edge AI for African communities (Aisha Walcott‑Bryant)
Both advocate for transparent, reliable AI that can be audited and run offline to avoid unpredictable failures in critical settings [160-165][332-340].
Speakers: Zameer Brey, Antoine Tesniere
Move from black‑box to “glass‑box” verifiable AI with audit trails to ensure repeatability (Zameer Brey)
Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
Unexpected Consensus
Use of offline, edge‑native AI models in low‑resource health and agriculture contexts
Speakers: Antoine Tesniere, Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche
Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
Small AI can deliver meaningful outcomes for underserved communities without being “second class” (Illango Patchamuthu)
Deploy open‑weight, nano‑scale models that can run on laptops and tablets (Aisha Walcott‑Bryant)
Open‑source SPARO and Alert California are edge‑deployable small‑AI solutions (Wassim Hamidouche)
While each speaker focused on different domains (health, agriculture, biodiversity, wildfire detection), they all converged on the necessity of offline, low-cost, edge-compatible AI for impact in low-resource settings – a point not explicitly raised in the opening definitions but emerging across sectors [332-340][120-124][144-145][84-88].
POLICY CONTEXT (KNOWLEDGE BASE)
Offline, edge-native AI for health and agriculture aligns with the World Bank’s showcase of edge AI models for low-resource farming [S62] and the broader push for accessible, low-cost AI tools in low-income settings highlighted at IGF 2023 [S60] and in digital agriculture reports [S66].
Open‑source, community‑driven data pipelines as a strategy to improve low‑resource language models
Speakers: Wassim Hamidouche, Aisha Walcott‑Bryant
Release open‑weight models and foster community‑driven data pipelines to increase transparency and trust (Wassim Hamidouche)
Partnership‑led data collection for African voice languages, making datasets openly available (Aisha Walcott‑Bryant)
Both highlighted that open, community‑generated data and model weights are key to advancing AI for low‑resource languages, a consensus that bridges corporate (Google) and corporate‑research (Microsoft) perspectives.
POLICY CONTEXT (KNOWLEDGE BASE)
Open-source, community-driven data pipelines are promoted as a remedy for the low-resource language crisis, with calls for community-led collection and open benchmarks in recent analyses [S50][S51] and exemplified by the African voice dataset initiative [S42].
Overall Assessment

The panel displayed strong consensus that small AI should be lightweight, data‑efficient, edge‑deployable, and co‑created with local stakeholders to address specific community needs. Participants agreed on the importance of domain‑specific data, open‑source models, transparency, and scaling pilots into reusable solutions. There was also a shared belief that small AI is not inferior but can reliably accelerate development outcomes.

High consensus across technical, ethical, and development dimensions, indicating a unified vision that small, context‑aware AI, built through partnerships and open practices, can play a pivotal role in achieving inclusive social and economic development.

Differences
Different Viewpoints
Required reliability and error tolerance for small AI in healthcare diagnostics
Speakers: Zameer Brey, Antoine Tesniere
Move from black‑box to “glass‑box” verifiable AI with zero error to prevent fatal mistakes (Zameer Brey)
Current small AI models are not 99.999 % accurate but are still better than existing practice and acceptable for deployment in low‑resource settings (Antoine Tesniere)
Zameer stresses that small AI must achieve near-zero error and be fully auditable to avoid catastrophic outcomes, citing a maternal hypertension case where a lack of reliable AI contributed to a death [153-166][168-174]. Antoine counters that while models are not perfect, they already outperform current clinical practice and can be used effectively, especially when run offline on modest hardware, even if accuracy is below 99.999 % [300-311][332-340].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over required reliability in healthcare diagnostics reflects findings that patients demand near-zero error rates, contrasting with higher tolerance among IT professionals, as documented in the AI for Bharat’s Health discussion [S55]; the need for sandbox testing to establish evidence bases is also noted [S56].
Unexpected Differences
Overall Assessment

The panel largely converged on the importance of small, context‑aware AI for underserved communities, agreeing on goals such as accessibility, partnership, and scalability. The principal point of contention concerned the acceptable level of reliability for health‑focused small AI, with Zameer demanding near‑zero error and full auditability, while Antoine argued that current, imperfect models already provide net benefits and are suitable for deployment in low‑resource settings.

Overall disagreement was low; the debate centered on a single technical nuance (reliability standards) rather than fundamental strategic differences, suggesting that consensus on the broader vision of small AI is strong, with only modest implications for implementation pathways.

Partial Agreements
All speakers share the overarching goal of deploying AI that benefits underserved or low‑resource populations, but they diverge on the primary strategy: Alpan emphasizes data‑efficiency and edge deployment; Zameer stresses local problem‑fit and verifiability; Illango focuses on scaling plug‑and‑play solutions; Aisha advocates a problem‑first mindset and open‑weight models; Wassim calls for domain‑specific data pipelines; Antoine highlights hardware‑efficient, offline algorithms for scarce data environments. These differing pathways are reflected throughout the discussion [15-19][32-34][120-124][44-46][144-145][233-237][300-311].
Speakers: Alpan Rawal, Zameer Brey, Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche, Antoine Tesniere
Small AI should be data‑efficient, cheap to run, edge‑deployable and context‑aware (Alpan Rawal)
Design small AI for specific local problems rather than generic large models (Zameer Brey)
Small AI can deliver meaningful outcomes for underserved communities without being “second class” (Illango Patchamuthu)
Problem‑first approach; use simple non‑AI solutions when possible and leverage open‑weight nano models for edge use (Aisha Walcott‑Bryant)
Open‑source, domain‑specific data collection and community‑driven pipelines are needed for reliable small AI (Wassim Hamidouche)
Data‑efficient, hardware‑integrated models are essential for healthcare, especially offline in low‑resource settings (Antoine Tesniere)
All three emphasize that multi‑stakeholder partnership and co‑creation are key to designing and deploying small AI that fits local contexts. While the wording differs, the consensus is that collaboration with NGOs, governments, academia and community actors underpins successful implementation [115-124][61-65].
Speakers: Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption (Illango Patchamuthu)
Partnership‑led data collection and ecosystem engagement are essential for building appropriate solutions (Aisha Walcott‑Bryant)
Collaboration with NGOs, governments and local communities is crucial for trustworthy, reliable AI deployments (Wassim Hamidouche)
Takeaways
Key takeaways
Small AI is defined as data‑efficient, low‑cost, edge‑deployable models that are tailored to local contexts rather than generic large foundation models.
Designing AI for impact requires focusing on specific community problems (e.g., district hospitals, small farmers) and involving local stakeholders in model design and deployment.
Domain‑specific small‑AI solutions demonstrated real impact: accurate weather forecasting for rain‑fed agriculture in Africa, open‑source biodiversity monitoring (SPARO) and wildfire detection (Alert California), point‑of‑care radiology/dermatology tools, and multilingual voice datasets for African languages.
Low‑resource languages face major challenges: dominance of English in internet data, scarcity of benchmarks, and limited safety/alignment work. Targeted data collection and continual pre‑training can narrow performance gaps.
Trustworthiness is essential; moving from black‑box to “glass‑box” verifiable AI, releasing open‑weight models, and ensuring offline, hardware‑efficient operation reduce unpredictable failures.
Scalability hinges on turning pilots into plug‑and‑play solutions, co‑creating with NGOs, governments, academia, and local partners, and providing open‑access repositories of use cases (World Bank AI Repository).
AI should augment, not replace, jobs; building digital literacy, STEM education, and up‑skilling are prerequisites for inclusive economic growth in emerging economies.
Healthy competition among platforms (Google, Microsoft, World Bank, etc.) is beneficial; the “winner” is the solution that best fits the user’s context, and open‑source collaboration is key to reaching parity for low‑resource languages.
Resolutions and action items
World Bank to host and maintain an open‑access AI Repository of ~100 curated small‑AI use cases for health, education, agriculture, and job creation.
Microsoft announced the Lingua Africa initiative (US$5.5 M) to fund domain‑specific data collection for African languages, building on the earlier Lingua Europe program.
Google Research Africa committed to releasing open‑weight models (e.g., Gemma) and multilingual voice datasets, and to continue partnership‑driven data collection across the continent.
Microsoft’s SPARO and Alert California solutions will remain open‑source for global deployment.
Panelists emphasized the need to shift future data‑collection efforts toward domain‑specific, use‑case‑driven datasets rather than generic large‑scale corpora.
All participants agreed to pursue co‑creation models with local NGOs, governments, and academic partners for future pilots.
Unresolved issues
How to achieve near‑zero error rates and verifiable audit trails for critical health applications, especially in low‑resource settings.
Standardized benchmarks and safety alignment procedures for the majority of low‑resource languages remain lacking.
Ensuring reliable model performance that consistently exceeds average clinician accuracy without introducing unpredictable failures.
Addressing hardware affordability for the bottom 40 % of populations who cannot currently afford edge devices.
Concrete strategies for large‑scale digital literacy and up‑skilling programs across diverse emerging economies were discussed but not detailed.
Suggested compromises
Treat small AI as complementary to large foundation models: use open‑weight large models as a base, then fine‑tune with domain‑specific, low‑resource data.
Adopt a “glass‑box” approach: provide transparency and auditability while still leveraging powerful pretrained models.
Combine open‑source community contributions with targeted funding (e.g., Lingua Africa) to balance broad participation and focused resource allocation.
Deploy pilots as plug‑and‑play modules that can be replicated and scaled, acknowledging that not every pilot will immediately become a full‑scale solution.
Thought Provoking Comments
Small AI is defined as models that are data‑efficient, cheap to run, edge‑deployable and, most importantly, meaningful to the specific local communities they serve.
Sets the conceptual framework for the entire panel, moving the conversation away from the hype around large foundation models toward concrete criteria of relevance, efficiency, and impact.
Guided all subsequent speakers to frame their work in terms of data efficiency and local relevance, establishing a common language that shaped the direction of the discussion.
Speaker: Alpan Rawal (moderator)
Would anyone, given Delhi traffic, design something as big as an aeroplane to get across the city? No – we would design something smaller, faster, cheaper, that gets us from point A to point B.
Uses a vivid, everyday analogy to illustrate why AI solutions must be appropriately scaled to the problem context, challenging the assumption that bigger models are always better.
Prompted other panelists (e.g., Aisha and Illango) to discuss concrete constraints like limited infrastructure and to emphasize designing for low‑resource settings.
Speaker: Zameer Brey
In Africa we have only 37 weather radar stations compared to 300 in North America/Europe. To provide accurate forecasts we had to innovate with far fewer resources.
Highlights a stark data‑infrastructure disparity and demonstrates how small AI can be engineered to overcome such gaps, reinforcing the panel’s theme of resource‑constrained innovation.
Shifted the conversation toward concrete technical challenges (data scarcity) and led to deeper discussion about language data collection and open‑weight models.
Speaker: Aisha Walcott‑Bryant
A community health worker missed a case of severe gestational hypertension because she lacked a small AI model on her low‑cost smartphone; with such a model the outcome could have been very different.
Provides a powerful, human‑centered story that illustrates the life‑saving potential of small, offline AI, moving the debate from abstract benefits to tangible health outcomes.
Triggered a focus on reliability and safety, prompting Zameer later to discuss ‘verifiable AI’ and influencing others (e.g., Antoine) to stress human‑in‑the‑loop decision making.
Speaker: Zameer Brey
We need to move from black‑box to glass‑box AI – models whose logic can be audited and verified, especially when zero‑error performance is required for critical decisions.
Introduces the concept of verifiable AI, challenging the prevailing acceptance of opaque models and raising the bar for accountability in low‑resource deployments.
Deepened the technical discussion, leading Wassim to talk about safety alignment in low‑resource languages and prompting audience concerns about model hallucinations.
Speaker: Zameer Brey
Low‑resource languages suffer from three major gaps: lack of training data, lack of benchmarks, and safety alignment mostly done in English. We are launching initiatives like Lingua Africa to fund data collection and domain‑specific fine‑tuning.
Systematically outlines the structural challenges of multilingual AI and presents concrete, funded initiatives, moving the conversation from problem identification to actionable solutions.
Spurred follow‑up questions about domain‑specific data collection, influenced Illango’s remarks on scaling pilots, and set the stage for audience queries about open‑weight models.
Speaker: Wassim Hamidouche
Small AI is not inferior or second‑class; it can fast‑track development outcomes, and the key is to replicate proven pilots at scale with the right KPIs.
Directly counters a common perception that smaller models are less capable, emphasizing scalability and impact measurement, which reframes the discussion toward implementation strategy.
Guided the panel toward talking about replication across regions (e.g., projects in UP, Maharashtra) and reinforced the importance of trust and reliability raised earlier.
Speaker: Illango Patchamuthu
In healthcare, small AI models already power radiology, dermatology, and ophthalmology analyses on edge devices; they provide information, not decisions, preserving the human‑in‑the‑loop model.
Shows that small AI is already mainstream in a high‑stakes domain, illustrating practical deployment and the necessity of human oversight, which adds nuance to the “small vs. large” debate.
Prompted further discussion on offline capability, data efficiency, and the balance between algorithmic assistance and clinician judgment.
Speaker: Antoine Tesniere
Job creation is the North Star for AI in development; we must build ecosystems, digital public infrastructure, and local private‑sector capacity so AI augments rather than replaces jobs.
Broadens the conversation from technical solutions to socioeconomic outcomes, linking AI deployment to sustainable development goals and policy considerations.
Shifted the tone toward macro‑level strategy, leading to mentions of the AI Repository, skilling initiatives, and the need for trustworthy, reliable models to maintain community confidence.
Speaker: Illango Patchamuthu
Healthy competition among platforms is not a zero‑sum game; relevance to the end‑user context matters more than who ‘wins’ the AI wars.
Addresses a provocative audience question with a perspective that reframes competition as collaborative innovation, reinforcing the panel’s inclusive ethos.
Closed the discussion on a unifying note, encouraging continued collaboration among Google, Microsoft, the World Bank, and other stakeholders.
Speaker: Alpan Rawal
Overall Assessment

The discussion was anchored by Alpan’s definition of small AI, which established a shared lens for all participants. Key turning points—Zameer’s traffic analogy, Aisha’s radar‑station disparity, the gestational‑hypertension story, and Wassim’s breakdown of low‑resource language challenges—each introduced new dimensions (contextual relevance, data scarcity, real‑world impact, and multilingual barriers) that redirected the conversation toward concrete technical and policy solutions. Illango’s emphasis on scalability, replication, and job creation broadened the scope from technology to development outcomes, while Antoine’s examples of existing edge‑AI in healthcare grounded the debate in current practice. Collectively, these insightful comments moved the panel from abstract notions of ‘small AI’ to actionable strategies, highlighting the necessity of data efficiency, verifiability, local partnership, and ecosystem building to achieve meaningful social impact.

Follow-up Questions
How can we develop verifiable (glass‑box) AI models with near‑zero error for critical health applications?
Ensuring reliability and auditability is crucial for small AI tools used by community health workers, where mistakes can be fatal.
Speaker: Zameer Brey
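The "glass-box" requirement raised here can be illustrated with a minimal sketch. All rule names, thresholds, and field names below are hypothetical illustrations, not from the session: the point is only that an interpretable rule list lets every recommendation carry the exact rules that fired, so a supervisor can audit each decision a community health worker was shown.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """Auditable output: the recommendation plus every rule that fired."""
    refer: bool
    fired_rules: list = field(default_factory=list)

# Hypothetical, human-readable triage rules: (name, predicate).
RULES = [
    ("systolic_bp >= 140", lambda v: v["systolic_bp"] >= 140),
    ("diastolic_bp >= 90", lambda v: v["diastolic_bp"] >= 90),
    ("proteinuria present", lambda v: v["proteinuria"]),
]

def triage(vitals: dict) -> Decision:
    # Glass-box property: the decision is fully determined by the
    # named rules, and the audit trail records each one that fired.
    fired = [name for name, pred in RULES if pred(vitals)]
    return Decision(refer=bool(fired), fired_rules=fired)

d = triage({"systolic_bp": 150, "diastolic_bp": 85, "proteinuria": False})
print(d.refer, d.fired_rules)  # True ['systolic_bp >= 140']
```

Unlike a learned black-box classifier, such a model trades coverage for verifiability: it can be reviewed line by line, and its error behavior is predictable, which is what "near-zero error for critical health applications" effectively demands.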
What strategies should practitioners of small‑language or domain‑specific models adopt to improve performance and safety?
Guidance is needed on selecting base multilingual models, data collection (monolingual, bilingual, translation), and safety alignment for low‑resource languages.
Speaker: Wassim Hamidouche
What are effective strategies for scaling pilot small‑AI projects to larger populations while maintaining impact?
Pilots often lose momentum; defining KPIs and replication pathways is essential for broader development outcomes.
Speaker: Illango Patchamuthu
Do small AI models fail more unpredictably than human clinicians in healthcare diagnostics?
Understanding comparative failure modes informs safe integration of AI into clinical workflows.
Speaker: Alpan Rawal (directed to panel)
How can AI capacity be increased for youth in the agricultural sector of rural India to drive economic inclusion and climate‑resilient practices?
Targeted AI education and tools for young farmers can boost productivity and climate adaptation.
Speaker: Audience (Irish Kumar) – addressed by Illango Patchamuthu
What are the technical and size implications of using open‑source, open‑weight LLMs for domain‑specific, low‑resource language models?
Understanding model selection, tokenization, and data augmentation is key for practical deployment.
Speaker: Audience (Selena) – addressed by Wassim Hamidouche
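One concrete size implication behind this question is tokenizer "fertility": a vocabulary trained mostly on high-resource languages fragments low-resource text into many more tokens, inflating sequence length, cost, and effective context use. The toy sketch below (a greedy longest-match tokenizer with a hypothetical, English-skewed vocabulary; real open-weight LLMs use BPE or SentencePiece) shows how an out-of-vocabulary string degrades to one token per character:

```python
def greedy_tokenize(text: str, vocab: set) -> list:
    """Toy longest-match tokenizer with single-character fallback."""
    tokens, i = [], 0
    while i < len(text):
        # Try the longest vocabulary entry starting at position i;
        # fall back to a single character if nothing matches.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

# Hypothetical vocabulary skewed toward English subwords.
vocab = {"the", "weather", "fore", "cast"}

covered = greedy_tokenize("theweatherforecast", vocab)
unseen = greedy_tokenize("nkhaniyanyengo", vocab)  # out-of-vocabulary string

print(len(covered), covered)  # 4 ['the', 'weather', 'fore', 'cast']
print(len(unseen))            # 14 -- one token per character
```

Measuring this fertility gap on target-language text is a cheap first check when selecting a base open-weight model, before investing in data augmentation or continued pre-training.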

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.