How nonprofits are using AI-based innovations to scale their impact
20 Feb 2026 14:00h - 15:00h
Summary
The panel, moderated by Manohar Sreekanth, examined a four-month AI cohort program run from September to December that was anchored by Project Tech for Dev and involved seven nonprofit organizations building AI use cases [50-53]. Erica Arya explained that the cohort model was chosen to provide hands-on mentorship, capture learnings from a small group, enable peer sharing, and reuse intellectual property while lowering barriers such as lack of engineering resources [64-71][73-79][82-84][85-94]. Tamina Madon described how the Agency Fund applied the accelerator-style cohort approach common in Silicon Valley, creating a shared pool of technical and product-management staff to supply resources that individual NGOs could not afford [106-129][130-138].
Mainak Roy illustrated the Simple Education Foundation’s challenge of giving teachers context-specific pedagogical advice and how they built a WhatsApp-based chatbot to deliver evidence-based recommendations [155-163][164-168]. Steven Suting highlighted Quest Alliance’s problem of heterogeneous learners and teacher shortages, prompting development of an AI system that emulates human cues to personalize learning pathways [166-180][181-186]. Preetam Sukumar recounted Avanti Fellows’ shift from improving student test reports to generating AI-summarized mentor scripts, noting pilot deployment with 15 teachers and issues with LLM hallucinations that required prompt engineering [185-200][207-214][216-228].
The program incorporated a mentorship model pairing each nonprofit with one or two mentors, and introduced guardrails such as slur-lists and conversation filters to ensure responsible AI behavior [236-244][245-247]. Erica added that two knowledge partners, Digital Future Labs and Statle, were engaged to embed responsible-AI principles and safety considerations from the outset of the cohort [300-308][309-312]. Participants reported that having a modular blueprint and shared frameworks accelerated prototyping, prevented analysis paralysis, and facilitated cross-NGO collaboration, exemplified by two health NGOs jointly building a high-risk pregnancy predictor [319-323][368-381].
Across the NGOs, a common learning was to focus on existing pain points rather than chasing AI hype, to leverage open-source tools, and to adopt evaluation frameworks such as the “golden dataset” for quality control [400-404][410-419][426-433][449-454]. Manohar noted that such early-stage collaboration is rare in the for-profit sector, underscoring the unique value of the social-sector cohort model [387-389], and invited audience members to explore the program’s published documentation for further guidance [455-456]. The discussion concluded that the cohort’s blend of mentorship, shared resources, responsible-AI integration, and collaborative learning provided a practical roadmap for nonprofits seeking to adopt AI responsibly and at scale [455-456].
Keypoints
Major discussion points
– Purpose and structure of the AI Cohort program – The moderator frames the session around a four-month AI cohort run by Project Tech for Dev, describing its goals, the number of NGOs involved, and the agenda for the discussion [50-55]. Erica then outlines the program’s origins, its open-application screening, free pilot nature, and the emphasis on mentorship and peer learning [70-78][84-93].
– Why a cohort model and funder involvement – Both Erica and Tamina explain that cohort-based learning enables deep-rooted feedback, shared resources, and faster iteration, drawing on experiences from the Data Catalyst program and Silicon-Valley accelerator models [70-78][106-118]. Tamina adds that the Agency Fund creates a shared pool of technical staff and product managers to lower barriers for NGOs [123-130].
– AI use-case implementations in education
– Simple Teacher Buddy (Mainak Roy) – a WhatsApp-based chatbot that helps teachers select evidence-based classroom strategies [151-162]; later challenges with the bot’s “hi” trigger and need for guardrails are discussed [241-246].
– Quest Alliance (Steven Suting) – a data-driven digital learning platform that personalises pathways for learners and explores behaviour-change through AI-driven signalling [165-180][253-266].
– Avanti Fellows (Preetam Sukumar) – an AI-generated mentorship script that guides teacher-student conversations and provides low-touch mentorship to online learners, currently between pilot and rollout with issues around LLM hallucinations [185-200][207-214].
– Technical and ethical challenges plus learnings – Hallucinations in LLM outputs and bot glitches are highlighted as major hurdles [209-214][241-246]. The program’s built-in responsible-AI and AI-safety partners (Digital Future Labs, Statle) are cited as a way to embed safeguards from the start [300-307]. Participants stress the importance of evaluation frameworks (user, impact, “golden set”) to ensure trust and scalability [272-279][418-420].
– Practical recommendations for NGOs – Panelists advise NGOs to start from concrete pain points rather than chasing AI for its own sake, to leverage open-source tools, avoid reinventing solutions, and use collaborative learning to reduce duplication [400-404][426-433][405-407].
Overall purpose / goal
The discussion was convened to review the AI Cohort program (its genesis, design, and funding model), showcase concrete AI pilots from participating NGOs, surface the practical challenges they faced, and distil actionable insights that can guide other nonprofit organisations in adopting AI responsibly and effectively [50-55][290-307][400-404].
Tone of the conversation
– Opening – Light-hearted and experimental, with the moderator joking about delegating introductions to an AI [5-7].
– Middle – Collaborative and reflective; speakers share detailed narratives of their projects, acknowledge difficulties, and discuss technical/ethical nuances in a constructive manner [151-214][253-266][300-307].
– Closing – Summative and encouraging, emphasizing lessons learned, offering concrete advice, and inviting further engagement [400-433][455-456]. The tone remains supportive throughout, shifting from informal banter to focused problem-solving and finally to a forward-looking, advisory stance.
Speakers
– Preetam Sukumar – Technology and Research Leader, Avanti Fellows; focuses on AI-driven educational tools and impact measurement. [S1][S2]
– Erica Arya – CEO, Project Tech for Dev; leads tech advisory for NGOs, develops open-source platforms and cohort-based AI programs. [S3]
– Tamina Madon – Co-founder, Agency Fund; works at the intersection of technology, philanthropy, and economic mobility, investing in services that empower human agency and create social impact at scale. [S4][S5]
– Steven Suting – Director of Technology and Product, Quest Alliance; oversees AI-powered digital learning platforms for youth across India, emphasizing data-driven innovation. [S6]
– Mainak Roy – Co-founder & CEO, Simple Education Foundation; builds AI-enabled teacher support tools (e.g., Simple Teacher Buddy) and focuses on scalable education solutions. [transcript]
– Elizabeth – AI conversational assistant used to introduce panelists; no formal title or affiliation mentioned.
– Manohar Sreekanth – Moderator; Partner and CTO at Satva Consulting, working at the intersection of tech, data, and AI for social-sector impact. [S11]
Additional speakers:
– None identified beyond the listed speakers.
The session opened with moderator Manohar Sreekanth welcoming the audience and, in keeping with the AI-centric theme of the summit, delegating the introductions of the panelists to the conversational assistant Elizabeth [1-13]. After a brief technical glitch that was quickly resolved [10-17], Manohar asked Elizabeth to introduce each participant, setting a tone of experimentation and collaboration.
Manohar then framed the core of the panel: a four-month AI cohort programme that ran from September to December, anchored by Project Tech for Dev, and involving seven nonprofit organisations developing AI use-cases [50-55]. He outlined the agenda – the genesis of the programme, its structure, the NGOs’ learnings, and actionable take-aways for the wider sector [56-59].
Rationale for a cohort model
Erica Arya explained that the cohort approach was chosen because it enables hands-on mentorship, peer learning and deep-rooted feedback from a small, focused group of NGOs [68-74]. She highlighted that the programme built on the earlier Data Catalyst cohort, which showed that intensive work with a limited set of organisations yields valuable insights that can be fed back into the platform [70-73]. The design also deliberately lowered barriers such as the lack of engineering resources by providing mentors and ensuring leadership buy-in within each NGO [84-93].
Tamina Madon reinforced this logic from a funder’s perspective. Drawing on the Silicon-Valley accelerator philosophy, she described how the Agency Fund creates a shared pool of ten technical staff and product-management experts that can be drawn on by participating NGOs, a model far more efficient than each organisation hiring full-time AI engineers [106-138]. She further noted that product-management talent is scarce in the social sector, making the shared-resource approach especially valuable [135-138].
Programme design details
* Open-application & screening process – NGOs submitted applications, then participated in calls to verify fit and to confirm a requirement for a dedicated internal resource [84-93].
* Mentorship model – each nonprofit was paired with one or two mentors drawn from Project Tech for Dev and its wider network [84-93].
* In-person workshops – the cohort included workshops where each NGO presented its progress in person, fostering direct peer exchange [84-93].
AI use-cases developed by the NGOs
1. Simple Teacher Buddy (Simple Education Foundation) – Mainak Roy described teachers’ need for context-specific pedagogical advice and how the team built a WhatsApp-based chatbot that delivers evidence-based recommendations [151-162][164-168]. A practical glitch arose when teachers skipped the expected “hi” greeting and typed their problems directly, which broke the data-capture flow; the team responded with a conversation guardrail that treats any first message as a greeting [241-246]. Mainak also noted the use of a “golden dataset” to validate AI outputs [418-419].
2. Quest Alliance’s personalised learning platform – Steven Suting outlined the challenge of heterogeneous learners and teacher shortages, prompting the development of an AI system that emulates human cues (tone, confidence, etc.) to personalise learning pathways and nudge behaviour [165-180][181-186][253-266]. He acknowledged that the project is still at an ambitious, “pie-in-the-sky” stage and emphasized that the problem quickly became a behaviour-science challenge rather than a purely software one, requiring behavioural expertise alongside engineering [258-266].
3. Avanti Fellows’ AI-generated mentorship script – Preetam Sukumar recounted an initial attempt to improve student test reports, which pivoted to an AI-summarised script guiding teacher-student conversations and providing low-touch mentorship for online learners [185-200]. The pilot involved 15 teachers conducting 57-75 scripted conversations, but the team faced LLM hallucinations (e.g., incorrect performance trends) that required intensive prompt-engineering [207-214][225-227]. Preetam also highlighted that AI can support grant-writing and code generation for internal nonprofit work [185-200].
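To make the hallucination issue concrete: a common mitigation, sketched below, is to compute the performance trend deterministically from the test data, pass it to the model as ground truth, and reject any draft script that contradicts it. This is a hypothetical illustration only; the field names, model, and prompt are invented and do not describe Avanti Fellows’ actual pipeline, which the panel did not detail, and the snippet assumes the openai Python client is installed and configured.
```python
# Hypothetical sketch: generate a mentor script from structured test data and
# reject drafts whose trend wording contradicts the numbers (the failure mode
# described on the panel). Field names, model, and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def trend_label(previous: float, current: float) -> str:
    """Describe the score trend in plain words, computed outside the LLM."""
    if current > previous:
        return "increased"
    if current < previous:
        return "decreased"
    return "stayed the same"


def build_script(student: dict) -> str:
    trend = trend_label(student["previous_score"], student["latest_score"])
    prompt = (
        f"Write a short, encouraging mentorship script for {student['name']}. "
        f"Their score went from {student['previous_score']} to "
        f"{student['latest_score']}, so it {trend}. Weak chapters: "
        f"{', '.join(student['weak_chapters'])}. Suggest two concrete next steps."
    )
    draft = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    # Guard against the "increased"/"decreased" flip: the draft must not assert
    # the opposite of the trend computed from the actual numbers.
    opposite = {"increased": "decreased", "decreased": "increased"}.get(trend)
    if opposite and opposite in draft.lower():
        raise ValueError("Draft contradicts the computed trend; regenerate or send for review.")
    return draft
```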
Technical and ethical challenges
Across the projects, participants reported LLM hallucinations that undermined trust [209-214][225-227] and bot interaction glitches, such as teachers skipping the expected “hi” trigger [241-245]. Steven noted the difficulty of translating human behavioural signals into AI responses, expanding the solution space beyond software engineering into psychology [258-266].
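The “hi” workaround lends itself to a simple pattern: treat whatever a new user sends first as the greeting, so the structured data-capture flow always starts. The sketch below is purely illustrative; the handler, session states, and onboarding questions are assumptions rather than the actual Simple Teacher Buddy implementation, which runs on a hosted chatbot platform over WhatsApp.
```python
# Hypothetical sketch of the greeting fallback: any first message from a new
# number starts the guided flow, even if the teacher skips "hi". States and
# questions are invented for illustration.
EXPECTED_GREETINGS = {"hi", "hello", "hey", "namaste"}

sessions: dict[str, dict] = {}  # keyed by phone number; a real bot would persist this


def handle_incoming_message(phone: str, text: str) -> str:
    """Return the bot's reply for one incoming WhatsApp message."""
    session = sessions.get(phone)

    if session is None:
        # First contact: begin onboarding regardless of what was typed.
        sessions[phone] = {"state": "ask_grade", "first_message": text}
        if text.strip().lower() not in EXPECTED_GREETINGS:
            # The teacher jumped straight to their problem; acknowledge it but
            # still collect the context needed for a specific recommendation.
            return ("Hi! Thanks for your question, I'll come back to it. "
                    "First, which grade are you teaching today?")
        return "Hi! Which grade are you teaching today?"

    if session["state"] == "ask_grade":
        session.update(grade=text.strip(), state="ask_topic")
        return "Got it. What topic or learning objective is today's lesson about?"

    if session["state"] == "ask_topic":
        session.update(topic=text.strip(), state="recommend")
        return (f"Thanks! Here is an evidence-based strategy for grade "
                f"{session['grade']} on {session['topic']}...")

    return "Type 'restart' to plan another lesson."
```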
To address these risks, the cohort engaged two knowledge partners: Digital Future Labs, which supplied expertise on responsible-AI design, and Statle, which focused on AI safety [300-304]. Erica stressed that these safeguards were baked in from day one, rather than added later [308-312]. The panel also referenced the use of slur-lists and guard-rail plugins that filter inappropriate outputs, a practice demonstrated in the chatbot design [324-334].
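The slur-list and guardrail idea reduces to a pre- and post-filter around the model call: screen the incoming message and the drafted reply against a blocklist and a scope check before anything is sent. The sketch below is generic and hypothetical; the word list, scope keywords, and refusal text stand in for whatever the knowledge partners actually supplied.
```python
# Hypothetical sketch of input/output guardrails around a chatbot reply.
# The blocklist, scope hints, and refusal text are placeholders.
import re

SLUR_LIST = {"exampleslur1", "exampleslur2"}  # stand-in for a crowdsourced blocklist
IN_SCOPE_HINTS = {"class", "lesson", "student", "teach", "topic", "activity"}

REFUSAL = ("I can only help with classroom and teaching questions. "
           "Could you rephrase your question about your lesson?")


def contains_blocked_term(text: str) -> bool:
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & SLUR_LIST)


def guarded_reply(user_message: str, generate_reply) -> str:
    """Wrap an LLM call (passed in as generate_reply) with simple guardrails."""
    if contains_blocked_term(user_message):
        return REFUSAL
    if not any(hint in user_message.lower() for hint in IN_SCOPE_HINTS):
        # Off-topic question (e.g. "what do I do with my life?"): decline politely.
        return REFUSAL
    draft = generate_reply(user_message)
    if contains_blocked_term(draft):
        return REFUSAL  # never forward a draft containing blocked terms
    return draft
```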
Erica further described concrete open-source integrations: the team used Superset for dashboards and integrated the Glyphic chatbot with OpenAI, illustrating a pragmatic reuse of existing tools [449-454].
Collaboration and cross-NGO learning
The cohort’s collaborative ethos produced concrete synergies. Erica recounted how two health NGOs, initially working in isolation on a high-risk pregnancy predictor, discovered overlapping goals and chose to collaborate [368-374]. Separately, a cohort NGO building an assessment model was connected with two nonprofits outside the cohort working on similar solutions, illustrating how the programme facilitated ecosystem-wide learning beyond the immediate participants [375-381].
Evaluation and impact measurement
Tamina introduced a four-level evaluation framework (user evaluation, monitoring, impact assessment, and learning), noting that the final impact-evaluation stage is almost never seen in the private sector yet is essential for social-impact work [270-277]. This framework links directly to the cohort’s emphasis on continuous impact measurement. Erica described the programme’s operational checkpoints – open applications, screening, mentorship, and leadership commitment – which together create a feedback loop for ongoing learning [84-93]. The NGOs themselves used a golden dataset to validate AI outputs, a practice Mainak said his team is now adopting [418-419].
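One way to operationalise the golden-dataset idea is as a small regression harness: a curated file of real questions with vetted reference answers, re-scored whenever the prompt or model changes. Everything below is illustrative rather than a description of what any cohort NGO built; the file format, judge prompt, and model are assumptions, and it again presumes the openai Python client.
```python
# Hypothetical sketch of a golden-set check: score the bot's answers against
# vetted references with an LLM judge. File format and threshold are assumed.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def judge(question: str, reference: str, answer: str) -> bool:
    """Ask a model whether the answer agrees with the vetted reference."""
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": (
            f"Question: {question}\nReference answer: {reference}\n"
            f"Candidate answer: {answer}\n"
            "Does the candidate agree with the reference on all facts? "
            "Reply with exactly YES or NO.")}],
    ).choices[0].message.content.strip().upper()
    return verdict.startswith("YES")


def evaluate(golden_path: str, generate_answer) -> float:
    """Run the bot (generate_answer) over the golden set; return the pass rate."""
    with open(golden_path) as f:
        golden = [json.loads(line) for line in f]  # one question/reference pair per line
    passed = sum(
        judge(item["question"], item["reference"], generate_answer(item["question"]))
        for item in golden
    )
    rate = passed / len(golden)
    print(f"golden-set pass rate: {rate:.0%} ({passed}/{len(golden)})")
    return rate
```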
Practical recommendations for NGOs
Panelists converged on several actionable insights:
* Start from real pain points rather than chasing AI for its own sake – map existing inefficiencies before considering AI [400-402][405-408].
* Leverage open-source tools and existing platforms that meet most requirements before building custom solutions – use open-source platforms that satisfy 60-70% of needs, then customise the gaps [426-433]; integrate existing tools such as Superset dashboards and the Glyphic chatbot’s OpenAI integration [449-454].
* Embed responsible-AI safeguards early through partnerships or built-in guardrails – collaborations with Digital Future Labs and Statle [300-307]; in-house slur-lists and guard-rail plugins [324-334].
* Adopt systematic evaluation (user, monitoring, impact) and maintain a golden dataset to ensure output quality [270-277][418-419].
Differing perspectives
The panel offered differing perspectives on capacity provisioning: Tamina advocated a shared pool of technical staff [106-138], whereas Mainak and Preetam argued that scaling to thousands of users will require additional dedicated AI engineers and a curated golden dataset [418-419][225-227]. A second divergence concerned evaluation emphasis: Tamina’s detailed four-level framework [270-277] contrasted with the programme’s operational focus on mentorship and leadership buy-in, which has no explicit evaluation stage [84-93]. Finally, approaches to responsible AI varied in emphasis: Erica pointed to external knowledge-partner collaborations, while the implementing teams leaned on in-house guard-rails such as slur-lists and conversation filters [300-304][324-334].
Key take-aways
* Cohort-based AI programmes enable hands-on mentorship, peer learning and pooled technical resources, making AI adoption feasible for NGOs with limited engineering capacity.
* Embedding responsible-AI and safety from the start (via partners and guard-rails) helps avoid later ethical and quality issues.
* Collaboration among NGOs uncovers overlapping use-cases, reducing duplication and fostering joint development (e.g., health-risk prediction, assessment models).
* Practical solutions built during the cohort included a WhatsApp chatbot for teacher guidance, a behaviour-nudging learning platform, and an AI-generated mentorship script.
* Common challenges were bot interaction glitches, LLM hallucinations, and the difficulty of capturing human behavioural signals.
* NGOs are advised to identify pain points first, reuse open-source tools, and apply systematic evaluation frameworks (including a “golden dataset”) to ensure trust and scalability.
Thought-provoking remarks
* Erica’s observation that “cohort programmes let us learn deeply from a small set of NGOs, feeding those learnings back into our platforms” set the strategic tone [64-71].
* Tamina’s comparison of the cohort to Silicon-Valley accelerators, emphasizing the loneliness of founders and the value of a shared resource pool, highlighted a novel funding model for the social sector [110-118][123-129].
* Mainak’s decision to use WhatsApp, a channel already embedded in teachers’ workflows, demonstrated pragmatic technology adoption [155-163].
* Steven’s comment that “when we try to emulate a human teacher, the problem becomes a behaviour-science challenge rather than just a software problem” broadened the conversation to interdisciplinary design [258-266].
* Tamina’s four-level evaluation framework underscored the importance of impact measurement beyond typical private-sector metrics [270-277].
* Erica’s note on partnering with Digital Future Labs and Statle to embed responsible AI from day one illustrated proactive ethical design [300-307].
* Preetam’s reminder to “stop looking at AI use-cases first; start with pain points” refocused the panel on needs-driven innovation [400-402].
* Erica’s final advice to “don’t reinvent the wheel; start with open-source platforms that meet 60-70% of your needs” offered a concrete, cost-effective strategy [426-433].
Follow-up questions raised
At the close, Manohar posed several follow-up questions to the panel, including inquiries about evaluation of Gemini or Claude models, the on-the-ground challenges of the WhatsApp chatbot, the scale and rollout status of the AI solutions, and the use of guard-rail plugins for AI safety [440-442][447-454].
Closing remarks
Manohar thanked the participants, highlighted the rarity of early-stage collaboration in the for-profit world, and quipped that a digital assistant might well be moderating such sessions next year, reinforcing the summit’s AI theme [455-456]. He invited the audience to consult the published documentation and blogs from Project Tech for Dev and the participating NGOs for deeper insights [455-456].
Not sure why exactly you came here, but I’m glad you’re here. We’ll make it worthwhile. So my name is Manohar, and I’ll be moderating this session. Usually what happens is in the beginning of such a panel discussion, you know, we tend to introduce the panelists. And since we are at an AI summit, so I thought, and we are all in the habit of delegating things to AI these days, I thought, why not let the AI itself introduce some of the panelists, right? We don’t know what they’re going to say. Hello, Elizabeth. How are you doing today?
Hi, Manohar. I’m doing great and ready to help. What’s on your mind today?
You couldn’t hear that, right? One second. Can you speak a little louder, Elizabeth?
Of course, I can definitely speak up for you. Just let me know what you need, and I’ll match your pace.
All right. No, right? Can’t hear. Let’s try again. And I can hear you. Usually this should work.
Yes, absolutely. This usually works just fine. If there’s anything specific you want to try again or clarify, just let me know.
Okay, Elizabeth, I am sitting in front of a panel right now and I need your help to introduce the panel members. Can you help me do that?
Absolutely. I’ve got their profiles right here. Just let me know how you want to frame it and I can help you introduce them smoothly.
Okay, let’s start with Tamina.
So, introducing Tamina Madon. She’s the co-founder of the Agency Fund. Tamina works right where technology, philanthropy and economic mobility meet. She invests in services that empower human agency and create social impact at scale.
All right, thank you. Next up is Erica.
Next we have Erica Arya. She’s the CEO of Project Tech for Dev, which collaborates across the Global South to help non-profits use tech and data at scale. Under her leadership they’ve supported over 200 NGOs with open source platforms and advisory.
Very nice, thank you. Next we have Mainak.
Now we have Mainak Roy. He’s the co-founder and CEO of Simple Education Foundation. Mainak partners with state governments across India to enhance teaching and learning at scale. He’s also exploring how AI can responsibly support teachers and public education.
Thank you. Next up is Steven.
Up next is Steven Suting. He’s the Director of Technology and Product at Quest Alliance. Steven leads AI-powered digital learning platforms that reach youth across India, using data-driven innovation to improve learning outcomes.
Thank you. And then we have Preetam Sukumar.
And he is Preetam Sukumar. He’s the Technology and Research Leader of Avanti Fellows. Preetam drives tech and research efforts to use data and digital platforms to uplift public school students’ life outcomes across India.
Excellent. And now finally we have this chap moderating this session. His name is Manohar. Can you tell me something about him?
And finally our moderator, Manohar Sreekanth. He’s partner and CTO at Satva Consulting. Manohar works at the intersection of tech, data, and AI, creating solutions that help social sector organizations scale their impact.
Excellent, Elizabeth. I think you did a stellar job. Thank you so much. You wanted to do the panel for us. Next year, I’m sure, we’ll have some digital assistant sitting here. But anyway, I hope that was interesting and it did a good job as well. So the main theme for this discussion is the AI cohort program that was run from September to December. It was about a four-month program, which was actually anchored by Project Tech for Dev. And about seven nonprofits built their AI use cases in the course of these four months. So we’re going to talk about the genesis of the program, why it was structured that way, and also some of the learnings and experiences.
And finally, depending on, I’m not sure how many NGOs are there, but if you want some takeaways on how a nonprofit can also take away approaches towards AI, that’s also something which we’ll cover. I’m going to begin with Erica, because Project Tech for Dev was the entity which came up with the idea for such a program. Erica, can you give us a brief overview of, maybe, start a little bit about Tech4Dev itself, right? And then why did you think about such a program in a cohort model, right? And what are the genesis, what are the motivation for this? And something about the structure also.
Thanks, Manohar. So, good morning everyone. Hello. So, good morning everyone. The AI did a bit of introduction for me. And I’ll just add a little human element to it. So, yes, we at Project Tech4Dev, we develop open source tech platforms for the social sector. That’s one part of our work. And in addition, we do a lot of tech advisory for the non-profits. And as we had been working with, you know, more than 200 non-profits in India mostly, what we realized was that yes, we have these open source platforms which nonprofits come, they can use it as a SaaS platform, and we also provide consulting. And through our tech advisory, what we learned was that there are times when, you know, you really need to handhold nonprofits in taking that first step towards technology or those who have already taken that step to help them move to that next level.
And we have in past, you know, run a data catalyst program also, which was in collaboration with Dasra, and I’m just assuming you all would know this name, in the social sector. And they’ve been running these cohort -based programs, and we felt that, you know, when we do these deep -rooted programs with a small set of NGOs, we’re able to learn a lot from them. And that actually… That actually feeds into, you know, the platforms that we built. Because the way we… our platforms is not like a top -down, but it is always listening from the ground what the non -profits need, and I feel these cohort -based programs, at least for Tech4Dev, enables us to get those learnings.
In addition to that, you know, even this small cohort that comes together, there is a lot of peer learning and sharing that happens. It helps us see the commonalities across the use cases the NGOs are working on, and it helps them to further collaborate on them. So why do you rebuild things? Or if we are building new things, can we make it for the sector? Can we reuse the intellectual property that we have, you know, working with one or two other partners in the cohort? How can we leverage that and make it available more for the sector? So learning from our Data Catalyst program, which was a cohort-based program, and a program that the Agency Fund was also running on AI,
and it was called AI for Global Development, we felt that maybe while agency fund program was working more with the nonprofits who were much ahead in their journey in technology and in using AI, we felt that can we come down a level and work with those nonprofits who were wanting to take that step of integrating AI into their work. So they had a good use case, but maybe funding was a challenge, or maybe it was the resources were a challenge, because in the sector, especially in the NGO space, we know that having engineering resources is a very big challenge, and many nonprofits do not have that. So is that a barrier for them to take that step towards adopting AI?
So we wanted to bring down those barriers by introducing mentors into this program. So just… I’ll quickly tell you about the structure of the program. It was like we got… seven NGOs together. It was an open application. Screening was done. We had calls with the NGOs to really understand that their use case was really fitting the program. It was a free program for the NGOs because as Tech4Dev, what we believe, it was a pilot for us. So when we are running a pilot, we are learning together in it. And if we have the funds to encourage these NGOs to come together, then we should play that role. So there was a proper evaluation of the use case, also ensuring that the nonprofit has a resource who can devote time to running this for the period of the four months.
So who could anchor it within the organization. There was leadership buy -in into onboarding onto the program because for technology to get integrated, it is very important that the leadership buy -in is there because there is investment that Tech4Dev has. So I think that’s what we’re trying to do. the reason why we felt that can we, you know, just unlock these couple of barriers that these non -profits have and today we have three of such non -profits on this panel who are going to share with you, you know, what does this cohort enable for them. So I think, yeah.
Right. Thank you, Erica. I mean, you talked a lot about the goals and the intent. We will hear from the non -profits how much of that intent also was realized, I think. And you can be candid, I’m sure, all of you. Right. We’ll move on to Tamina now. Tamina from the agency fund. Do give a brief overview of agency fund itself. But also talk to us about the motivation of funding such a cohort model because, you know, typically funding you can give to specific non -profits themselves for the AI use cases. But this is a different model. What is the motivation for that? Maybe we
Sure. Thank you, everyone. I live in Silicon Valley and I started life as an engineer. I’ve never pursued that career. Over the last 20 years. One thing I saw in Silicon Valley is that a lot of new ventures are started in cohorts. Y Combinator is a quite famous incubator for profitable companies. I was working before we started the agency fund in South Park Commons, which is a venture capital fund and tech community. There’s one in Bangalore, one in San Francisco, one in New York City. And the way SPC operates is to bring founders together at the earliest stages in their journeys as they’re trying to figure out what to build and how to build it and who the user will be.
And that’s a very lonely journey. It is filled with frustrations and failures, and it helps to have a cohort of colleagues in the same stage of life or in the same stage of product development. And so we brought that philosophy into the Agency Fund when we started, that we bring together cohorts for accelerators. And we have participated in a lot of accelerators, including the Data Catalyst program that Tech4Dev has, which we have done. We run the AI for Global Development Accelerator as well. The AI one is a year long for us and we bring established companies, non-profit companies together, but they’re all building AI for the first time. So it’s that same experience of needing to understand your user, figuring out what to build and going through the practice of putting together pipelines, experimenting, experimenting, iterating, refining.
It’s a journey and it’s difficult. And so I do think that the cohort support helps a lot. The other thing is as a funder, our perspective has been that while we don’t know everything, there are certain resources that are hard for non-profits to pay for themselves. And if we can create those resources in a pool format and make them available, we can perhaps be more efficient. For example, not every NGO is ready to hire two full-time AI engineers. Maybe they only need a part-time person right now as they’re building out the pipeline. So we’ve put together a pool of 10 technical staff who sit alongside the organizations that we fund in the accelerator.
We include product management in that as well. We’ve just added some additional people in product management because the AI technology is actually easy, especially there are a lot of talented engineers in India and elsewhere. What is difficult is to fit the technology to the pain points we all experience in life and build a product that achieves social impact. That’s really difficult. It’s so much further from what the typical Silicon Valley startup has to deal with. They just need to make money from people who already have money, right? We are trying to do something much more difficult with a population that’s much more excluded. And so, you know, there aren’t a lot of product managers in nonprofits.
If you’re in a nonprofit, raise your hand if you have a product manager. Oh, I’m surprised. Okay, there’s a decent number. But it’s not a role that commonly has been found in nonprofits. This is probably a… a special group. Some of them are from the organizations in the panel.
Okay, fair enough. Not counted. Anyway, thank you, Tamina. Alright, so that set the stage. So now you got a sense of why the organization actually, both these organizations thought of a cohort model, why they set it up and so on. Now let’s get into the trenches in terms of the experience of the non -profits themselves, right? So it would be good to start with the problem space, right? So what problems were you trying to solve in the context of this particular program? And at a high level, how did AI come into the picture and overview? We’ll get to the experiences and learnings a little later, but just to set the context of what problems because again, we should remember that AI is a means to an end, right?
So understanding the problem that you’re trying to solve and was it scale, was it inclusion, was it reach, right, in the context of your program would be good to understand. Let’s start with you, Mainak Roy.
Thanks. I’m audible right? Yeah. Okay. So I’ll also add a bit to the intro that the AI did. So I’m also a teacher. And that’s how I started my journey into this space. And one of the things that as a teacher, I’d always struggled with is figuring out what is the right classroom pedagogical strategy that I need to use for a certain sort of class. And that often changed with the topics or the objectives that I was covering on the day. Right. And for a teacher, it’s often difficult when I’m doing eight lessons a day to sharpen my sort of focus and ensure that I get the perfect strategy. And that really was our sort of problem statement in many ways that can we build a platform or a tool which enables teachers to actually say that this is the class I’m teaching today.
This is the kind of classroom that I need. This is the kind of classroom that I have. And then can the AI throw out something which is evidence based and has worked in other similar classes before, and support the teacher? So that’s what we wanted to do. And in India, I think if I asked how many of you use WhatsApp, probably all of you will put your hands up, and that’s the case for teachers in some of the most remote areas of the country as well. So we chose WhatsApp as a platform and we used a chatbot kind of a mechanism to solve that problem for our teachers.
Okay, thank you, Mainak, for that intro. So that’s Simple Education Foundation, so they built this Simple Teacher Buddy. We’ll come back to you a little later. Steven, Quest Alliance, so what was the problem you were trying to solve, and a high-level overview of the solution?
Thank you. It’s happening, no? I’m not sure if I can... yeah, it’s happening. So we work with young people from grade 8 to class 12, and then younger people who study in the TVET ecosystem, who are between about 16 to 25, right? And what we realized in the work that we did, and this wasn’t necessarily revolutionary, was that no two learners are the same, right? And they’re different in the way they think, in the way they learn, in the confidence levels that they have. And that’s one dimension of difference. And when you think about the demographic difference, their opportunity of access, the gender inequalities, and if you put all of this together, you have a system that’s a classroom that’s fairly heterogeneous.
These are not homogeneous systems, right? But unfortunately, the educational structure around this is a chalk and talk mechanism. It largely treats all the students in the same way, right? And as a result of that, you’re not really meeting the learner where they are. And so that’s one dimension. The other side of this problem is also that there’s a lot of stress with the availability of teachers. Because in general, there’s about a 1:30 ratio between student and teacher. And that’s in a good scenario, right? So what tends to happen is these students don’t necessarily have access to the information that they need. You’re not really meeting the learners at where they are. And so what we try to do or what we’re trying to do is really try and place a system that is able to hear the learners’ problems with respect to the area of career development.
Okay, so you’re directly targeting the learners, right? Whereas Mainak’s use case was targeting the teachers, right? So we have yet another education use case from Avanti Fellows. Preetam, tell us about it.
So hi, I’m Preetam from Avanti Fellows. And what Tamina said and what Steven said kind of resonated because we struggled a lot to find a good use case for AI because we have a lot of pain points. Like our teachers have pain points, our students have pain points, our program managers have pain points, and they’re all spread across the country in remote locations. So we started with a use case to improve student report. So that because a lot of, I mean, we reach about 200,000 students and 98% of those are online learners. And they don’t get useful feedback from the test report. They just get score and this and that. But whereas in our physical school, there is a teacher talking to them.
There is a mentor talking to them. So we thought we’ll improve the student report first, but that didn’t work out so well. So we, our mentor actually helped us a lot, Akilesh from Tech4Dev. And we iterated through a few use cases till we landed up on actually replacing the teacher -student mentor conversation with an AI summary that can be useful for our low -touch, I mean, our online students. So it kind of collects all the data of the student’s performance in the tests about how they’re doing, what, what, how their attendance is, what chapters they’re weak in, what chapters they’re strong in. And generates kind of a script. And for the teachers, the script is used to guide their conversation.
And for the online learners, the script can be used to like just give them a proper mentors, sort of mentorship guidance so that they have actionable things to do for their next test or whatever they’re doing.
Right, now let’s dig a little deeper. So now where are we on this journey, Pritam? And you know, how was the solution deployed? How many people are using it? Is it being in the pilot phase? Can you tell us more?
It’s, I think it’s somewhere between the pilot and the rollout. So we, around 15 teachers I think have had 57 or 75, 57 to 75 conversations with students with these scripts. And we are working on feedback because there are hallucinations as LLMs do. But the hallucinations in this case are a little weird. The numbers are correct but instead of saying increased it says decreased. So we are like, there is a lot of prompt engineering that we are doing to get this sort of fixed. And in the low touch, in the online program it is being, once the rollout is kind of ready and yet we are just waiting to fix this, these small issues. Okay. Before rolling out.
Okay, and what’s been the response from the teachers themselves who are getting these reports?
The teachers are very happy. I mean, it saves them a lot of time. Like in one of the main feedbacks, one of the main points of feedback that we got from the teacher is that it saves them really a lot of time because before such a conversation, they would have to look at the last four reports of the student. They would ask, I don’t know, what is happening at home? Like, are you getting device access and all this? Now all this, the students fill a form before the mentorship conversation and the teacher, we have the test data and we give this all in a script to the teacher. So that has been the positive side.
On the negative side, the hallucinations really bother the teachers because there isn’t 100% trust yet in what we’ve given them. And that we are still working through. I mean, the solution is not clear because to me, maybe we need more AI engineers like Tamina said. But because it’s not clear exactly what to do with these hallucinations, because to me and us, the LLMs are like a big black box. You just put something in, you get something out, and you don’t know what change will bring the change that you want. Yeah, that’s where we are now.
Right. Yeah, I guess the error rate, the hit ratio and what kind of an impact it has depends on the use case. And if the teachers can be trained to address some of this, it might be one way out. Mainak, you talked about WhatsApp being the channel, right? So that’s probably something which, you know, eases adoption because they don’t have to learn something new. It’s there. It’s part of the conversation. So now they’re getting some additional feedback and advice. But at the same time, and by the way, I know this because I was mentoring some of the folks here. So by the way, this whole program had a mentorship model where each of these nonprofits are hooked up with one or two mentors and those mentors guided the teams.
Right. So I happen to be one of the mentors for Simple Education. And we did face some interesting challenges also, even if it was simplified from the WhatsApp point of view. So what were some of those challenges on the ground? Right. Because when the rubber meets the road, that’s when you see what I mean. You would have imagined something, every product person imagines and starts with a hypothesis, but then it needs to be validated, right? So what were some of those things that we learned?
Yeah, I think there are a bunch of challenges, but you know, my favorite challenge is actually, we wanted teachers to start the conversation with the bot with a simple hi. But the moment teachers would get the QR code and scan it, they would start putting their problems in, right? And then the bot would glitch, because the bot did not get that first hi, and then the whole process of the data that we needed to collect to give the teacher very specific guidance was not happening. So we ended up saying that whatever is the first conversation that the teacher does, the bot sort of recognizes it as a hi and then starts the conversation. So although we’ve kept telling the teachers to start with a hi, it does not happen, so we’ve sort of worked around it. But yeah, that’s been one of the challenges. And then I think, I think we’ve seen a bit of also teachers figuring out what kind of questions they can ask.
So sometimes there’s been some straying away from the kind of questions we want the teachers to ask. And because it’s AI, they would still throw out some response. Right. So over there also, we’ve had to put some guardrails around what kind of conversations the chatbot will entertain. What are the conversations that we would completely sort of say no to. So, yeah. Right.
Thank you, Steven. In your case, I think you had this slightly more ambitious plan for behavior change. Right. You’re looking at nudging these learners towards a different kind of behavior. And you had a certain approach to that. Can you tell us a little bit about it? And how successful has it been?
I think we’re still largely pie in the sky thinking right now. So we still hope to be ambitious through the entire course of the project and not be deterred by, how many users and, you know, do we have, how many sessions do we have? We’re trying to be steadfast with that. I think for us, the fundamental premise was that we didn’t want to create a system that was really an information dissemination system, right, because there’s enough and more of that. We wanted to create something that emulated a human being or emulated a teacher. And the intent over there was really to be able to pick up signals. If you imagine a conversation between a teacher and a student, there are multiple signals that a human being is able to pick up when they’re having a conversation, right?
Sometimes it can just be the tonality. It can be the speed at which they’re speaking. You can gauge confidence levels, right? But when you’re working with an interface like a bot, for example, you have limited access to the kind of signals that you can pick up. So then how do you decrypt conversation to be able to pick up these different signals? And then how do you use those signals to then create a response that intends to or is intended to emulate a human being? So we’re going to create a system that emulates a human being, right? Um, and interestingly, when we started thinking like that, it stopped becoming a software problem, it started becoming a behavior science problem, right? And so for us that was really the elevation of that situation. And so right now we’ve got a plethora of, like, people who are specialists in that space. We’ve got, you know, the engineering and software folks to help us with the scaffolding, but the heart of the problem is really in understanding that human being, right?
No, it’s taking it really to the next level. That’s so nice. Um, so Tamina, you’ve now heard some of these challenges, experiences and so on, and I think you’ve worked with previous, um, cohorts as well. I want to get a sense of also how these Indian non-profits you’ve been working with and engaging with, how does this differ from the others probably you’ve engaged with, you know, in the U.S. and abroad, in terms of adoption, in terms of maturity, in terms of the kind of use cases they’re looking at? Can you just give us a lay of the land comparing the two?
then going to learn it. It’s about my belief, my confidence in myself as a new mom to take care of my baby and know what caregiving I need to provide. It’s about my role as a farmer, feeling that I’m going to be able to experiment with a new input and feeling more confident or thinking maybe I’ll try livestock because now I have better information and support. So we call that the user evaluation. That requires survey data collection. Most companies don’t do many surveys. Maybe they buy some survey data from a firm, but all of these orgs, once they’re deploying to enough people, will have monitoring and evaluation systems that really try to understand the user.
And then the fourth motor we switch on, we call impact evaluation, and that’s when you have tens of thousands, hundreds of thousands of users, and you want to understand whether the product that they’re using is helping them make a change in their health outcomes, in their education or livelihood outcomes. So that’s what we call impact evaluation. And so we almost never see that fourth stage of evaluation in the private sector. That’s something we do in the social sector. But we think that it’s a nice way, this kind of four -level framework, to think about how to continuously learn from the product you’re deploying and then prove its impact to society and hopefully bring in more funding to scale.
Right. No, thank you, Tamina. In fact, for those of you who are interested in that framework, it’s a very powerful, interesting framework. Search for the Agency Fund evaluation framework, and I’m sure you’ll find that. It’s a very useful resource. Right. So, in fact, this is one such topic. Evaluation is one such orthogonal topic. But there were a couple of others as well. So, Erica, also from a program structure point of view, right, you people had some knowledge partners come in, right? So it was not just these nonprofits hooked up with mentors, but you also had some others. Can you tell us a little bit more about those knowledge partners, their roles, and so on, and then we’ll get into how they impact
the work they were doing.
Thanks, Manohar. I think this is a very critical part of the entire program design. So Tech4Dev, while we are like a bunch of engineers, CTOs in the organization, but again, we know that the problem that we are trying to solve is not something that we can do alone. And what we try to do is, in whatever we do, we don’t want to reinvent the wheel. And that’s where and we know we don’t know it all. So we always work in our programs through collaborations and partnerships with people who have expertise in those areas so that we can get the best to the cohort. And so as part of this AI cohort too, we wanted, what was the overall goal of the program?
It was that the non-profits could unlock those barriers which were keeping them away from building and deploying AI solutions to help them bring in more efficiency, help them scale and create more impact on the ground. But we wanted, as a technologist, we could say that, okay, we’re just building something, but are we building it in the right way? That’s what we wanted to do. And so we collaborated with two knowledge partners. One was Digital Future Labs, who brought in expertise around building, you know, on integrating responsible AI principles into your design as you are working on a use case. And the second knowledge partner was Statle, which works in AI safety. So we actually wanted that when the nonprofits are working as part of this cohort, they have that thinking of responsible AI and AI safety built into their design right from the get-go.
It’s not something that you need to think about later: okay, let me just build something, and then I’ll come to, you know, responsible AI practices or think about AI safety, you know, after I’ve done my pilot or, you know, after a couple of months when actually I see things breaking. So we wanted to build that right from the get-go. And that’s why we collaborated with partners, because we knew we did not know this bit. We definitely contributed through, you know, what Manohar said. This was a mentorship program. So a lot of mentors came in from Project Tech for Dev, because we knew that is one piece that we feel we do good at, and collaborated with these partners.
And I hope that this helps the nonprofits in designing better solutions.
Right. No, that is a very unique part of this program, actually, because also anyone who’s in software would know that quality and all these things we’ll look at later. Let’s build and get the functionality out. That’s a typical attitude. And in the context of AI, AI safety and responsible AI seem to be exactly the kind of thing that ends up as an afterthought, but here it was baked into it. So now let’s hear from some of the non-profits: how did this influence the way you approached, you know, building this out, and how did AI safety or responsible AI kind of thinking influence, and how did it actually help on the ground itself? Steven, would you like to say something?
Sure. I think for us, it was supremely helpful, just primarily because, you know, the artifacts that these organizations helped put together in some ways were like a blueprint for us, right? And when you have a blueprint in front of you, you have a sense of what the different moving parts are. And what that allowed for us to do is to rapid prototype in a fairly modular way, which allowed us to fail fast, right? And if you were failing fast, we were learning faster, right? So yeah. And I think what can typically happen in this space of human development, because the dimensionality of problems can be fairly large, is this phenomenon of analysis paralysis; it can be overwhelming to decide what you pick and choose, because they all seem like really important problems. But having frameworks like what Tamina was talking about helps you sort of chunk things into where is this problem best suited and where is it most relevant, and then you solve for that in that section, and that helps influence what you’re going to do in the next step, and then you kind of shuttle back and forth till you’ve crossed a certain critical threshold of it making sense to the problem that you’re solving.
One of the things I remember from the AI safety part was what Statle came up with, these plugins where they have created a slur list, a list of words which the AI is not supposed to use, and that’s something which they have crowdsourced and so on. And there are also some guardrail plugins from Llama and so on. And the fact that those things can be plugged into your bot to make sure that it’s behaving the right way was, I thought, an interesting thing. Did any one of you use any of those things? Not yet? Right. For Simple, yes. I mean, I remember the guardrail. So the bot which Mainak was talking about.
So there you could ask any question. And then the whole thinking process behind making sure that even if a random question comes. So let’s say a teacher asks, what do I do? It’s not a question the bot is supposed to answer. But how does it respond to that? It’s something we started thinking about and baked into the whole design itself. And that was used for testing. Okay. Now, one of the unique features of this program was also the whole collaboration piece. Because. they are not working in isolation. So we had in -person workshops where each of them actually came and presented, and they had sessions where, you see, all three of them are actually working on education use cases, and there are some others as well.
So how could they learn from each other? That’s one. The other thing is, and I think I’m sure you’ll recognize, you probably are also looking at similar problems in this space. How do you not reinvent the wheel? How can you leverage some of the work that’s been done by others? So this collaboration aspect also came in in a very strong way. So I want to understand from some of you, so whom else did you speak to? How did this play out as a part of the program? How did this program structure influence the way you went about it? And how was it in comparison to, let’s say, you were going about it by yourself? What was the difference?
Anyone? Anyone?
I think. One thing that was nice about the program design was that. by nature of the conversations that we were having, there was this sense of porousness that we had, that we weren’t holding our cards too close to our chest. It was okay for me to learn from what you were doing well and what you weren’t doing well. And if I had that conversation with you, it would elevate or it would sort of catalyze and expedite certain problems that we were trying to solve. For example, when we were thinking about how do you like if you have to do responsible, if you have to do classification, if you have to do behavior. So these are all different parts of a fairly large problem.
And we had a certain approach in mind. But through the cohort, we realized that people were trying to solve similar things, but they did it in a different way. And like, for example, we didn’t think about this through the genetic way. And so then when people started saying that this is what worked for us in that model of deployment, that in many ways kind of expedited the way we solved for that problem. And I think there are multiple examples like that. Also, the idea of responsible, for example, right? Like when you say responsible, it can be a whole gamut of what you mean by responsible. But what does responsible mean for a 25 -year -old versus, say, a grade 8 student?
It’s significantly different. Now, if you’re, you know, if you’re, let’s say, for example, an Avanti and you’re working with a certain cohort that is similar to your cohort, right? What are they doing with responsible that you want to do, right? And what is that conversation that we want to have? So I feel that was certainly helpful.
Interesting. Erica?
Yeah, I’ll just add in. So while these were like seven NGOs, out of which four were from the education and three are sitting here, there were two health NGOs also, and they were both working on building a predictive model to predict high -risk pregnancies. And they didn’t know about, like, you know, each one of them building something like that. So but when the cohort came together, they discussed. And then we felt, you know, the mentors also got together, and then that’s when they felt that we are building the same thing. So then are we happy to collaborate? And that’s when, you know, the future calls that happened was getting these two NGOs together and, you know, building on at least sharing and building it together was one thing.
Second, while we were working with these seven NGOs, one of them, which is not represented here, was working on an assessment model. How can we do assessments of, say, answer sheets which are coming from students because that takes in a lot of time of the teachers. And then we also, since we’ve been working with 200 nonprofits, we were also aware of two other nonprofits who were working in, you know, trying to work on similar solution. So just going beyond the cohort, we brought all three of them together, and we just discussed on, you know, the assessment models that they were building, the rubrics. And we were seeing that there was a lot of commonality. So could we build this together, learn from each other’s mistakes?
So I’m just trying to share here that the learning and sharing goes beyond the cohort also. And that is where, you know, platforms or organizations like us who are more ecosystem players have a role to play. So, yeah, I think these are like two such examples where I could see collaboration happening with just bringing a set of seven NGOs together in a room.
Right. No, thank you, Erica. Those are very interesting and powerful examples. Also building on what Tamina said earlier, contrasting this and the corporate space. I’ve spent a couple of decades on the corporate side, and this kind of collaboration at an early stage of technology is very, very rare. You don’t see that on the for-profit side. And so it’s fascinating to see this kind of thing. And, of course, we are in a room where I’m sure we’re working on similar kinds of problems. And just to exchange notes is so, so important. All right. Just now wrapping up, we have about six and a half minutes. I’d like to understand how many of you in the audience are from nonprofits.
Can you raise your hands? OK, quite a few. All right. So now we have keen non-profit listeners. So the question to the non-profits and also probably others over here is: what are some of the learnings that you came away with which surprised you, or which you didn’t know of before, which could be useful for the non-profits in the audience as well as they go about their AI journeys? If you could share some of your nuggets. Preetam, please.
So one thing we learned is to stop looking for use cases for AI and instead look at the pain points and troubles we already have, and see if AI is a good fit there. It doesn’t have to be “AI is a new technology, let’s build something amazing and innovative with it.” There’s already a lot of friction in the way even our nonprofit runs, a lot of inefficiency, and there is a lot that AI can do to help daily operations, like helping our fundraising team write grants or our programmers write code, which of course is the number one use case right now.
But yeah, that is one thing we really believe, or at least I really believe now.
You know, when AI started becoming really popular, it almost became a knee-jerk reaction for organizations to jump on that bandwagon, and if you weren’t doing it, you couldn’t get into the right rooms with the right kind of funders and you would be lost in that race. Not always, but more often than not there is that pressure. I think what has happened as a consequence is that some part of our DNA was forgotten. Civil society organizations have always been a species that asks questions, sometimes over-indexed on asking questions, but when software development became so much easier to do, and AI became so much easier to do, we started leading with solutions and deploying solutions. And this is true for our space as well: when we go to a classroom, there will be five different NGOs solving the same problem. There’s that learner we all say we’re working for, we’re putting the learner at the center of design, but there are five not-for-profits all trying to solve the same problem for that learner without recognizing what we’re doing to that learner.
And we need to have that conversation, right? So sometimes I think it’s important to just take a moment and have that conversation, or do that assessment, before you start leading with solutions.
Yeah, it reminds me of multiple teams building an app for ASHA workers, who then have to interface with multiple apps themselves, right? Yes, Roy?
Yeah, definitely plus one to what both of them have mentioned. I’d say: don’t build what is sexy, build what is needed. We had a very different use case to start off with, and we ended up building something very different because we recognized that that was actually the need of the hour. So that’s definitely one thing I would say. The second piece is that, especially when we are building for the nonprofit space in India, scale becomes a very important metric; we want to reach thousands and hundreds of thousands of teachers or students or whoever.
And in that whole push for scale, we miss out on some fundamentals like responsibility, the responsible AI we were talking about. So how do we build that in? That’s something I have learned through this process. There’s something we’ve been working on now called the golden set, to ensure that the responses the AI is giving are actually aligned with what we want the AI to give. Those are pieces that, if we fix them early on, make everything easier afterwards. And that’s something we’ve learned through this process.
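For readers who want to see what such a check can look like in practice, here is a minimal, hypothetical sketch of a golden-set evaluation loop in Python. The function names, the JSON layout of the golden file, and the token-overlap scoring are illustrative assumptions, not the cohort’s actual implementation; real teams often substitute rubric-based or LLM-assisted grading for the scoring step.

```python
# Minimal sketch of a "golden set" evaluation loop (illustration only,
# not the panelists' actual implementation). `ask_assistant` is any
# callable that wraps the chatbot being tested.
import json


def similarity(expected: str, actual: str) -> float:
    """Crude token-overlap score; rubric or LLM-based grading is more common."""
    expected_tokens = set(expected.lower().split())
    actual_tokens = set(actual.lower().split())
    if not expected_tokens:
        return 0.0
    return len(expected_tokens & actual_tokens) / len(expected_tokens)


def evaluate(golden_path: str, ask_assistant, threshold: float = 0.6):
    """Run every curated question through the assistant and flag answers
    that drift too far from the reviewed 'golden' answer."""
    with open(golden_path) as f:
        golden_set = json.load(f)  # e.g. [{"question": ..., "answer": ...}, ...]
    if not golden_set:
        return 1.0, []
    failures = []
    for item in golden_set:
        reply = ask_assistant(item["question"])
        if similarity(item["answer"], reply) < threshold:
            failures.append({"question": item["question"], "got": reply})
    pass_rate = 1 - len(failures) / len(golden_set)
    return pass_rate, failures
```

The grading method and threshold are the parts a team would tune; the underlying idea, as described above, is simply that every model or prompt change gets re-run against the same reviewed set of questions and expected answers before it reaches users.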
Great. Thank you, Mainak. The golden dataset you talked about is part of an evaluation framework, something that is also very strongly emphasized in the context of the program. We have a couple of minutes and would like to open up for questions, but before that, any last words? Erica, Tamina?
Yes. I totally resonate with what my three fellow panelists said. Don’t just get on to using technology because everyone else is doing it; only if it really meets your needs should you think about technology. And when you do, please look around first, because solutions may already exist, so don’t jump straight into building something custom. What you are working on is probably something many others are also working on, and reusing will save you time, effort, and money. Even if an existing solution, say an open-source platform, meets 60-70% of your needs, just start with that. Then you can build on it and decide whether you want to keep using it or build something custom. So yeah, those are my thoughts.
That’s a fantastic note to end on. Thank you, and a big hand to the panel. We have one minute and three seconds. Questions? I see a few hands up. (Audience question) Something like a Gemini Gem or a Claude Project, did you all evaluate that? Curious to know the build-versus-buy, or just adopt, journey for any one of them.
Yeah, I can add to that. When we talk about, say, a chatbot: we built the Glific chatbot, and we’ve obviously integrated it with OpenAI, so it’s not that we are reinventing anything. Even the AI platform we are building now, what are we doing in it? We are integrating different open-source elements that already exist. So even when we as an organization build something new, we pick up things that already exist, stitch them together, and then, when we speak to our NGOs about what is missing, we build just that missing part.
It is always part of how we work that we integrate existing open-source solutions rather than build from scratch. Even for dashboards: we have a data management platform, but we haven’t been creating dashboards on it ourselves; we’ve integrated it with Superset, which is another open-source solution. You may not know these names, but what I want to call out here is that even as a tech organization that builds platforms, we are not building things from scratch; we are integrating with other tools that meet the needs, even when we are building our own platforms.
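As a concrete illustration of this “stitch existing pieces together” approach, here is a minimal Python sketch of the kind of glue code such an integration involves: forwarding an incoming question to a hosted model through the OpenAI SDK. The model name, system prompt, and function names are assumptions made for illustration, not the actual stack described above.

```python
# Minimal sketch of "integrating rather than building": a thin wrapper that
# forwards a teacher's question to a hosted LLM and returns the reply, the
# kind of glue a WhatsApp-bot backend might contain. Illustration only.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a teaching assistant. Give short, evidence-based, "
    "classroom-ready suggestions."
)


def answer_teacher_question(question: str) -> str:
    """Send one question to the hosted model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer_teacher_question("How do I explain fractions to a mixed-level class?"))
```

The point of the sketch is the proportion: a few dozen lines of glue around an existing model and an existing messaging platform, rather than a model or platform built from scratch.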
Thank you. Actually, we’re out of time, but we’ll stay back, so if you have questions we’d be happy to have a conversation with you. Thank you for your time. It was great to see a big, engaged audience, and I hope it was useful. Do reach out to any one of us if you want to find out more about the program; a lot of documents have been written and published as blogs by Project Tech for Dev and some of the nonprofits, so we’re happy to keep engaging with you on this journey. Thank you so much, and thank you to the panel.
“The session opened with moderator Manohar Sreekanth welcoming the audience.”
The knowledge base confirms that the moderator’s name was Manohar, indicating a moderator was present at the start of the session [S1].
“The cohort programme involved seven nonprofit organisations.”
The source states that seven NGOs were brought together for the programme [S2].
“Open‑application & screening process – NGOs submitted applications, then participated in calls to verify fit and to confirm a requirement for a dedicated internal resource.”
The knowledge base describes an open application followed by screening calls with the NGOs to assess fit [S2].
“In‑person workshops – the cohort included workshops where each NGO presented its progress in person, fostering direct peer exchange.”
A related source notes that workshops are often transformed into blended, multi-month programmes to enable deeper engagement, providing context on the use of in-person and hybrid workshops in similar initiatives [S34].
“The moderator delegated the introductions of the panelists to the conversational assistant Elizabeth.”
General descriptions of moderator duties indicate that moderators typically introduce speakers and manage the flow of the discussion, which aligns with the moderator handling introductions, though the delegation to an AI assistant is not mentioned in the knowledge base [S89].
The panel shows strong convergence on the value of cohort‑based programs for mentorship, peer learning and collaborative solution building; on embedding responsible AI safeguards from the outset; on the need to ground AI projects in real organisational pain points and to leverage existing open‑source tools; and on systematic evaluation and shared technical resources to overcome capacity constraints.
High consensus across funders, program designers and implementing NGOs, suggesting that future AI for development initiatives should institutionalise cohort models, responsible‑AI partnerships, needs‑driven design and shared technical pools to maximise impact.
The panel largely converged on the value of cohort‑based programs, open‑source solutions, and responsible AI. However, key disagreements emerged around how to provision technical capacity (shared pool vs hiring engineers), how to embed evaluation and impact measurement, and the preferred mechanisms for ensuring AI safety (external partnerships vs internal guardrails). These divergences reflect differing operational priorities and experiences among funders, technologists, and NGOs.
Moderate – while there is broad consensus on goals (effective, ethical AI for NGOs), the participants differ on implementation pathways. This suggests that future cohort designs will need to reconcile resource models, evaluation frameworks, and safety strategies to achieve cohesive outcomes.
These pivotal comments collectively shaped the panel from a high‑level overview of the AI cohort program to a nuanced exploration of its underlying philosophy, practical implementation, and ethical considerations. Erica’s and Tamina’s early insights established the cohort’s learning‑centric purpose, while Tamina’s resource‑pooling model and evaluation framework introduced concrete mechanisms. Mainak’s WhatsApp example grounded the talk in real‑world constraints, and Steven’s shift to behavior‑science highlighted interdisciplinary challenges. Subsequent remarks on responsible AI, the pitfalls of hype, and leveraging open‑source solutions deepened the conversation, steering participants toward a pragmatic, impact‑focused, and ethically aware approach to AI in the nonprofit sector.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.