Transforming Health Systems with AI: From Lab to Last Mile

20 Feb 2026 17:00h - 18:00h


Session at a glance – Summary, keypoints, and speakers overview

Summary

The session opened with Vikalp Sahni presenting an AI-driven, end-to-end health-care platform that aims to eliminate information fragmentation, simplify history collection, and free doctors from administrative tasks [6-9]. Leveraging in-house AI, the system creates a digital identity (ABHA), aggregates patient-generated records into a personal health record, and uses language-aware prompts to summarize conditions and schedule appointments, as illustrated by the Neeti use-case [14-45]. During a clinic visit, the AI-enhanced EMR provides real-time transcription (EkaScribe), alerts clinicians to drug allergies, and automatically generates multilingual discharge notes that are synced back to the patient’s PHR [46-70]. Sahni acknowledged remaining challenges such as multilingual scaling, data verification, and model evaluation at large scale [73-76].


Sindura Ganapathi then highlighted the personal relevance of the problem, noting the burden of paperwork for caregivers and raising the question of veterinary care relevance [78-88]. A panel of regulators, funders and researchers – including Dr. Richard Rukwata, Prof. Charlotte Watts, Dr. Monika Sharma and Dr. Trevor Mundel – was introduced to discuss AI’s role in health systems [91-118]. Watts emphasized the shift from hype to substantive, global conversations about integrating AI responsibly into low- and middle-income health systems and the need for rigorous real-world evidence [127-132][213-218]. Mundel stressed that technology accounts for only about ten percent of AI success, with the remaining effort focused on ecosystems and defining human-in-the-loop roles [137-141].


Rukwata described the regulator’s dilemma of accelerating innovation while ensuring safety, and outlined collaborations with the Gates Foundation to create neutral, AI-enabled drug-approval applications [159-176]. When asked about data privacy, Sahni explained adherence to HIPAA, India’s DPDP Act, and the use of end-to-end encryption, while Watts added that funded evaluations will enforce strict anonymity and ethical clearances [271-280][287-292]. Mundel further noted emerging privacy-preserving techniques such as federated learning, citing a grant that used local data to improve an ultrasound diagnostic model without central data sharing [294-301]. Sharma highlighted the funders’ commitment to shared standards, reducing fragmentation, and providing a single evaluation framework to ease researchers’ workload [240-255].


Responding to a query on AI agents for high-anxiety maternal care, Sahni advocated a multi-agent architecture with a grounding agent and continuous human oversight to ensure safety and contextual relevance [336-344]. The session concluded with calls for next-year milestones: a transparent, error-free patient-facing agent (Mundel), operational partners demonstrating real-world impact (Watts), tighter regulator-industry collaboration (Rukwata), and preserving the clinician’s final decision (Sharma) [354-366].


Keypoints


Major discussion points


EkaCare’s end-to-end AI-driven care platform – Vikalp outlined a solution that tackles three core problems: fragmented health information, cumbersome patient history collection, and doctors spending too much time on documentation instead of care. He demonstrated how a digitally-savvy patient (Neeti) creates an ABHA ID, uploads records, interacts with a multilingual AI assistant, schedules an appointment, and how the doctor’s EMR is auto-populated, with AI alerts for drug allergies and automatic translation of notes into the patient’s language [6-9][14-20][30-38][45-51][54-62][66-70].


Technical and operational challenges of scaling AI in health – The presenters acknowledged hurdles such as multilingual support, data verifiability, model evaluation, and the need for robust governance. Vikalp noted “challenges… how to build these things at scale for multiple languages… who is evaluating these capabilities” [73-77]; later, concerns about hype, regulatory pressure, and the balance between speed and safety were raised [144-149][151-158][159-180].


Regulators and funders navigating speed vs. safety – Panelists (Richard, Charlotte, Trevor, Monika) discussed the tension between accelerating innovation and ensuring patient safety, the role of regulators as the “last person to be blamed” [159-168]; funding bodies emphasized the need for rigorous real-world evidence, cost-effectiveness, and coordinated standards to avoid fragmented expectations [213-229][240-247].


Human-in-the-loop, multi-agent architecture and privacy – Trevor stressed that technology is only ~10% of AI success, with ecosystems and people being crucial [137-141]; Vikalp described adherence to HIPAA/DPDP, encryption, and certification for data privacy [271-280]; later, a multi-agent design with grounding agents and continuous medical oversight was advocated to keep AI safe in high-anxiety contexts like maternal health [336-344][294-301].


Future aspirations for the AI health community – Participants expressed what they hope to see at the next summit: transparent, explainable patient-facing agents; deeper operational evaluations with funder-partner collaborations; stronger industry-regulator cooperation; and maintaining the human clinician’s final authority [354-359][361-366][363-364].


Overall purpose / goal of the discussion


The session aimed to showcase a concrete AI health-care solution (EkaCare), surface the technical, regulatory, and ethical challenges of deploying AI at scale, and gather perspectives from regulators, funders, and practitioners to shape collaborative, evidence-based pathways for integrating AI into global health systems.


Overall tone and its evolution


– The conversation opened with an informative and demonstrative tone as Vikalp walked through the patient-centric AI workflow.


– It shifted to a collaborative and reflective mood when panelists shared personal anecdotes and acknowledged both hype and genuine concerns.


– A serious, problem-solving tone emerged around regulatory pressures, data privacy, and the need for rigorous evaluation.


– The closing segment turned optimistic and forward-looking, with participants expressing hopes for transparent agents, coordinated funding, and concrete outcomes at the next summit.


Thus, the tone moved from demonstration → reflection → concern → optimism, mirroring the progression from presenting a solution to discussing its broader ecosystem implications.


Speakers

Charlotte Watts – Areas of expertise: public health, HIV, gender-based violence, epidemiology, mathematics; Role/Title: Executive Director of Solutions, Wellcome Trust; former UK government official and G20 participant [S1]


Participant – generic audience member; no areas of expertise or role/title recorded


Monika Sharma – Areas of expertise: biomedical research, science innovation, health-sector funding; Role/Title: Dr. Monika Sharma, Lead, No One Artists India Foundation [S6]


Vikalp Sahni – Areas of expertise: AI in healthcare, digital health platforms, EMR integration; Role/Title: Founder/Representative, EkaCare (AI for Bharat’s Health) [S7][S8]


Richard Rukwata – Areas of expertise: pharmaceutical regulation, regulatory harmonization in Africa; Role/Title: Dr. Richard Rukwata, Director General, Medicines Control Authority of Zimbabwe; chief regulator [S9]


Sindura Ganapathi – Areas of expertise: veterinary medicine, regulatory affairs, conference moderation; Role/Title: Moderator/Host of the session; involved in G20 from the India side [S11]


Trevor Mundel – Areas of expertise: pharmaceutical development, global health, health-innovation funding; Role/Title: Dr. Trevor Mundel, Rhodes Scholar, former medical doctor and PhD in mathematics, senior leader in global health and innovation funding [S12]


Additional speakers:


(none identified beyond the list above)


Full session report – Comprehensive analysis and detailed insights

1. Introduction & three core challenges – Vikalp Sahni opened by asking the audience if anyone had never visited a doctor, quickly showing that virtually everyone has experience with medical care [1-3]. He then identified three persistent problems in health delivery: (i) fragmented information from appointment-booking to vitals collection, (ii) difficulty for patients to convey a complete medical history, and (iii) excessive clinician time spent on documentation rather than patient interaction [7-9]. Sahni positioned EkaCare’s end-to-end AI-driven platform as a solution that uses in-house artificial intelligence to address all three challenges [10-12].


2. Neeti patient journey – The patient-facing workflow was illustrated through “Neeti”, a 65-year-old digitally-savvy woman with diabetes [13-16]. She first creates an ABHA (Ayushman Bharat Health Account) digital identity [17-18] and uploads photographs of her legacy records into a Personal Health Record (PHR) app, where AI extracts and digitises her history [19-20]. Neeti then asks a multilingual AI assistant to “summarise my health”, receiving a concise overview of her conditions [21-25]. When she reports a fever and a foot wound in her native language, the AI asks targeted follow-up questions (e.g., wound location, swelling, odour) and presents language-appropriate prompts that simplify interaction for a senior user [26-34]. After gathering contact details, the system recognises the case as urgent, suggests available doctors on a specific date, and creates an appointment once Neeti selects a provider [35-45].


3. Doctor’s EMR interaction – At the clinic, the physician views an AI-enhanced electronic medical record that already displays Neeti’s past history and current complaints [46-51]. By activating the audio-based “EkaScribe”, the conversation is transcribed in real time, producing verifiable notes that are automatically copied into the EMR [54-58]. The AI detects Neeti’s allergy to amoxicillin, raises an alert, and the clinician promptly switches the prescription to clindamycin [59-65]. All notes are rendered in the patient’s local language and, with a single click, synchronised back to Neeti’s PHR, creating a new node for future consultations [66-70]. Sahni highlighted that this workflow consolidates fragmented data, provides safety checks, and delivers multilingual documentation [71-72].
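The allergy alert described above amounts to a rules check layered on top of the transcribed notes: every drug entering the prescription is screened against the allergies recorded in the patient's history. A minimal sketch of that check follows; all names (`Patient`, `check_prescription`) are hypothetical illustrations, not EkaCare's actual API.

```python
# Illustrative sketch of a prescription-time allergy check, assuming a
# simplified patient record. Not EkaCare's implementation.
from dataclasses import dataclass, field

@dataclass
class Patient:
    name: str
    allergies: set = field(default_factory=set)  # drugs the patient reacts to

def check_prescription(patient: Patient, drugs: list) -> list:
    """Return alert messages for any prescribed drug the patient is allergic to."""
    known = {a.lower() for a in patient.allergies}  # case-insensitive match
    return [
        f"ALERT: {patient.name} is allergic to {drug}"
        for drug in drugs
        if drug.lower() in known
    ]

neeti = Patient(name="Neeti", allergies={"Amoxicillin"})
print(check_prescription(neeti, ["Amoxicillin"]))   # raises an alert
print(check_prescription(neeti, ["Clindamycin"]))   # no alert
```

A production system would of course match on standardized drug codes and cross-reactivity classes rather than raw name strings, but the shape of the safety check is the same.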


4. Scalability & evaluation challenges – Sahni acknowledged major hurdles to large-scale deployment: supporting dozens of Indian languages, ensuring model verifiability, and defining who will evaluate performance at scale [73-77].


5. Sindura Ganapathi’s opening remarks – Sindura shifted the tone by first asking whether a veterinary doctor counted as a “doctor” [78-80] and noting the untapped business potential in pet care [82-84]. She then shared her personal experience as a caregiver for a mother with multiple chronic conditions, confirming that the interfaces described by Vikalp mirrored real-world frustrations with paperwork [85-88]. She further described reading the blog post by Anthropic’s CEO, initially feeling “bleak” about the state of AI in health, but becoming “energised” by the hustle of the summit, and invited participants to share how the last two-to-three days had made them feel [115-124].


6. Panel introduction – The discussion moved to a panel of regulators, funders and researchers: Dr Richard Rukwata (Zimbabwe Medicines Control Authority), Prof Charlotte Watts (Wellcome Trust), Dr Monika Sharma (No One Artists India Foundation) and Dr Trevor Mundel (pharmaceutical and global-health veteran) [91-118].


7. Panel discussion highlights


Charlotte Watts used the early-day energy of the summit to call for a shift from hype to substantive, global conversations about AI in health, especially in low- and middle-income countries [127-132]. She stressed the need for rigorous real-world evidence (randomised controlled trials, cost-effectiveness analyses, and system-integration assessments) before AI can be scaled [213-229].


Trevor Mundel reiterated that technology alone accounts for only about ten percent of AI success; the remaining effort lies in people, workflows and ecosystem design, and in defining the human-in-the-loop role [137-141]. He later advocated a multi-agent architecture with a grounding agent and continuous medical oversight for high-anxiety maternal and infant care [336-345].


Monika Sharma contributed a personal anecdote about her 6½-year-old child’s view of AI, illustrating how early perceptions shape expectations [140-148]. She also argued for shared standards among funders to reduce fragmentation, avoid duplication, and ensure that AI investments translate into real-world impact [240-247]; she reminded the audience that the clinician’s final decision must remain central [366].


Richard Rukwata described the “dual pressure” of accelerating innovation while remaining the ultimate point of accountability when things go wrong [159-168]. He referred to a podcast where this tension was discussed and cited a collaboration with the Gates Foundation to develop AI-enabled screening tools for marketing authorisations, aiming to create neutral applications that speed review without compromising safety [170-176].


8. Q&A – Data privacy – A participant asked for policy-level guidance on data privacy [265-267]. Vikalp responded that EkaCare follows established frameworks such as HIPAA, India’s DPDP Act and NHA guidelines, pursues relevant certifications, and employs end-to-end encryption [271-280]. Prof Watts added that funded evaluations will enforce strict anonymity, ethical clearances and privacy safeguards [287-292]. Dr Mundel introduced federated learning as a promising technique that keeps raw data local while still improving models, citing a grant-funded ultrasound diagnostic system that used federated contributions without central data sharing [294-301].
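The federated-learning idea Mundel raised can be made concrete with a small sketch of federated averaging, the canonical pattern: each site trains on its own data and shares only model weights, which the server combines (here as a sample-weighted mean) without ever seeing raw patient records. This is purely illustrative and not tied to any specific framework or to the cited ultrasound grant.

```python
# Minimal sketch of federated averaging (FedAvg-style), assuming each client
# submits a weight vector plus its local sample count. Illustrative only.
def federated_average(client_updates):
    """client_updates: list of (weights, n_samples) pairs from local sites.
    Returns the sample-weighted average of the weight vectors."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    averaged = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            averaged[i] += w * (n / total)  # weight by local data volume
    return averaged

# Two hospitals contribute locally trained weights; only vectors leave the site.
site_a = ([0.2, 0.8], 300)   # trained on 300 local scans
site_b = ([0.6, 0.4], 100)   # trained on 100 local scans
print(federated_average([site_a, site_b]))  # ≈ [0.3, 0.7]
```

Real deployments add secure aggregation or differential privacy on top of this, since even shared weights can leak information; the sketch shows only the data-stays-local structure.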


9. Q&A – TB geospatial decision-support – A participant queried the operational use of geospatial AI for active-case-finding and diagnostic-network optimisation for tuberculosis [300-306]. Charlotte Watts responded that geospatial AI can help identify hotspots, optimise resource allocation and must be evaluated for cost-effectiveness and integration with primary-care pathways [307-315]. Trevor Mundel added that funding constraints and the need for robust validation mean such tools should be piloted in partnership with national programmes before wider rollout [316-324].


10. Q&A – Maternal & infant care agents – When asked about AI agents for high-anxiety maternal and infant care, Vikalp reiterated the importance of a grounding agent and a dedicated medical team to keep the system within safe boundaries [336-345]. He noted that a single-prompt agent can narrow the worldview, whereas a collaborative multi-agent design mitigates risk, especially when mental-health considerations are involved [340-344].
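The multi-agent design described above can be sketched as a small pipeline: a drafting agent proposes a reply, a grounding agent checks it against vetted clinical guidance, and anything unverified is escalated to the human medical team. The agents below are stubs with hypothetical names; in a real system each would wrap an LLM or a clinician-curated knowledge base.

```python
# Illustrative multi-agent sketch: draft -> ground -> (answer | escalate).
# All content and function names are hypothetical, not a real medical system.
VETTED_GUIDANCE = {  # stand-in for a clinician-approved knowledge base
    "fever": "A mild fever can be monitored at home; seek care if it persists.",
}

def drafting_agent(question: str) -> str:
    # Stub: a real system would call an LLM to generate a free-form reply.
    topic = question.lower().split()[0]
    return VETTED_GUIDANCE.get(topic, "I think you should try a home remedy.")

def grounding_agent(reply: str) -> bool:
    # Approve only replies that trace back to vetted guidance.
    return reply in VETTED_GUIDANCE.values()

def answer(question: str) -> str:
    reply = drafting_agent(question)
    if grounding_agent(reply):
        return reply
    return "Escalated to the medical team for review."  # human in the loop

print(answer("fever in an infant"))   # grounded reply is returned
print(answer("rash on my arm"))       # ungrounded draft is escalated
```

The design choice mirrors the point made in the session: a single-prompt agent answers everything itself, while splitting drafting from grounding gives the system a structural place to say "I don't know, ask a human", which matters most in high-anxiety contexts.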


11. Closing wishes for the next AI Summit (Geneva) – Each panelist offered a brief “next-year wish”:


Trevor Mundel: a next-generation, fully transparent patient-facing agent that never makes contraindication errors and inspires complete confidence [354-359].


Charlotte Watts: funded partner organisations presenting operational learnings, moving the dialogue from hype to honest assessments of what works and what does not [361-366].


Richard Rukwata: deeper collaboration between industry and regulators to turn the latter from perceived bottlenecks into partners for safe, effective medicines [363-364].


Monika Sharma: unified evaluation standards to reduce fragmentation and ease researchers’ workload [240-247].


12. Core themes & remaining gaps – The session converged on four core themes: (1) AI must be built around a human-in-the-loop, ecosystem-centric design; (2) rigorous, real-world evidence and cost-effectiveness analyses are prerequisites for scaling, especially in low-resource settings; (3) unwavering commitment to data privacy through legal compliance, technical safeguards and emerging techniques such as federated learning; and (4) collaborative frameworks that align regulators, industry and funders to balance speed with safety. Unresolved issues include concrete policy guidance for privacy-by-design, scalable multilingual model verification, prospective evaluation of geospatial AI for tuberculosis case finding, and detailed specifications for reassuring maternal-health agents [73-77][137-141][213-229][271-280][240-247].


Session transcript – Complete transcript of the session
Vikalp Sahni

All of us here, we would have visited doctors at some point in time or have been sick. Anyone who has never visited a doctor, please raise a hand. So practically everyone. So let’s imagine: how was your experience when you visit a doctor? How do you express your symptoms? How does the doctor interact with you, and how does the interaction happen with the medical systems where EMRs come in? What we are trying to show today, and what we’ve built at EkaCare, is an end-to-end solution that solves three key challenges that we face today. One is the fragmentation of information and care delivery, be it right from taking an appointment or taking vitals. The second is how easily and comfortably you can tell about your history, rather than fumbling through lots of files, and how easily it can be collected, collated and being.

And the last but not the least: we would want doctors to spend time with us, and not with machines writing about prescriptions; rather talking to us, counseling us, connecting with us. So the solution that we have built solves for all these three challenges. Obviously, thanks to the advancement in AI, we have been able to do a lot of this due to the capabilities that we have built in-house. So I’m going to narrate a story. This story is of Neeti. She’s a 65-year-old female, has diabetes, and she wants to now see how she can do the whole end-to-end care delivery. To start off, Neeti is quite digital savvy. She actually has created her ABHA address.

ABHA is the digital identity that the India government provides. This digital identity allowed her to collect a lot of her medical records into the app, which is her PHR or patient health record app. She has also taken many photographs so that the AI can read through these photographs and collect her medical history in a digital format so that it can be summarized. Now what happens is Neeti wants to talk to an AI, which is a med assist or an assistant for Neeti. She goes ahead, she just picks up a prompt, says summarize my health. What is happening now is all of Neeti’s health is getting summarized. You would know, Neeti would know, these are the kind of things that have come up from the medical records.

Also, there is a prompt that Neeti would get, which is very, very relevant to the kind of things that Neeti is supposed to talk about. But today Neeti came for a very different purpose. And now in a local language, she’s talking to the bot. And she’s talking to Neeti. And Neeti is actually telling that In English, she’s expressing that she has fever and there is a wound in her foot. What AI would start doing now is try to understand more about this specific condition. Where is the wound? Is it swelled? Are there any kind of smell that is coming in? And all of this is happening in the local language that Neeti understands. More importantly, it is not letting Neeti to only type or talk.

There are these prompts that are coming in that will ease off the interaction for a 65-year-old female. After collecting more information, such as mobile number, the AI would identify that this is an important case and this needs a doctor’s intervention. But which doctor’s intervention? With which clinic? On which day? All of this information will now get collected. This will be displayed. So in this case, Neeti is being told that there is an availability of these two doctors on 14th of February. But she can always say that, okay, I want to do it on a different day. Pick up the doctor. As soon as she picks up the doctor, the appointment gets created. Neeti can actually do all of this by typing or by acting on the prompts as well.

So this is how all the information that Neeti wanted to share with the doctor gets collected, gets summarized, and now appointment is created. The next story goes to when Neeti visits the doctor’s clinic. And when Neeti visits the doctor’s clinic, this is the doctor’s view where a doctor is looking at a classical EMR screen. but how this EMR screen is fitted with these AI utilities that can help a doctor to get the better outcome is what we want to demonstrate. If you see all the current EMR and the current prescription for Neeti is completely empty. There is nothing there. Doctor is looking at the past history of Neeti as well as what are the current ailments and current issues that has been listed.

AI also ensured that it not only figures out the important information for the patient, but here a doctor is also able to understand and get to know more about Neeti: that there is uncontrolled diabetes. So this is the kind of person that he’s dealing with. But more importantly, it would be very hard for a doctor to start filling in all of this information. During the consultation, the doctor just starts the audio-based EkaScribe, which is now doing the interaction between doctor and the patient, recording the interaction between doctor and the patient. These interactions get converted into medical notes, and these medical notes are verifiable medical notes that doctors would see. Again, this entire thing has come out just by the interaction between the doctor and the patient.

The doctor has to just do copy to EMR pad. As soon as the copy to EMR pad happens, this entire information gets filled: whatever has been discussed, all the medication that the doctor wanted to give. But here we go and see that during the consultation, the doctor prescribed amoxicillin. But the patient’s medical history said that he or she is allergic to amoxicillin. The capable AI-based EMR is now alerting that the patient is allergic to amoxicillin. Without actually going deeper, a doctor can very easily go ahead now and change this medication to provide for a better outcome as well as to reduce the medical errors.

So it’s changed from amoxicillin to clindamycin. As it changed, the prompt also changed. If you look at the information, all filled, the PDF view of the patient will have the entire medications, everything created in the local language. There is a translation of all the remarks, advices, everything in the language that patient understands. And at the click of a button, this information goes and sits into the patient’s PHR app, creating another node into her medical system. That can be used for the further consultation and any kind of other ailments. So that’s what is the power of AI and the utilities that we are seeing today. The care process right from being fragmented to being consolidated, understanding the patient’s entire medical history to making sure that the doctor’s time is saved while he’s seeing more patients and more medical data is captured.

Today, all of that is possible. But yes, there are challenges. How to build these things at scale for multiple languages; how to generate the data so that your models are verifiable at that large scale; who is evaluating these capabilities that are being built? All of these are challenges that we as developers face. And I’m looking forward to building more and working more in this domain.

Sindura Ganapathi

I’ll ask you to take a seat. When you said, is there anyone who has not visited a doctor, instinctively I was asking, does a veterinary doctor count? Because I’m a veterinarian by background. And then it’s only a half joke, actually. In the pet care industry, there is real value and business to be made there. So just a thought. And on a more serious note, you could change the name of the lady and adjust age, et cetera. That could be my mother. And I deal with this personally as a caregiver; she has all these conditions; I deal with so many papers. And every interface you mentioned is a leaf out of my personal life. So thank you for thinking about building a solution here.

I will invite my panelists one by one; please join us on the stage. First, Dr. Richard Rukwata. He is the chief regulator; he is the director general of the Medicines Control Authority of Zimbabwe. I have very high regard for regulators because I have been working on our regulatory agency and the streamlining, and I can see how difficult a job that is. And the fact that you have seen this through for ML3 recognition, that’s a wonderful accomplishment. Congratulations on that. Not an easy job. And also, you are involved in the regulatory harmonization work of Africa, and there is a lot of interesting thoughts you will hopefully be able to share on that. Next, I would like to invite Professor Charlotte Watts.

Last we saw was in G20. Hopefully, it brings back memories. Yes. Happy ones. I’d like to keep it that way. She has had an extensive career in both health care, HIV, gender-based violence, epidemiology, mathematics, and deep experience working in the government, the UK government, which was the capacity she came in for the G20 meetings, which I was involved in from the India side. So it’s a pleasure to have you back, Charlotte. And now she’s working at the Wellcome Trust as Executive Director of Solutions. I would love to hear more about how you are thinking about these things. And next I would like to invite Dr. Monika Sharma. I happened to meet her just now, and she is the lead for the No One Artists India Foundation.

And welcome. Her background is also in both the biomedical field and the science innovation field, but she also has extensive experience working in putting together funding programs, whether it is the Newton Fund, whether it is IRTG, Germany’s International Research Training Groups, or India’s BioPharma Mission Program. So all of these, I’m sure, will come in very handy in your current role, and I would love to hear from you on thoughts related to the topic today. And last but not least, my dear friend and mentor, Dr. Trevor Mundel. I should say Dr. Dr. Trevor Mundel. He is both a, he has an unusual background. People who work with him smile when I say unusual. And Trevor, he did a medical degree and then he figured he wanted a Ph.D.

in mathematics. So, and he is a Rhodes Scholar, and has extensive experience in the pharmaceutical industry, from early research to development, and a decade-plus experience in global health. With that, we will get started. First, to begin with, so I think hopefully you all have mics. For me personally, coming here after having read the blog that went out very famously by the CEO of Anthropic, I came in with a very bleak feeling, to be very honest. It’s kind of depressing: what are we creating? But I have to say the last two, three days have been energizing, seeing all the chaos in terms of interactions, people talking to each other, hustle, just hustle, and people excited about the product they are building. It brought me memories of the vegetable market where I grew up, where people are, like, life there, right? People are trying to sell something, people are trying to buy something, people are talking. And the reason I talk about that as a happy thing is it’s nice to see so many human beings. That’s what came to my mind, in the backdrop of that blog. So I would just love to hear from you: what was your feeling as human beings seeing this? Anything that you want to particularly share from the last two, three days? You have been here.

You saw all of this. What did that make you feel? Because I think going forward, this feeling of human beings, I think, will have a currency of its own. Anybody wants to volunteer and say something? An open-ended question.

Charlotte Watts

Yes, I’m happy to jump in. So I just got here, actually, yesterday. So I actually missed, I think, the early start of the week, which I heard was fantastic, because you had the youth here, as well as, you know, older people who’ve been in the global health or the global sort of sphere, or in the AI world, for longer. So that mix and the drive of the kind of energy, I think, is what I was hearing people tell me about the start of the week. But now I’ve just been here, yeah, sort of last night and today. And for me, what I feel quite reassured about, I wonder if it’s, so, you know, the change

is so profound, and so I suppose I was sort of wary, because there’s so much hype, um, and then clearly the risks are being articulated. But what I feel reassured about, in going to a number of sessions, is just that actually we’re starting to have the more meaningful conversations about what this really means, that is getting beyond either the hype-sell or the hype-fear to actually, how do we navigate this space? And also, how do we navigate this as a global community? Because this is not something that’s one country’s problem to fix. So actually I’m feeling that, you know, this is a really important conference, and we’re starting to get into the nitty-gritty of how on earth we move forward in the best way.

Sindura Ganapathi

Anybody else want to share? Trevor and then Monica.

Trevor Mundel

Well, Sundara, you know, what I’ve heard frequently in this meeting, and I hear it quite often in the AI application space, is that technology is just 10% of the exercise in applications of AI. And the rest is really around people and ecosystems. And as soon as people say that, they then go back to talk about technology. So, I am interested in how we do more than just pay lip service to this notion that we really need to think about the ecosystem and the people involved, probably more than the technology itself. And defining the actual role for humans in the loop is going to be, I think, you know, as important as any of the technological advances.

Monika Sharma

So, Sundara, I don’t have an experience from the summit as such, because I’ve just arrived here. But I want to share a very relevant experience from this morning. While I was coming here, I have a six-and-a-half-year-old who just saw AI on my, you know, computer, and he said, where are you going? I said, yeah, I have a meeting to attend. He said, AI? And I was like, oh, he’s able to see. I said, so do you know what this is? He said, yeah, it’s artificial intelligence. And I said, what else do you know about it? He said, yeah, soon there are going to be robots, robots doing everything for us. And I was like, no, but still you would need me. And I thought, oh my god, that’s not a good start of a conversation. Like, everybody is influenced by this. So thank you so much for bringing that human back to this summit. Yeah, that’s what I thought I’ll add, as a, you know, a conversation from my household this morning. Thank you.

Sindura Ganapathi

Yeah, no. Charlotte, I hope you’re right that there is a lot of hype; now I’m praying for hype after reading it. How many of you have read the blog I referred to, by Dario Amodei, the Anthropic CEO? Okay. Few hands. I am not even sure whether I want to urge you to go read it, because it really makes you think. And there were some people who are in the field who said, I am choosing not to read it because I don’t want to know. No. So it’s a good thing to hear this, that this human in the loop and the way we responsibly develop, because that’s the theme we want to explore, especially in the context of health.

That, I think, is a good segue. Dr. Richard, I want to ask you, start with you. The job of a regulator, I said, is hard. The reason: I experienced it firsthand, having now worked very closely with our regulatory system, et cetera, where you have two extreme pressures on a regulator. The one: it needs to move fast. Everybody wants there to be less regulation, and you want to speed up innovation, and every day gets counted, and you are held to the metric. That’s one extreme. The other extreme is, boy, if anything goes wrong, who is the first person? Who approved this? Who allowed it to come out? So, these are two extreme things, and usually in a slower cycle, you are able to have some time.

So, how are you thinking about reconciling these two extreme demands placed on a regulator, both in the age of AI and in general?

Richard Rukwata

Yes. Thank you for that insight. I have to think on my feet here, but you’re quite right. It’s a matter of industry wanting more return from the regulator on their investment, while also wanting the regulator to retain responsibility when things go wrong. I remember watching a very interesting podcast, I think it was called Moonshot, and in this episode they were saying, well, if all the jobs are taken by AI, regulatory jobs will be the last to remain, because people should always have somebody to blame, right? We can’t say, oh, somebody was harmed? No, AI did it. That would never work. So, worst case scenario, I’ll be the last person there so that they can hang me when something goes wrong.

At least I have that job security to think about. But really, with respect to industry’s expectations, we see a lot of potential in AI. We’re currently working, with a grant from the Gates Foundation, on an application for screening applications for marketing authorizations. Those in our industry, the pharma industry, know that this is the biggest source of angst among industrialists: that regulators take too long, and that we are seen as an impediment to progress. We also blame industry; we say, well, you submit incomplete applications and then blame it on us. So we’re hoping that with technology we’ll have, in the near future, applications that can work for both sides of the fence.

Neutral applications that don’t necessarily speak to one side, but enable all of us to reach a common position very quickly. This is the beautiful thing about computers, right? They don’t feel any type of way about you. They don’t like you; they don’t dislike you. So we’re hoping that as we work towards the development of these tools, we’ll see more traction from industry, so that we become a more efficient part of the supply chain from development to market, and are not seen as a barrier to entry in this field.

Thank you.

Sindura Ganapathi

That’s very helpful. There is a challenge for a regulator when AI speeds up the cycle of innovation and brings new complexities, but AI is also itself a very good tool, whether in summarizing a complex application or in building models that allow a few people to have the same capability as a well-developed pharma company, so everyone is on the same page. So, lots of interesting possibilities here, which in India we’re also thinking about along those lines. All three of you share one commonality, which is funding innovation, and as funders of innovation you are, in not too dissimilar ways, trying to balance promoting innovation while upholding safety and minimizing risk. So I would like to hear from each of you, because each of you is a different kind of funder, how you are thinking about balancing these two in your funding programs while spurring innovation and speeding it up. You can go in any order; you can thumb wrestle.

Charlotte Watts

Trevor’s pointing to me, but I went first last time, so now I can go. I mean, we fund a range of innovations with the ambition of improving and saving lives.

Increasingly, we are funding innovation

Trevor Mundel

You know, on the acceleration front, we look at it this way: every month we don’t have the next-generation malaria vaccine, we’re certainly seeing hundreds of thousands of deaths in young children every year. Every year we don’t have the enhanced personal coaching in education, we see a generation that is losing opportunities. So we feel a tremendous pressure from the funder side in terms of how we speed the availability of, and access to, a tool which looks like it might be a solution to some of those vexing problems. But I think it really behoves us here to consider that focusing completely on fast might be slow.

And we have to have this moment of reflection, because what could derail the good application of AI? In the health area, which is so sensitive, it is the relatively few errors that could occur, as on the regulatory front: the unfortunate outcome for a patient which can be attributed to a system, probably misused by the people who were using it, but which will nevertheless be attributed to AI. And that leads to a tremendous deceleration and things not moving ahead. Take the lesson of the self-driving vehicles: they may be incredibly good drivers, better than the average human, but one fatal accident puts that whole enterprise at risk.

So I think, from the funder’s perspective, we need a situation where taking a slightly more reflective and slower approach might actually be fast.

Sindura Ganapathi

Monika?

Monika Sharma

So I represent the Novo Nordisk Foundation, and we support both health and people and planet. Sitting here at this point with global funders, with yourselves, sends a strong message about how important AI is at the moment with respect to health. So while, as funders, we are trying to address different parts of the ecosystem addressing health, having AI bring evidence to this really matters. I really feel that having a joint approach strengthens the whole ecosystem of AI.

Sindura Ganapathi

The QR code allows you to look at it, and all the details, I believe, are there; I have not tried it. I would very quickly like to hear from any of you, or all of you: what are you hoping for from this? And after this, you know, I usually find panels very boring, by the way, whether as a person sitting there or as a person sitting up here trying to give gyan in two minutes. So I would love to make it more interactive. Get your questions ready; there is still time. Right after this, I would like to see you interacting, sharing your thoughts and questions.

I’ll be coming to you. So, who wants to share their hope for this call?

Charlotte Watts

Yeah, so we’re really excited about this announcement today. You’ll see it’s the big health research and innovation foundations coming together to jointly support what is a major initiative. Essentially, what we want to do here is ask: how do we generate real-world evidence on what it really means, and whether we are really seeing real-world health impacts, once we start to integrate AI into different health systems? We have lots of exciting opportunities showing the efficacy of particular applications, but what this call really wants to support is rigorous evaluations of AI systems integrated into clinical decision-making. Our focus is on low- and middle-income countries. We are interested in really asking a range of questions.

What does it mean for the health system? Are these new initiatives actually operable? Can they be integrated into what is often quite a big bureaucracy of a health system? What are the costs associated with that? Are these interventions actually cost-effective? In the end, ministries of health have to make decisions based on affordability, so how do we learn more about the costs of this transition? And what are the things we didn’t expect? If we look at the evidence base, we have a lot of exciting evidence of interventions that show promise, but only a relative handful of rigorous randomized controlled trials actually assessing interventions when they’re implemented.

So there’s a massive gap there. And we’re also now starting to see, in different contexts, anecdotal evidence of AI being integrated but butting up against the system, so that the opportunity isn’t being realized; it is proving easier said than done. So basically this investment is to try to address that evidence gap. And I just want to call out that Jay Powell is here, along with APHRC, who are key partners on this in supporting the implementation and, for APHRC, the contextualization of the work that we hope to be supporting in Africa.

Sindura Ganapathi

Wonderful. Thank you. Anything else, Trevor, you want to add?

Trevor Mundel

Well, I just want to say thanks to our partners who welcomed the Novo Nordisk Foundation onto this initial effort; I hope it’s the start of even more in the future. The global health world has been plagued by this lack of primary data. We and others have funded a lot of modeling and simulation around global health problems, but you cannot transcend the lack of primary data at the end of the day. And AI is too important for that to be the constraint that impedes implementation.

Monika Sharma

I thought it would be good to also add how, as we fund this together, we envision it as a commitment to shared standards. While we work together as part of this call, we are saying that real-world evaluation is not optional; it is the foundation. And by aligning, we are defining what good looks like, so that we reduce the burden on countries and developers who would otherwise face a patchwork of expectations. Secondly, by joining hands we are reducing fragmentation in a rapidly evolving field. And now that we are coordinated, we move away from the risk of duplication and toward the quality that we want to see in the applications and products.

And we make sure that the investments we make get into the real world, that they create an impact, because of the coordination that is part of this whole process. Sitting together also adds seriousness to the ecosystem: what we are doing is not a side experiment; this is infrastructure we are creating for the long term, something governments have been asking for. And the best part, as a researcher, I would say, is that researchers don’t have to navigate three different timelines and three different sets of criteria. There is just one agreed, aligned set of criteria.

No three different deadlines, no three different timelines. It makes life really easy as a researcher, I would say. I hope you get some interesting calls from it.

Sindura Ganapathi

So, if we have questions, is there a mic going around? I hope there is; if not, I’ll give you mine so I don’t have to answer your questions. Please direct your questions to anyone, including Vikalp, if you have them. Let’s start with the gentleman at the back, and then you’re up next.

Participant

Thank you, folks. Very interesting. My question is around data privacy and data privacy by design. The lady mentioned three different parameters. Could you elaborate more on how data privacy can be incorporated, at least at a policy level?

Sindura Ganapathi

Anyone want to take that question, at least in the context of this call, or in general? How are you handling this?

Vikalp Sahni

So I think health data is quite sensitive; rather, it is among the more sensitive kinds of data, whether it concerns a country, an individual, or even institutions such as the police or the military. So it’s a very valid question. One of the things that we as an organization try to follow is the general guidelines provided by the competent authorities, be it HIPAA for healthcare data or the DPDP Act, India’s data-privacy law. More importantly, data exchanges such as the NHA in India have also created clear guidelines. Following those guidelines, and getting yourself tested against them, is fundamentally important.

And it has become so sensitive that today a lot of our customers ask us whether we hold current, applicable certificates from these privacy authorities and against these privacy frameworks. So that’s how we solve for it, and I think it’s a good thing; in health, it is fundamentally critical. And as the technology grows, there are multiple other techniques as well, such as end-to-end encryption and so on, that we can use to keep things private.

Sindura Ganapathi

So there are two aspects to it: one is technological, and another is policy. There are other sessions entirely focused on people working on the policy side, so I wouldn’t put you on the spot to answer that. But on the technological front, Charlotte or Trevor, what are some of the approaches, such as model learning without data being exchanged, or synthetic data, so many aspects of which have been at the forefront? Charlotte, whatever you want to add.

Charlotte Watts

I just wanted to say, in terms of the evaluations that we want to support through this funding, we clearly expect anonymity, and for those evaluations to adhere to high-quality research standards: the kinds of bars, checks and controls you’d expect in any research study on health, and the ethical guidance and clearance procedures you need to adhere to. For us, that’s an important part of any research we support, including in this initiative, and that covers issues of privacy among other things.

Sindura Ganapathi

Do you want to say anything about emerging technologies that help preserve data privacy without giving up the innovative learnings and improvements of the models?

Trevor Mundel

Yeah, Sindura, you know, for us there’s no compromise on patient data privacy in clinical trials, as Charlotte has mentioned. But AI does raise a lot of other issues that go almost beyond that. For instance, there are the various models of federated learning that people have introduced, where you can keep data locally private but still contribute to the evolution of a model, which improves because it has access to a very diverse data source. Now, has that actually been regulated? We had an example of one of our grantees who produced a very good system for using ultrasound to diagnose certain chest diseases, and it was based on a federated contribution from different groups that kept their own data local and private but contributed to the model.

And, you know, that hasn’t really been tested, nor have all of the policies around whether that counts as a disclosure which is acceptable in the age of AI. I think it’s something that we want to encourage, with the right framework.
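The federated pattern Mundel describes, where each site keeps its raw data local and only model updates are shared and averaged, can be sketched in a few lines. This is an illustrative toy (federated averaging on a linear model), not the grantee's actual ultrasound system; all data and names here are hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's training on its private data; only weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def federated_average(global_w, sites, rounds=10):
    """FedAvg: each round, sites train locally and only the averaged
    parameters are pooled; raw patient data is never centralized."""
    w = global_w
    for _ in range(rounds):
        w = np.mean([local_update(w, X, y) for X, y in sites], axis=0)
    return w

# Three hypothetical sites, each holding its own private dataset.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=50)
    sites.append((X, y))

w = federated_average(np.zeros(2), sites)  # approaches true_w without pooling data
```

The design point is that only the parameter vector crosses site boundaries; whether that exchange itself counts as a disclosure is exactly the open regulatory question raised above.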

Sindura Ganapathi

Thank you. Do you have the mic? Okay. Then if you have another mic, you can take it to the gentleman, madam, and then after you.

Participant

My question is for Professor Watts. You mentioned clinical decision support. The context, from an Indian healthcare setting, as you’re well aware, is that the majority of our health care is run at the front line, so there’s also an element of operational decision support. We are working with Google on a set of geospatial AI models for geospatial inferencing in the tuberculosis space, mostly active case finding and diagnostic network optimization. From an evidence perspective, we are obviously doing some retrospective analysis, and we plan to follow it up with a prospective analysis, although it’s a single user.

So would this be of interest, and what is your level of inclination towards operational decision support? I’m a physician myself and a medical informaticist with a PhD, and I can tell you one thing for sure: the patients who come into the system are for the most part taken care of, but there are all the silent patients out there, undetected in the community. So what is your inclination in this research grant towards such solutions?

Charlotte Watts

It’s a wonderful question, because essentially I come from public health. Our interest, I think our collective interest, is in how we focus our evaluations and generate evidence where there’s the greatest opportunity to improve health and strengthen systems, and some of that might be outreach and improved care for the underserved. So we’re not going to say, you know, this fits and this doesn’t fit, but ultimately we are interested in how it integrates within the system. In the call we mention the importance of looking at interventions at the primary-care level, not only at tertiary care. What will resonate with our interest is really whether there are areas where the opportunity is big enough to merit an assessment: is this really translating into tangible health impacts, is the return actually affordable, and is it something that could be scaled? So that issue of how it connects with the system is an important part of the question that we’re interested in.

Trevor Mundel

I do think it’s a very important question, because you’re probably all aware of the funding constraints we now face in the global health space for some of the exciting new technologies that are coming along, whether at the level of the Global Fund or of Gavi, both of which have not quite met the standard we would like in their replenishments. So there’s simply a reduced amount of funding available for those critical commodities that could be life-changing. And when we get a TB vaccine, which we hope we might have in, say, three years, how are we going to afford to actually get it to the people who need it?

So it’s exactly the kind of risk targeting you’re talking about that can make all the difference: taking the lesser amount we can now afford and putting it where the need is greatest. And that matching, which the AI systems and the geospatial targeting you describe can provide, is exactly the solution we need to promote and understand.

Sindura Ganapathi

So, the person who has the mic, and then you can hand it to the next person after you ask your question.

Participant

It has been a great session. So how do we go about building AI agents that are not only intelligent but also reassuring in very high-anxiety environments like maternal and infant care? I would love to hear your thoughts, because we’re building something in the same space.

Sindura Ganapathi

When you say high anxiety, just so that.

Participant

High anxiety in maternal and infant care, because even I, as a new mother, feel there are a lot of open areas where the mother doesn’t know what to do. It’s an open field, and the support system of pediatricians, gynaecologists and mothers’ networks is very weak when you go down to tier-two and tier-three cities. How do we go about building that? I would love to get some thoughts.

Sindura Ganapathi

Take it.

Vikalp Sahni

So one of the things we have done while building a lot of these agentic pipelines for doctors and users is to keep a human in the loop during development, which is extremely important, and that’s what Trevor also mentioned, because where this can go and where it can lead is not something you can fully control today. So there are systems specifically designed so that anonymized, de-identified conversations are distilled to check whether the agents are working together in tandem. The second thing, which is more technical, is something we have figured out: the models are quite capable, but when you run them as a single agent with a single goal and a single prompt, that at times narrows down the whole worldview.

But if you run multiple agents collaborating, where there is a grounding agent whose job is to make sure the other agents do not go beyond the boundaries, I think that is fundamental in healthcare. A single agent with a single prompt is what we should avoid, because this is quite a deep workflow, especially in maternal health, where mental health comes into play. It is fundamentally important to follow good technical principles, creating a multi-agent architecture, but more importantly, to keep a human in the loop, because we, as a company, haven’t found a way to get around that.

That’s why we have a strong ten-member medical team, which is also growing, of doctors working with the technology.
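The grounding-agent pattern Sahni outlines, where one agent drafts a reply, a second agent checks it against hard boundaries, and anything outside them goes to a human, can be sketched roughly as follows. All names here are hypothetical stand-ins; a real system would use model-based checks and a clinical review queue rather than a keyword list.

```python
# A minimal sketch of a multi-agent pipeline with a grounding agent and a
# human in the loop. In practice, task_agent and grounding_agent would be
# LLM calls, and human_review would route to a medical team's queue.

FORBIDDEN = ("diagnose", "prescribe", "dosage")  # boundaries the agent must not cross alone

def task_agent(question: str) -> str:
    """Drafts a reply (placeholder for an LLM call)."""
    return f"General guidance for: {question}"

def grounding_agent(draft: str) -> bool:
    """Approves the draft only if it stays within the allowed boundaries."""
    return not any(term in draft.lower() for term in FORBIDDEN)

def human_review(draft: str) -> str:
    """Escalates boundary cases to a clinician (placeholder)."""
    return f"[escalated to clinician] {draft}"

def answer(question: str) -> str:
    draft = task_agent(question)
    return draft if grounding_agent(draft) else human_review(draft)
```

Here a question about, say, sleep routines passes straight through, while anything touching dosage is routed to the human reviewer, keeping the single-agent, single-prompt failure mode out of the critical path.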

Sindura Ganapathi

Thank you. Unfortunately, I have been told we are out of time, but the speakers will be available; please come up to them. And one very quick thing before we go: anything you want to share about what you would like to see next year when we come back to the AI Summit? I just heard that it is being hosted in Geneva, so we are all showing up there. We have all these aspirations. What would it look like when we show up there and say, okay, this year we did something together? Anything that comes to mind?

Trevor Mundel

You know, I’d love to see the next iteration of Vikalp’s patient-facing agent: an agent that could guide you along your health pathway, would be completely transparent, and whose decisions I would actually understand. I would have 100% confidence that in that anxiety-provoking situation it never made an error related to guidance or drug contraindications; it was always correct in those things, and I wouldn’t have to be concerned about that. That’s the next iteration I’d love to see next year.

Sindura Ganapathi

Next year, maybe.

Charlotte Watts

And what I would like next year, instead of all of us funders sitting up here, is to see some of the partners we’re funding, who are doing the work to really understand what this looks like operationally, and to have really honest conversations about what’s working and what’s not, so that we move away from the hype and actually start to get into the nitty-gritty of what this could and can be.

Richard Rukwata

Okay, so quickly: I would like to see a situation where there’s more collaboration between industry and regulators, because ultimately we’re on the same side. We want the same thing: better-quality, safe and effective medicines for all our people. So development in that area would be very exciting.

Sindura Ganapathi

Final word to you.

Monika Sharma

I think I would still love to see that, no matter how much evidence we generate from AI, no matter what we do, we still have that last word from the doctor sitting there, and that we never forget the human angle while we navigate the AI space. That’s what I always want. Thank you so much.

Sindura Ganapathi

Yes, thank you so much. Next time we meet, I hope we all feel as optimistic as we do now, and then some. Thank you so much for attending, and thank you, speakers. We have a souvenir for you from the India side for the session. Thank you so much.

Related Resources: Knowledge base sources related to the discussion topics (36)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“She first creates an ABHA (Ayushman Bharat Health Account) digital identity”

The knowledge base confirms that ABHA is the digital health identity issued by the Indian government as part of the Ayushman Bharat Digital Mission, with hundreds of millions of IDs created [S7] and described as the government-provided digital identity for health records [S10] and [S104].

Confirmed (high)

“Sahni acknowledged major hurdles to large‑scale deployment: supporting dozens of Indian languages, ensuring model verifiability, and defining who will evaluate performance at scale”

Sahni’s identified challenges match the technical issues highlighted in the knowledge base, which cites the need to build AI systems that work across multiple Indian languages, generate verifiable data, and determine who evaluates AI capabilities in healthcare [S1].

Additional Context (medium)

“ABHA is the digital identity that India government provides”

Additional context from the knowledge base explains that ABHA IDs are linked to a federated health record architecture and are a core component of the Ayushman Bharat Digital Mission, enabling health records to move across providers [S42] and [S104].

External Sources (107)
S1
Transforming Health Systems with AI From Lab to Last Mile — -Charlotte Watts: Executive Director of Solutions at Wellcome Trust, extensive career in healthcare, HIV, gender-based v…
S2
The Power of Satellites in Emergency Alerting and Protecting Lives — Alexandre Vallet: Thank you very much Dr. Zavazava. Thank you very much both of you for this introductory remark. I will…
S3
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S4
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S5
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — – **Participant**: Role/Title not specified, Area of expertise not specified
S6
Transforming Health Systems with AI From Lab to Last Mile — -Monika Sharma: Dr. Monica Sharma, Lead for No One Artists India Foundation, background in biomedical field and science …
S8
Transforming Health Systems with AI From Lab to Last Mile — – Vikalp Sahni- Richard Rukwata
S9
Transforming Health Systems with AI From Lab to Last Mile — -Richard Rukwata: Dr. Richard Rukwata, Director General of Medicines Control Authority of Zimbabwe, Chief Regulator, inv…
S10
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — I will invite my. Panelists one by one, please join us in the on the stage. First, Dr. Richard Rukwata. He is the chief …
S11
Transforming Health Systems with AI From Lab to Last Mile — -Sindura Ganapathi: Conference moderator/host, has veterinary background, works with regulatory agencies, was involved i…
S12
Transforming Health Systems with AI From Lab to Last Mile — -Trevor Mundel: Dr. Dr. Trevor Mundel (medical degree and Ph.D. in mathematics), Rhodes Scholar, extensive experience in…
S13
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — And welcome. And… And her background is also in this both biomedical field, science innovation field, but also has ext…
S14
Keynote-Vishal Sikka — “Bridging that gap requires delivering correct systems, trusted, verifiable, reliable systems that deliver value to peop…
S15
Safe and Responsible AI at Scale Practical Pathways — Ashish Srivastava brought a practitioner’s perspective, highlighting three critical challenges: data interoperability ac…
S16
Harnessing Collective AI for India’s Social and Economic Development — <strong>Moderator:</strong> sci -fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S17
Beyond North: Effects of weakening encryption policies | IGF 2023 WS #516 — Importantly, WhatsApp collaborates with other companies and civil society groups to resist encryption regulations. This …
S18
WS #241 Balancing Acts 2.0: Can Encryption and Safety Co-Exist? — Audience: No problem. My name is Vinicius Fortuna and I work on internet access resilience and privacy at Jigsaw and tha…
S19
The AI Pareto Paradox: More computing power – diminishing AI impact?  — To break through this plateau, we have to reverse the ratio. The real breakthroughs, the 80% of successes that actually …
S20
Building Population-Scale Digital Public Infrastructure for AI — Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathwa…
S21
Keynote-Roy Jakobs — “Innovation and governance must advance together With speed Because trust determines adoption … If they move at differ…
S22
The Foundation of AI Democratizing Compute Data Infrastructure — Federated learning approach that allows data contribution to global models while maintaining local ownership and control
S23
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — He cautioned against techno-solutionist approaches: “If we throw resources at AI, we can fix the healthcare system. So w…
S24
Conversational AI in low income &amp; resource settings | IGF 2023 — AI technologies can bridge the digital divide in healthcare. Existing care solutions have the potential to become global…
S25
https://dig.watch/event/india-ai-impact-summit-2026/how-the-global-south-is-accelerating-ai-adoption_-finance-sector-insights — And I think that’s true in the short term when the ecosystem is getting prepared. But in longer term, frauds and mis -se…
S26
Panel Discussion AI in Healthcare India AI Impact Summit — “One of the big barriers is multilingual.”[1]. “Maybe use cases, and I briefly hit on this before, but I think certainly…
S27
Cracking the Code of Digital Health / DAVOS 2025 — Key points included the need for better data liquidity and interoperability to fully leverage AI’s potential in healthca…
S28
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and em…
S29
How Trust and Safety Drive Innovation and Sustainable Growth — And an organization like the ICO is there for both sides to see, well, there’s someone actually overseeing that. And tha…
S30
Policymaker’s Guide to International AI Safety Coordination — OECD Secretary General Mathias Cormann emphasized that trust is built through inclusion and objective evidence. He ident…
S31
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And how do we demonstrate that the risks have been managed well? And that is where the assurance ecosystem that Rebecca …
S32
Overview of AI policy in 15 jurisdictions — Summary China remains a global leader in AI, driven by significant state investment, a vast tech ecosystem and abundant …
S33
Global Digital Governance & Multistakeholder Cooperation for WSIS+20 — Ebert calls for creating transparent governance rules that can keep pace with rapid AI development while ensuring benefi…
S34
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S35
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Development | Infrastructure Examples include tumor board preparation, holistic patient data aggregation, post-discharg…
S36
AI for Good Innovation Factory Grand Finale 2025 — – **Accessibility and Affordability Criteria**: Judges consistently emphasized the importance of solutions being deploya…
S37
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — A human rights-based approach with community solutions is advocated AI policies in Africa should ideally espouse a cont…
S38
Foster AI accessibility for building inclusive knowledge Societies: a multi-stakeholder reflection on WSIS+20 review — 5. Information accessibility, endeavouring to ensure the availability, affordability, and accessibility of information t…
S39
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Evidence-Based Policymaking: Mechanisms and Challenges ## Industry Perspectives: Systems Integration Challenges ## …
S40
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatory released a beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S41
Why science matters in global AI governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S42
MedTech and AI Innovations in Public Health Systems — Does it actually save cost, as sir was mentioning. And the third element of institutionalization, sir, is also the use c…
S43
Obama’s 2013 Inaugural: a doctor’s diagnosis — In the section about Health, the construction resonates twice over as: ‘these things [Medicare, Medicaid, Social Securit…
S44
A Guide for Practitioners — – What are the current macroeconomic, political and social environments, and how do they relate to health? A thoro…
S45
Global AI Policy Framework: International Cooperation and Historical Perspectives — Despite coming from different backgrounds (diplomatic/legal vs academic), both speakers advocate for patience and carefu…
S46
AI could save billions but healthcare adoption is slow — AI is being hailed as a transformative force in healthcare, with the potential to reduce costs and improve outcomes dramati…
S47
The mismatch between public fear of AI and its measured impact — In medicine and science, AI has shown promise in pattern recognition and data analysis. Deployment is cautious, as clinic…
S48
Catalyzing Global Investment in AI for Health: WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S49
Transforming Health Systems with AI From Lab to Last Mile — A central tension throughout the discussion involved balancing the urgency of addressing healthcare challenges with the …
S50
India’s AI Leap Policy to Practice with AIP2 — This unexpected disagreement emerges around the pace of AI deployment. Fred emphasizes the dual nature of AI and the nee…
S51
Building Population-Scale Digital Public Infrastructure for AI — Balancing speed of diffusion with safety, especially in health applications
S52
AI in healthcare gains regulatory compass from UK experts — Professor Alastair Denniston has outlined the core principles for regulating AI in healthcare, describing AI as the ‘X-ray…
S53
Safe and Responsible AI at Scale Practical Pathways — Guardrails, Human‑in‑the‑Loop, and Risk‑Assessment Mechanisms Are Essential for Reliable Deployment
S54
Leveraging AI4All: Pathways to Inclusion — “First, access is a multi -layered problem”[16]. “Good technology by itself does not bring in or include people”[18]. “T…
S55
WS #460 Building Digital Policy for Sustainable E-Waste Management — The discussion identified several technological applications: AI for predictive analytics, IoT for real-time tracking, a…
S56
Secure Talk Using AI to Protect Global Communications & Privacy — This story brought a visceral reality to the discussion, moving beyond abstract statistics to show the personal and inst…
S57
WS #283 AI Agents: Ensuring Responsible Deployment — Will Carter: Quite a lot of thought. This has been core to our mission at Google from the beginning, from our earliest d…
S58
Agentic AI in Focus Opportunities Risks and Governance — “And of course, humans have to have full oversight end -to -end.”[64]. “And we want these agentic payments to be safe an…
S59
Diplomatic policy analysis — Overreliance on technology: While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S60
AI agent autonomy rises as users gain trust in Anthropic’s Claude Code — A new study from Anthropic offers an early picture of how people allow AI agents to work independently in real conditions….
S61
Transforming Health Systems with AI From Lab to Last Mile — The first challenge addressed was the fragmentation of healthcare information and delivery systems. Traditional healthca…
S62
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — All of us here, we would have visited doctors at some point in time or have been sick. Anyone who has never visited a do…
S63
AI for Bharat’s Health: Addressing a Billion Clinical Realities — But I think today it’s affecting our tasks. It’s affecting tasks of efficiency. You know, we’ve already started doing pr…
S64
Conversational AI in low income & resource settings | IGF 2023 — Furthermore, the CEO asserts that trust can be bolstered in healthcare through the implementation of AI solutions. For i…
S65
Cracking the Code of Digital Health / DAVOS 2025 — Key points included the need for better data liquidity and interoperability to fully leverage AI’s potential in healthca…
S66
Safe and Responsible AI at Scale Practical Pathways — The panel revealed that making data AI-ready is fundamentally a governance challenge rather than merely technical. The a…
S67
Laying the foundations for AI governance — The panel showed relatively low levels of direct disagreement, with most speakers identifying similar obstacles (time, u…
S68
Panel Discussion AI in Healthcare India AI Impact Summit — One of the big barriers is multilingual. So. So you can’t use a model that’s good in English, but it’s not good in other…
S69
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and em…
S70
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — – **Balancing Safety and Innovation**: A central theme was dispelling the notion that safety and innovation are incompat…
S71
Policymaker’s Guide to International AI Safety Coordination — OECD Secretary General Mathias Cormann emphasized that trust is built through inclusion and objective evidence. He ident…
S72
Multistakeholder Partnerships for Thriving AI Ecosystems — We’re also joined by Nakul Jain, who’s the CEO and managing director of Wadwani AI Global. Nakul is a mission -driven te…
S73
Catalyzing Global Investment in AI for Health: WHO Strategic Roundtable — He argues that AI should augment clinicians while keeping humans central to decision‑making, acknowledging the difficult…
S74
Ensuring Safe AI: Monitoring Agents to Bridge the Global Assurance Gap — And as several of our panelists emphasized, if we don’t address that gap deliberately, the shift towards AI agents is on…
S75
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Governments have collectively affirmed the importance of building trust by governing AI based on human rights, and that …
S76
What is it about AI that we need to regulate? — Multiple sessions identified the need to strengthen the IGF Secretariat and institutional capacity. The decision-making w…
S77
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Audience:Good afternoon. Good morning. My name is Paola Galvez. I’m Peruvian, right now based in Paris. I just finished …
S78
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — The discussion maintained an optimistic and collaborative tone throughout, with speakers consistently emphasizing human …
S79
Driving India’s AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S80
AI Transformation in Practice: Insights from India’s Consulting Leaders — The tone was pragmatically optimistic and refreshingly candid. Both speakers were honest about challenges and uncertaint…
S81
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S82
Fireside chat with Dr Matthew Meselson — The tone was largely conversational and reflective, with Meselson recounting personal anecdotes and experiences in a war…
S83
Dynamic Coalition Collaborative Session — The discussion began with an optimistic, collaborative tone as panelists shared their expertise and perspectives. Howeve…
S84
How AI Is Transforming Diplomacy and Conflict Management — The discussion maintained a consistently thoughtful and cautiously optimistic tone throughout. Participants demonstrated…
S85
Building Inclusive Societies with AI — -Collaborative spirit: All panelists demonstrated willingness to work together across sectors The tone remained consist…
S86
Main Topic 2 –  GovTech Dynamics: Navigating Innovation and Challenges in Public Services — Attendees were allotted a 15-minute interlude, ensuring a structured pause within the schedule. In summation, the event …
S87
Revamping Decision-Making in Digital Governance and the WSIS Framework — The discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s poi…
S88
High Level Session 3: AI & the Future of Work — The discussion maintained a cautiously optimistic tone throughout, with speakers acknowledging both the tremendous poten…
S89
Delegated decisions, amplified risks: Charting a secure future for agentic AI — The tone was consistently critical and cautionary throughout, with Whittaker maintaining a technically informed but acce…
S90
Main Topic 2: Neurotechnology and privacy: Navigating human rights and regulatory challenges in the age of neural data — The discussion maintained a serious, academic tone throughout, with speakers expressing both fascination with the techno…
S91
Can National Security Keep Up with AI? / Davos 2025 — The overall tone was serious and analytical, with panelists offering measured perspectives on complex issues. There were…
S92
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — The tone of the discussion was generally optimistic and forward-looking, with speakers emphasizing the need for urgent a…
S93
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S94
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S95
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S96
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S97
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-healthcare-india-ai-impact-summit — No, I think that’s true. So we have been talking to medical device companies who are now targeting new age diagnostic to…
S98
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-bharats-health_-addressing-a-billion-clinical-realities — And especially the whole vision of making India a developed country, we have to leapfrog. And many of these technologies…
S99
Current Developments in DNS Privacy | IGF 2023 — This created a fragmented system depending upon the registry or registrar involved and introduced a number of key issues…
S100
Artificial intelligence (AI) – UN Security Council — In conclusion, while AI-powered content moderation offers significant benefits, it is essential to recognize and address…
S101
WS #231 Address Digital Funding Gaps in the Developing World — ### Neeti Biyani – APNIC Foundation (Session Moderator) – **Neeti Biyani** – Works with the APNIC Foundation, Session m…
S102
Day 0 Event #83 Empowering Afghan Women: Bridging Digital Gaps for Education — – Neeti Biyani: Senior advisor of strategy and development with the APNIC Foundation Amrita Choudhury: I’ll try to ans…
S103
29, filed Jan. 22, 2010, at 9-10. — CCHT led to a 25% reduction in the number of bed days of care and a 19% drop in hospital admissions. At $1,600 per patie…
S104
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — The Ayushman Bharat Health Account number (ABHA number) is being rolled out.
S105
AI as a companion in our most human moments — A few months ago, I met someone whose story stayed with me. A friend of my cousin had recently received a cancer diagnos…
S106
Contact — data from the system and give it to a specified person.
S107
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — And at the back end, we will, based on the consent, access the details of where the farmer is from, what is the crop bei…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Vikalp Sahni
5 arguments · 140 words per minute · 1769 words · 753 seconds
Argument 1
Solves fragmentation of information and care delivery, from booking an appointment to capturing vitals.
EXPLANATION
Vikalp describes an AI‑driven platform that aggregates patient data from multiple sources, summarises health records and automates appointment booking and vital capture, thereby eliminating fragmented information flow. The end‑to‑end solution streamlines the patient journey from registration to clinical encounter.
EVIDENCE
He outlines the three key challenges, including fragmentation, and then walks through the Neeti story, in which the AI collects her ABHA-linked records and photographs, summarises her health, and automatically schedules appointments, showing how fragmented steps are removed [6-8], [13-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion highlights how AI aggregates fragmented patient data, automates appointment booking and vital capture, eliminating disjointed steps [S1] and demonstrates the use of ABHA-linked records to create a unified health view [S10].
MAJOR DISCUSSION POINT
Fragmentation of health information
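To make the aggregation step concrete, here is a minimal sketch of merging fragmented records from several providers into one chronologically ordered PHR timeline. The sources, field names and dates are invented for illustration; the platform’s actual schema is not public.

```python
# Hypothetical sketch: flatten record lists from multiple providers into
# a single timeline, ordered by visit date. Fields are illustrative.
from datetime import date

def merge_records(*sources):
    """Combine record lists from many providers and sort by date."""
    merged = [rec for recs in sources for rec in recs]
    merged.sort(key=lambda rec: rec["date"])
    return merged

clinic = [{"date": date(2025, 3, 1), "source": "clinic", "kind": "consultation"}]
lab = [{"date": date(2025, 2, 20), "source": "lab", "kind": "blood panel"}]
pharmacy = [{"date": date(2025, 3, 2), "source": "pharmacy", "kind": "dispense"}]

phr = merge_records(clinic, lab, pharmacy)
print([rec["source"] for rec in phr])  # → ['lab', 'clinic', 'pharmacy']
```

A real aggregator would also need to deduplicate records and reconcile conflicting entries, but the unification-by-timeline idea is the same.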
Argument 2
Provides real‑time safety checks such as drug‑allergy alerts, multilingual prescription generation and automatic sync with the patient’s PHR.
EXPLANATION
The platform uses AI during the consultation to instantly flag contraindications, generate prescriptions in the patient’s language and push the completed record to the personal health‑record app, ensuring safety and accessibility for both clinician and patient. This real‑time feedback reduces medical errors and improves patient understanding.
EVIDENCE
During Neeti’s visit, the AI alerts the doctor to an amoxicillin allergy, suggests clindamycin, creates a translated PDF of the prescription and synchronises it to her PHR app, demonstrating safety alerts, multilingual output and automatic data sync [59-66], [68-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Real-time safety alerts, multilingual prescription PDFs and automatic sync to personal health records are described as core features of the platform [S1] and further illustrated through the Neeti case study [S10].
MAJOR DISCUSSION POINT
Real‑time safety and multilingual support
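The allergy alert from the Neeti case can be sketched as a simple rules check. The cross-reactivity table and alternative list below are illustrative assumptions; the platform’s actual clinical rules engine is not public.

```python
# Hypothetical drug-allergy alert: flag a prescription if the drug, or a
# cross-reactive relative, appears in the patient's recorded allergies.
# The cross-reactivity mapping here is an assumed, illustrative table.
CROSS_REACTIVE = {
    "amoxicillin": {"penicillin", "ampicillin"},  # assumed penicillin class
}

def check_prescription(drug, allergies, alternatives):
    """Return (ok, message) for a proposed prescription."""
    risky = {drug} | CROSS_REACTIVE.get(drug, set())
    hits = risky & set(allergies)
    if hits:
        return False, (f"ALERT: {drug} conflicts with allergy to "
                       f"{sorted(hits)[0]}; consider {alternatives[0]}")
    return True, f"{drug} cleared against recorded allergies"

ok, msg = check_prescription("amoxicillin", ["amoxicillin"], ["clindamycin"])
print(ok, msg)  # ok is False; the message suggests clindamycin
```

In practice such checks would run against a maintained drug-interaction database rather than a hand-written table, with the clinician making the final call.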
Argument 3
Highlights challenges of scaling the solution across multiple languages and ensuring model verifiability at large scale.
EXPLANATION
Vikalp acknowledges that extending the AI system to many regional languages and validating models on massive datasets are major technical and operational hurdles. He calls for robust data generation and verification processes to maintain reliability as the platform grows.
EVIDENCE
He explicitly mentions the difficulty of building at scale for multiple languages, generating data for model verification, and evaluating capabilities at large scale [73-76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vikalp explicitly mentions the difficulty of multilingual scaling and the need for verifiable large-scale model training [S1]; a related overview of his technical challenges appears in a dedicated AI-for-Bharat briefing [S7].
MAJOR DISCUSSION POINT
Scaling and verification challenges
AGREED WITH
Charlotte Watts, Trevor Mundel
Argument 4
Development must keep clinicians in the loop, use multi‑agent architectures with a grounding agent, and rely on a dedicated medical team for oversight.
EXPLANATION
Vikalp stresses that AI agents should operate under human supervision, employing several cooperating agents plus a grounding agent that enforces safety boundaries, while a medical team reviews outputs. This design reduces the risk of autonomous errors in healthcare.
EVIDENCE
He describes a pipeline where human-in-the-loop oversight, multi-agent collaboration, a grounding agent, and a ten-member medical team ensure safe development and deployment [336-345].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-in-the-loop design, multi-agent collaboration and a grounding agent are outlined as safety mechanisms, with a medical team providing oversight [S1]; the importance of human-centred oversight and data verification is reinforced in a responsible-AI perspective piece [S15].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop design
AGREED WITH
Trevor Mundel
DISAGREED WITH
Trevor Mundel
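The multi-agent pattern Vikalp describes can be sketched as follows: a drafting agent proposes output, a grounding agent verifies each claim against the source record, and anything unverifiable is escalated to human reviewers. The agent internals below are stand-ins; the production architecture is not public.

```python
# Hedged sketch of a grounding-agent pipeline: unsupported claims are
# separated out for human (medical-team) review rather than emitted.
def drafting_agent(note):
    # Stand-in for an LLM that drafts discrete claims from a raw note.
    return [claim.strip() for claim in note.split(".") if claim.strip()]

def grounding_agent(claims, source_facts):
    # Keep only claims literally supported by the source record.
    grounded = [c for c in claims if c in source_facts]
    escalate = [c for c in claims if c not in source_facts]
    return grounded, escalate

note = ("Patient reports fever. Patient is allergic to amoxicillin. "
        "Patient owns a yacht.")
facts = {"Patient reports fever", "Patient is allergic to amoxicillin"}

grounded, escalate = grounding_agent(drafting_agent(note), facts)
print(grounded)  # the two supported claims pass through
print(escalate)  # ['Patient owns a yacht'] → routed to the medical team
```

A real grounding agent would match semantically rather than by exact string, but the division of labour (draft, verify, escalate) is the point.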
Argument 5
Commits to complying with HIPAA, India’s DPDP Act and NHA guidelines; pursues certifications and end‑to‑end encryption to protect health data.
EXPLANATION
Vikalp outlines the organization’s adherence to established privacy regulations, obtaining relevant certifications, and employing technical safeguards such as encryption to ensure data confidentiality. These measures aim to meet legal and ethical standards for health information.
EVIDENCE
He references HIPAA, India’s DPDP Act, NHA guidelines, customer demands for privacy certifications, and the use of end-to-end encryption as core privacy controls [271-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The platform’s privacy strategy references HIPAA, India’s DPDP Act, NHA guidelines and end-to-end encryption as core controls [S1]; broader discussions on encryption standards and policy implications are provided in encryption-focused analyses [S17][S18].
MAJOR DISCUSSION POINT
Data privacy compliance
AGREED WITH
Charlotte Watts, Trevor Mundel, Participant
DISAGREED WITH
Trevor Mundel, Charlotte Watts, Participant
Trevor Mundel
5 arguments · 169 words per minute · 981 words · 346 seconds
Argument 1
Technology accounts for only ~10% of AI success; the rest is people, workflows and ecosystem design.
EXPLANATION
Trevor argues that technological capability is a small fraction of AI impact; successful health AI requires supportive people, processes and ecosystem alignment. He warns against focusing solely on technology without addressing human factors.
EVIDENCE
He states that AI technology is only about ten percent of the effort and that the remainder depends on people, ecosystems, and workflow design, noting the tendency to revert to tech talk [137-141].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Pareto paradox emphasizes that 80% of impact comes from people, processes and institutional knowledge, echoing the 10% technology claim [S19]; the panel also notes the gap between tech hype and human-centred implementation [S1].
MAJOR DISCUSSION POINT
Ecosystem over technology
AGREED WITH
Vikalp Sahni
Argument 2
Warns that rapid deployment without thorough evaluation can backfire; a reflective, slower approach may ultimately accelerate trustworthy adoption.
EXPLANATION
Trevor cautions that hasty AI roll‑outs risk errors that can damage trust and slow progress; a measured, reflective pace can lead to faster, reliable adoption. He draws parallels with self‑driving car incidents to illustrate the risk.
EVIDENCE
He discusses the need for reflection, the danger of a single fatal accident derailing an entire enterprise, and argues that a slower, thoughtful approach can ultimately speed trustworthy deployment [190-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker cautions against premature roll-outs and advocates a reflective pace to preserve trust, a view echoed in the panel’s discussion of speed versus safety [S1] and in a keynote on aligning innovation with governance speed [S21].
MAJOR DISCUSSION POINT
Need for cautious deployment
AGREED WITH
Charlotte Watts, Vikalp Sahni
DISAGREED WITH
Richard Rukwata, Charlotte Watts
Argument 3
Highlights federated learning as a promising technique that keeps raw data local while still improving models, but notes the regulatory uncertainty around it.
EXPLANATION
Trevor describes federated learning where institutions keep data on‑site yet contribute to a shared model, offering privacy benefits. However, he points out the lack of clear regulatory frameworks governing such approaches.
EVIDENCE
He explains federated learning, gives an example of an ultrasound diagnostic system built on federated contributions, and notes that regulatory guidance for such models is still lacking [294-301].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Federated learning is presented as a privacy-preserving model-training approach, with ongoing regulatory ambiguity highlighted in a dedicated federated-learning overview [S22]; the broader panel also mentions regulatory gaps for such techniques [S1].
MAJOR DISCUSSION POINT
Federated learning and regulation
DISAGREED WITH
Vikalp Sahni, Charlotte Watts, Participant
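Federated learning, as Trevor describes it, has a simple core: each site trains on its own data and shares only model parameters, which a coordinator averages. Below is a toy federated-averaging (FedAvg-style) round with invented numbers; it is not drawn from the cited ultrasound grant.

```python
# Illustrative federated-averaging round: raw records never leave a
# site; only locally updated weights are shared and combined.
def local_update(weights, local_gradient, lr=0.1):
    # One gradient step computed entirely on-site.
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(site_weights, site_sizes):
    # Weight each site's model by its (reported) dataset size.
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(dim)
    ]

global_model = [0.0, 0.0]
site_a = local_update(global_model, [1.0, -1.0])  # site A's local gradient
site_b = local_update(global_model, [3.0, 1.0])   # site B's local gradient
new_global = federated_average([site_a, site_b], site_sizes=[100, 300])
print(new_global)  # ≈ [-0.25, -0.05], pulled toward the larger site
```

Size-weighted averaging is the standard FedAvg choice; the regulatory ambiguity Trevor notes arises because the shared weights, not the data, cross institutional boundaries.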
Argument 4
Argues that targeted AI‑driven risk‑allocation is essential given limited global‑health funding, and that such tools can maximise the reach of scarce interventions.
EXPLANATION
Trevor emphasizes that AI can help prioritize limited resources, such as targeting TB vaccine deployment, by identifying high‑need populations, thereby improving cost‑effectiveness. Strategic targeting is crucial under funding constraints.
EVIDENCE
He discusses limited global-health funding, the need for risk-targeting, and how AI-driven geospatial targeting can help allocate scarce interventions like a future TB vaccine [318-322].
MAJOR DISCUSSION POINT
AI for resource targeting
DISAGREED WITH
Charlotte Watts
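The risk-targeting idea can be made concrete with a toy allocation: given model-estimated incidence per district, a limited vaccine stock is assigned to the highest-burden areas first. Districts, rates and populations below are invented for illustration.

```python
# Toy AI-driven risk targeting: greedily cover districts in descending
# predicted incidence until the scarce stock runs out.
def allocate_doses(district_risk, population, stock):
    """Return a {district: doses} plan under a fixed stock limit."""
    plan = {}
    for district in sorted(district_risk, key=district_risk.get, reverse=True):
        if stock <= 0:
            break
        doses = min(population[district], stock)
        plan[district] = doses
        stock -= doses
    return plan

risk = {"north": 0.08, "east": 0.21, "south": 0.05}  # predicted incidence
pop = {"north": 500, "east": 400, "south": 800}
print(allocate_doses(risk, pop, stock=600))  # → {'east': 400, 'north': 200}
```

Real targeting models would optimise expected cases averted rather than rank-order risk, but the principle is the same: scarce interventions go where predicted need is greatest.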
Argument 5
Envisions next‑generation patient‑facing agents that explain their reasoning, avoid contraindication errors and inspire full confidence among users and clinicians.
EXPLANATION
Trevor imagines future AI agents that are fully transparent, providing explanations for every decision, guaranteeing safety (no drug‑contraindication errors), and earning 100% trust in high‑anxiety scenarios such as maternal health.
EVIDENCE
He describes a desired patient-facing agent that is completely transparent, never makes contraindication errors, and gives users and clinicians total confidence in stressful situations [354-359].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The vision of transparent, error-free agents aligns with discussions of intelligent agents that reduce fraud and guarantee safety [S25] and with calls for trusted, verifiable AI layers that deliver value while ensuring safety [S14].
MAJOR DISCUSSION POINT
Transparent patient‑facing agents
DISAGREED WITH
Vikalp Sahni
Charlotte Watts
3 arguments · 189 words per minute · 1321 words · 417 seconds
Argument 1
Emphasises the massive evidence gap: need for rigorous real‑world evaluations, randomized trials, cost‑effectiveness analyses, especially in low‑ and middle‑income countries.
EXPLANATION
Charlotte points out that while many AI health pilots exist, few have robust randomized controlled trial evidence, particularly in LMICs. She calls for systematic evaluation of impact, cost, and scalability to inform policy and funding decisions.
EVIDENCE
She outlines the paucity of RCTs, the need for real-world evidence, cost-effectiveness studies, and mentions partners such as APHRC and Jay Powell supporting implementation in Africa [213-232].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel stresses the paucity of RCTs and the need for robust real-world evidence, particularly in LMICs, and notes a funding call that will support such evaluations [S1][S22].
MAJOR DISCUSSION POINT
Evidence gap in AI health
AGREED WITH
Trevor Mundel, Vikalp Sahni
DISAGREED WITH
Trevor Mundel
Argument 2
Stresses that funded research must guarantee participant anonymity, ethical clearance and strict privacy safeguards.
EXPLANATION
Charlotte notes that any funded AI evaluation must adhere to high ethical standards, including anonymisation of data, obtaining ethics approvals, and implementing privacy protections, aligning with best research practices.
EVIDENCE
She emphasizes expectations for anonymity, ethical clearance, and privacy safeguards as essential components of any research study she supports [287-292].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Requirements for anonymisation, ethics approval and privacy protections are highlighted in responsible-AI guidelines and data-governance best practices [S15] as well as in the broader discussion of ethical implementation [S1].
MAJOR DISCUSSION POINT
Ethical and privacy standards in research
AGREED WITH
Vikalp Sahni, Trevor Mundel, Participant
DISAGREED WITH
Vikalp Sahni, Trevor Mundel, Participant
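One concrete anonymisation safeguard consistent with the standards Charlotte describes is keyed pseudonymisation: direct identifiers are replaced with HMAC-derived tokens so evaluators can link a participant’s records without learning who they are. The key handling below is deliberately simplified; a real study would keep the key in a secrets manager, separate from the dataset.

```python
# Sketch of pseudonymisation for research datasets: a keyed HMAC maps
# each identifier to a stable, non-reversible token for record linkage.
import hashlib
import hmac

SECRET_KEY = b"study-specific-key-kept-off-dataset"  # assumed key handling

def pseudonymise(patient_id: str) -> str:
    """Deterministic, non-reversible pseudonym for record linkage."""
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]

record = {"patient_id": "ABHA-1234-5678", "diagnosis": "dental abscess"}
safe_record = {"pid": pseudonymise(record["patient_id"]),
               "diagnosis": record["diagnosis"]}

assert "ABHA" not in str(safe_record)                        # identifier gone
assert pseudonymise("ABHA-1234-5678") == safe_record["pid"]  # still linkable
```

Because the mapping is keyed, someone without the study key cannot recompute a pseudonym from a known identifier, unlike a plain hash.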
Argument 3
Indicates interest in AI that supports frontline primary‑care decisions, integrates with existing health‑system bureaucracy, and demonstrates affordable, scalable impact.
EXPLANATION
Charlotte expresses interest in AI tools that operate at the primary‑care level, fit within existing health‑system processes, and are cost‑effective and scalable, especially for underserved populations. She wants to see how such interventions can be operationalised within bureaucratic health systems.
EVIDENCE
She discusses focusing on primary-care integration, affordability, system integration, and tangible health impact as key evaluation criteria [316-326].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Conversational AI for low-resource primary-care settings and its potential for scalable impact are discussed in a low-income-setting AI briefing [S24]; integration with health-system workflows is also mentioned in the panel’s overview of system-level AI adoption [S1].
MAJOR DISCUSSION POINT
Frontline AI decision support
Richard Rukwata
2 arguments · 143 words per minute · 445 words · 186 seconds
Argument 1
Regulators face dual pressure: accelerate innovation while remaining accountable for safety; AI can help create neutral, faster‑review applications.
EXPLANATION
Richard describes the regulator’s dilemma of needing speedy approvals while ensuring patient safety, and suggests AI can produce neutral applications that speed up review without compromising accountability.
EVIDENCE
He outlines the two extremes, speed versus accountability, and notes that AI can generate neutral, faster-review applications to ease the regulator’s burden [153-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between speed and accountability for regulators and the role of AI in streamlining review processes are examined in a keynote on innovation-governance alignment [S21] and reiterated in the panel’s discussion of regulator dilemmas [S1].
MAJOR DISCUSSION POINT
Balancing speed and safety in regulation
AGREED WITH
Sindura Ganapathi, Monika Sharma
DISAGREED WITH
Trevor Mundel, Charlotte Watts
Argument 2
Collaboration with funders (e.g., Gates Foundation) is underway to pilot AI‑driven screening tools for marketing authorisations.
EXPLANATION
Richard notes a partnership with the Gates Foundation to develop AI tools that screen marketing authorisation applications, aiming to reduce delays and improve efficiency in the regulatory process.
EVIDENCE
He mentions working with the Gates Foundation on an AI-driven screening application for marketing authorisations, highlighting industry-regulator collaboration [170-175].
MAJOR DISCUSSION POINT
Funders‑regulator AI collaboration
Monika Sharma
1 argument · 166 words per minute · 629 words · 227 seconds
Argument 1
Proposes a joint funding framework with shared standards to avoid fragmented expectations, reduce duplication and ensure that AI projects deliver measurable health impact.
EXPLANATION
Monika advocates for coordinated funding with common standards, reducing patchwork requirements and duplication, while aligning evaluation criteria so AI interventions produce real‑world impact efficiently.
EVIDENCE
She describes shared standards, reduced fragmentation, the avoidance of duplication, and the need to ensure that investments translate into measurable health outcomes [240-256].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for coordinated funding, common standards and avoidance of fragmented expectations are echoed in a policy-research roadmap critiquing techno-solutionist fragmentation [S23] and in the panel’s emphasis on unified funding mechanisms [S1].
MAJOR DISCUSSION POINT
Coordinated funding and standards
Sindura Ganapathi
2 arguments · 86 words per minute · 1852 words · 1280 seconds
Argument 1
Panel moderation stresses the need for interactive, participant‑driven discussions rather than static presentations.
EXPLANATION
Sindura emphasizes making panels more engaging by encouraging audience interaction, questioning the boring nature of traditional panels, and inviting participants to share their thoughts directly.
EVIDENCE
She asks the audience to share, remarks that panels are boring, and explicitly invites interactive participation throughout the session [135-136], [202-210].
MAJOR DISCUSSION POINT
Interactive panel format
Argument 2
Calls for closer industry‑regulator collaboration to turn regulators from perceived bottlenecks into partners for safe, effective medicines.
EXPLANATION
Sindura urges stronger partnership between industry and regulators, highlighting shared goals of quality, safety and efficacy, and suggesting that collaboration can transform regulators from obstacles into allies in the innovation pipeline.
EVIDENCE
She frames the regulator’s pressures, speed versus accountability, and calls for collaboration, noting the need for an industry-regulator partnership to improve the supply chain [150-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for aligned innovation and regulation to prevent trust erosion and foster partnership is highlighted in a keynote on speed versus governance [S21] and reinforced by the panel’s discussion of regulator-industry dynamics [S1].
MAJOR DISCUSSION POINT
Industry‑regulator partnership
Participant
3 arguments · 162 words per minute · 385 words · 142 seconds
Argument 1
Participants request concrete policy‑level guidance on embedding privacy‑by‑design in AI health systems.
EXPLANATION
The participant asks for detailed, actionable guidance on how data privacy can be incorporated at the policy level within AI‑enabled health solutions, moving beyond generic statements.
EVIDENCE
He explicitly asks the panel to elaborate on how data privacy can be incorporated at a policy level [265-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance on privacy-by-design, data governance and encryption standards is provided in responsible-AI and data-privacy frameworks [S15] and in analyses of encryption best practices [S17][S18].
MAJOR DISCUSSION POINT
Policy guidance on privacy‑by‑design
DISAGREED WITH
Vikalp Sahni, Trevor Mundel, Charlotte Watts
Argument 2
Seeks evidence on geospatial AI models for TB active‑case finding and health‑system optimisation, questioning prospective evaluation.
EXPLANATION
The participant describes work on geospatial AI for tuberculosis active‑case finding and diagnostic network optimisation, and asks how such tools can be evaluated prospectively to generate robust evidence.
EVIDENCE
He outlines the use of geospatial AI for TB case finding, mentions retrospective analysis and plans for prospective study, and asks for thoughts on evaluation [308-313].
MAJOR DISCUSSION POINT
Evidence for geospatial AI in TB
Argument 3
Calls for AI agents that are transparent, error‑free and able to provide calm guidance in stressful maternal‑health scenarios.
EXPLANATION
The participant stresses the need for AI systems in maternal and infant care that are trustworthy, explain their reasoning, avoid errors, and reduce anxiety for new mothers, especially in low‑resource settings.
EVIDENCE
He describes high-anxiety maternal care, the lack of guidance for new mothers, and requests ideas for building reassuring, transparent AI agents for this context [265-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The desire for transparent, trustworthy agents aligns with discussions of intelligent agents that reduce errors and build confidence [S25] and with calls for verifiable, trusted AI layers that explain reasoning [S14].
MAJOR DISCUSSION POINT
Trustworthy AI for maternal health
Agreements
Agreement Points
Human‑in‑the‑loop and ecosystem design are essential; technology alone is insufficient for safe AI health deployment.
Speakers: Vikalp Sahni, Trevor Mundel
Development must keep clinicians in the loop, use multi‑agent architectures with a grounding agent, and rely on a dedicated medical team for oversight. Technology accounts for only ~10 % of AI success; the rest is people, workflows and ecosystem design.
Both speakers stress that AI systems in health must be built around human oversight and supportive ecosystems rather than relying solely on technical capability [336-345][137-141].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with safe-AI-at-scale frameworks that stress guardrails and human-in-the-loop mechanisms, and echoes concerns that technology without supporting infrastructure and human factors is inadequate [S53][S54][S55][S56][S59].
Rigorous evaluation, cautious rollout, and model verifiability are required before large‑scale AI health adoption.
Speakers: Charlotte Watts, Trevor Mundel, Vikalp Sahni
Emphasises the massive evidence gap: the need for rigorous real‑world evaluations, randomized trials, and cost‑effectiveness analyses, especially in low‑ and middle‑income countries. Warns that rapid deployment without thorough evaluation can backfire; a slower, more reflective approach may ultimately accelerate trustworthy adoption. Highlights the challenges of scaling the solution across multiple languages and ensuring model verifiability at large scale.
All three highlight that AI health tools must be validated through robust evidence and careful scaling to avoid errors and maintain trust [213-232][190-197][73-76].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects evidence-based AI policy recommendations calling for systematic evaluation, incident monitoring, and transparent model documentation, as outlined in OECD’s AI Incidents Monitor and AI policy roadmaps [S39][S40][S42][S45][S48][S49][S53].
Strong commitment to data privacy and privacy‑by‑design in AI health systems.
Speakers: Vikalp Sahni, Charlotte Watts, Trevor Mundel, Participant
Commits to complying with HIPAA, India’s DPDP Act and NHA guidelines; pursues certifications and end‑to‑end encryption to protect health data. Stresses that funded research must guarantee participant anonymity, ethical clearance and strict privacy safeguards. Highlights federated learning as a promising technique that keeps raw data local while still improving models, but notes regulatory uncertainty. Requests concrete policy‑level guidance on embedding privacy‑by‑design in AI health systems.
Each speaker underscores the necessity of legal compliance, technical safeguards, and policy guidance to ensure privacy of health data [271-280][287-292][294-301][265-267].
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with privacy-by-design principles highlighted in secure AI communication frameworks and AI incident monitoring that prioritize privacy governance [S56][S40].
Need for stronger collaboration between regulators, industry, and funders to accelerate safe AI innovation.
Speakers: Richard Rukwata, Sindura Ganapathi, Monika Sharma
Regulators face dual pressure: accelerate innovation while remaining accountable for safety; AI can help create neutral, faster‑review applications. Calls for closer industry‑regulator collaboration to turn regulators from perceived bottlenecks into partners. Proposes a joint funding framework with shared standards to avoid fragmented expectations and ensure measurable health impact.
All three call for coordinated action across regulatory bodies, industry players and funding agencies to balance speed, safety and impact [159-168][170-175][150-158][240-256].
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes calls for multi-stakeholder cooperation in AI health governance, as described in WHO roundtables and OECD analyses on building digital public infrastructure [S52][S39][S51][S45].
Emphasis on inclusive, multilingual AI solutions that serve underserved and low‑resource populations.
Speakers: Vikalp Sahni, Charlotte Watts, Participant, Sindura Ganapathi
Provides real‑time safety checks such as multilingual prescription generation and automatic sync with the patient’s PHR. Focuses on primary‑care level integration in low‑ and middle‑income countries, assessing affordability and system integration. Seeks AI agents that are transparent, error‑free and reassuring in high‑anxiety maternal and infant care settings. Advocates for interactive, participant‑driven discussions to ensure diverse voices are heard.
Speakers converge on the need for AI health tools that are linguistically accessible, culturally appropriate and designed for frontline use in resource-constrained settings [33-35][68-69][217-224][265-334].
POLICY CONTEXT (KNOWLEDGE BASE)
Matches criteria from AI for Good Innovation Factory and inclusive AI initiatives that stress affordability, offline capability, and multilingual access for marginalized communities [S36][S37][S38][S54].
Similar Viewpoints
Both see AI as a tool to make scarce health resources and regulatory processes more efficient, thereby addressing funding and speed constraints [318-322][159-168].
Speakers: Trevor Mundel, Richard Rukwata
AI can help prioritize limited resources and improve efficiency in health interventions. AI can create neutral, faster‑review applications to ease regulatory bottlenecks.
Both stress that coordinated funding coupled with robust evaluation standards is essential to generate credible evidence and avoid fragmented efforts [213-232][240-256].
Speakers: Charlotte Watts, Monika Sharma
Need for rigorous real‑world evidence and cost‑effectiveness analyses. Joint funding framework with shared standards to ensure measurable health impact.
Unexpected Consensus
Transparent, error‑free patient‑facing agents for high‑anxiety maternal health scenarios.
Speakers: Participant, Trevor Mundel
Calls for AI agents that are transparent, error‑free and able to provide calm guidance in stressful maternal‑health situations. Envisions next‑generation patient‑facing agents that are completely transparent, never make contraindication errors and inspire full confidence.
A lay participant’s request for trustworthy maternal-health AI aligns directly with Trevor’s vision of fully transparent, safety-guaranteed agents, showing an unexpected convergence between user needs and a funder’s strategic outlook [265-334][354-359].
POLICY CONTEXT (KNOWLEDGE BASE)
Supports the ‘glass-box’ AI transparency agenda and guardrail recommendations for patient-facing tools to build trust and reduce anxiety [S48][S53][S47].
Overall Assessment

The panel shows strong convergence on four core themes: (1) human‑in‑the‑loop and ecosystem‑centric design; (2) the necessity of rigorous, real‑world evaluation before scaling; (3) unwavering commitment to data privacy and privacy‑by‑design; (4) the importance of collaborative frameworks linking regulators, industry and funders. Additionally, there is broad agreement on building inclusive, multilingual AI solutions for underserved populations.

High consensus – most speakers echo each other’s positions across technical, regulatory and ethical dimensions, indicating a shared understanding that responsible AI in health requires coordinated governance, robust evidence, privacy safeguards and inclusive design. This consensus paves the way for joint initiatives, shared standards and funding mechanisms to advance AI‑enabled health care while mitigating risks.

Differences
Different Viewpoints
Pace of AI deployment in health regulation and the balance between speed and safety
Speakers: Richard Rukwata, Trevor Mundel, Charlotte Watts
Regulators face dual pressure: accelerate innovation while remaining accountable for safety; AI can help create neutral, faster‑review applications. Warns that rapid deployment without thorough evaluation can backfire; a slower, more reflective approach may ultimately accelerate trustworthy adoption. Emphasises the massive evidence gap: the need for rigorous real‑world evaluations, randomized trials, and cost‑effectiveness analyses, especially in low‑ and middle‑income countries.
Richard argues that AI can speed up regulatory review while keeping safety, whereas Trevor cautions that moving too fast can erode trust and advocates a slower, reflective rollout; Charlotte adds that rigorous evidence (RCTs, cost-effectiveness) is needed before scaling, supporting a more cautious path. The three speakers share the goal of safe AI-enabled health systems but disagree on how quickly and under what evidentiary standards AI should be introduced. [153-158][190-197][213-222]
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects ongoing debate highlighted in AI policy roadmaps and roundtables that caution against rapid rollout without safeguards, emphasizing a measured pace [S45][S46][S47][S49][S50][S51].
Degree of human oversight versus fully autonomous, transparent AI agents in patient care
Speakers: Vikalp Sahni, Trevor Mundel
Development must keep clinicians in the loop, use multi‑agent architectures with a grounding agent, and rely on a dedicated medical team for oversight. Envisions next‑generation patient‑facing agents that explain their reasoning, avoid contraindication errors and inspire full confidence among users and clinicians.
Vikalp stresses a human-in-the-loop design, employing multiple cooperating agents and a medical team to guard against errors, while Trevor imagines future agents that are fully transparent and error-free, implying minimal need for continuous human supervision. This reflects a split on how much autonomy is acceptable for health AI. [336-345][354-359]
POLICY CONTEXT (KNOWLEDGE BASE)
Tied to discussions on autonomy limits, emphasizing mandatory human oversight and safety controls in AI agents [S53][S57][S58][S59][S60].
Approaches to ensuring data privacy and governance in AI‑enabled health systems
Speakers: Vikalp Sahni, Trevor Mundel, Charlotte Watts, Participant
Commits to complying with HIPAA, India’s DPDP Act and NHA guidelines; pursues certifications and end‑to‑end encryption to protect health data. Highlights federated learning as a promising technique that keeps raw data local while still improving models, but notes the regulatory uncertainty around it. Stresses that funded research must guarantee participant anonymity, ethical clearance and strict privacy safeguards. Participants request concrete policy‑level guidance on embedding privacy‑by‑design in AI health systems.
Vikalp focuses on compliance with existing regulations and technical safeguards (encryption, certifications); Trevor proposes federated learning to keep data local but points out the lack of clear regulatory frameworks; Charlotte emphasizes anonymity and ethical approvals for research; the Participant asks for actionable policy guidance. The speakers agree on the importance of privacy but diverge on the primary mechanism to achieve it. [271-280][294-301][287-292][265-267]
POLICY CONTEXT (KNOWLEDGE BASE)
Linked to frameworks that integrate privacy-by-design with incident monitoring and governance structures for health AI deployments [S56][S40][S41].
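The federated-learning approach discussed above trains a shared model without moving raw patient records: each site computes updates on its own data, and only model weights are sent for central aggregation. A minimal FedAvg-style sketch in Python (all data, model details, and parameters are invented for illustration; real health deployments would add secure aggregation and further privacy safeguards):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's training pass on its private data (plain linear regression)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(weight_list, sizes):
    """Server-side aggregation: average client weights by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weight_list, sizes))

# Two simulated clinics; raw data never leaves each site, only weights do.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(40, 3)), rng.normal(size=(60, 3))
true_w = np.array([1.0, -2.0, 0.5])
y1, y2 = X1 @ true_w, X2 @ true_w

global_w = np.zeros(3)
for _ in range(10):  # communication rounds
    w1 = local_update(global_w, X1, y1)
    w2 = local_update(global_w, X2, y2)
    global_w = federated_average([w1, w2], [len(y1), len(y2)])
# global_w now approximates the shared underlying model without data pooling
```

This is the pattern Trevor alludes to with the ultrasound grant: locally held data improves a common model while the records themselves stay in place.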
Methodology for generating evidence on AI health interventions
Speakers: Charlotte Watts, Trevor Mundel
Emphasises the massive evidence gap: need for rigorous real‑world evaluations, randomized trials, cost‑effectiveness analyses, especially in low‑ and middle‑income countries. Argues that targeted AI‑driven risk‑allocation is essential given limited global‑health funding, and that such tools can maximise the reach of scarce interventions.
Charlotte calls for systematic, rigorous evaluation (RCTs, cost-effectiveness) before wide deployment, while Trevor stresses the immediate utility of AI for resource targeting in constrained funding environments, suggesting a more pragmatic, quicker-to-use approach. Both aim to improve health outcomes but differ on the evidentiary pathway. [213-222][318-322]
POLICY CONTEXT (KNOWLEDGE BASE)
Aligned with evidence-based policymaking roadmaps that call for systematic data collection, use-case libraries, and rigorous impact assessment [S39][S40][S42][S45][S49].
Unexpected Differences
Definition of “doctor” in the opening poll
Speakers: Vikalp Sahni, Sindura Ganapathi
All of us here, we would have visited doctors at some point in time or have been sick. Anyone who has never visited a doctor, please raise hand. So practically everyone. When you said, is there anyone who has not visited a doctor, instinctively I was asking, does veterinary doctor count? Because I’m a veterinarian by background.
Vikalp assumes the term “doctor” refers exclusively to human medical practitioners, while Sindura expands it to include veterinary doctors, revealing an unexpected semantic disagreement early in the session. [1-3][79-80]
Level of autonomy expected from AI agents for high‑anxiety maternal care
Speakers: Participant, Vikalp Sahni
How do we build AI agents that are not only intelligent, but reassuring in very high anxiety environments like maternal and infant care? Development must keep clinicians in the loop, use multi‑agent architectures with a grounding agent, and rely on a dedicated medical team for oversight.
The Participant seeks fully reassuring, possibly autonomous agents for maternal health, whereas Vikalp insists on a human-in-the-loop, multi-agent system, showing an unexpected clash between user expectations for autonomy and the developer’s safety-first design. [329-334][336-345]
POLICY CONTEXT (KNOWLEDGE BASE)
Connects to autonomy debates and transparency requirements for high-stakes health agents, reinforcing the need for controlled autonomy and explainability [S48][S57][S58][S60].
Overall Assessment

The panel displayed moderate disagreement centered on the speed of AI rollout, the extent of human oversight versus autonomous agents, and the optimal strategy for data privacy and evidence generation. While participants shared a common vision of leveraging AI to improve health outcomes, they diverged on implementation pathways—ranging from rapid, AI‑driven regulatory acceleration to cautious, evidence‑based deployment, and from strict human supervision to fully transparent autonomous agents.

The disagreements are substantive but not irreconcilable; they highlight the need for coordinated policy frameworks that balance speed, safety, privacy, and rigorous evaluation. Without addressing these divergent views, scaling AI health solutions may face regulatory push‑back, trust deficits, and fragmented implementation.

Partial Agreements
All three agree that AI should be deployed to improve health outcomes, but Vikalp stresses human oversight, Trevor stresses ecosystem and people factors, and Charlotte stresses rigorous evidence before scaling. The shared goal is safe, effective AI in health, yet the pathways (human‑in‑the‑loop, ecosystem focus, evidence generation) differ. [336-345][137-141][213-222]
Speakers: Vikalp Sahni, Trevor Mundel, Charlotte Watts
Development must keep clinicians in the loop, use multi‑agent architectures with a grounding agent, and rely on a dedicated medical team for oversight. Technology accounts for only ~10 % of AI success; the rest is people, workflows and ecosystem design. Emphasises the massive evidence gap: need for rigorous real‑world evaluations, randomized trials, cost‑effectiveness analyses, especially in low‑ and middle‑income countries.
All three want robust privacy protection, but Vikalp relies on regulatory compliance and encryption, Trevor proposes technical federated learning with pending regulation, and the Participant seeks concrete policy‑by‑design guidance. The goal of privacy is shared, yet the means differ. [271-280][294-301][265-267]
Speakers: Vikalp Sahni, Trevor Mundel, Participant
Commits to complying with HIPAA, India’s DPDP Act and NHA guidelines; pursues certifications and end‑to‑end encryption to protect health data. Highlights federated learning as a promising technique that keeps raw data local while still improving models, but notes the regulatory uncertainty around it. Participants request concrete policy‑level guidance on embedding privacy‑by‑design in AI health systems.
Takeaways
Key takeaways
An AI‑enabled end‑to‑end patient‑care platform can reduce information fragmentation, automate appointment scheduling, provide real‑time safety checks (e.g., allergy alerts), generate multilingual prescriptions, and sync with patients' personal health records.
Human‑in‑the‑loop oversight, ecosystem design, and multi‑agent architectures are critical; technology alone accounts for only ~10% of success.
Regulators face dual pressure to accelerate innovation while remaining accountable for safety; AI can help create neutral, faster‑review applications, but closer industry‑regulator collaboration is needed.
Funders stress the large evidence gap and call for rigorous real‑world evaluations, randomized trials, and cost‑effectiveness analyses, especially in low‑ and middle‑income countries, with shared standards to avoid fragmented expectations.
Data privacy must be addressed through compliance with regulations (HIPAA, India's DPDP Act, NHA guidelines), certifications, end‑to‑end encryption, and emerging techniques such as federated learning, though regulatory clarity is still lacking.
Operational decision‑support tools for frontline settings (e.g., geospatial AI for TB case finding) are of high interest but require prospective evaluation and integration with existing health‑system workflows.
Designing AI agents for high‑anxiety contexts (maternal and infant care) requires transparency, explainability, error‑free guidance, a grounding agent, and continuous human oversight.
Resolutions and action items
Commitment by the funding coalition (Wellcome Trust, Gates Foundation, etc.) to support rigorous real‑world evidence generation for AI in health, with emphasis on LMICs (Charlotte Watts).
Agreement to develop shared evaluation standards and coordinated funding criteria to reduce duplication and fragmentation (Monika Sharma).
The regulatory office (Zimbabwe Medicines Control Authority) will explore AI‑driven screening tools for marketing authorisations in partnership with funders (Richard Rukwata).
EkaCare (Vikalp Sahni) will continue building the platform with a multi‑agent architecture, a grounding agent, and a dedicated medical team for human‑in‑the‑loop oversight.
Panelists expressed intent to collaborate more closely across industry, regulators, and funders to turn regulators from perceived bottlenecks into partners (Richard Rukwata, Sindura Ganapathi).
Future work will explore federated‑learning approaches while seeking appropriate regulatory guidance (Trevor Mundel).
The next AI Summit in Geneva will showcase next‑generation patient‑facing agents that are fully transparent and error‑free (Trevor Mundel).
Unresolved issues
Concrete policy‑level guidance on embedding privacy‑by‑design and handling the data‑privacy questions raised by participants.
How to evaluate and scale geospatial AI models for TB active‑case finding and health‑system optimisation in prospective, real‑world settings.
Specific technical and regulatory pathways for federated learning in health AI applications.
A detailed roadmap for scaling the platform across multiple Indian languages and ensuring model verifiability at large scale.
Design specifications and validation protocols for AI agents that provide reassuring support in high‑anxiety maternal and infant care scenarios.
Suggested compromises
Adopt a slower, more reflective approach to AI deployment to ensure safety and maintain trust (Trevor Mundel).
Balance regulator speed with accountability by using neutral, AI‑driven applications that satisfy both industry and safety requirements (Richard Rukwata).
Use multi‑agent systems with a grounding agent plus human oversight to mitigate the risks of single‑agent failures (Vikalp Sahni).
Align funding standards across agencies to reduce fragmented expectations and duplication while still encouraging innovation (Monika Sharma).
Encourage collaboration between industry and regulators to shift the perception of regulators from bottlenecks to partners (Sindura Ganapathi).
Thought Provoking Comments
Technology is just 10 % of the exercise in applications of AI. The rest is really around people and ecosystems… defining the actual role for humans in the loop is going to be as important as any of the technological advances.
Highlights the often‑overlooked socio‑technical dimension of AI in health, shifting focus from pure tech to ecosystem design and human oversight.
Prompted others to discuss ecosystem challenges, led to deeper conversation about regulator‑industry dynamics and the need for human‑in‑the‑loop safeguards, influencing later remarks by Richard and Vikalp about multi‑agent architectures and regulatory roles.
Speaker: Trevor Mundel
I remember watching a podcast… if all the jobs are taken by AI, regulatory jobs will be the last to remain because people always have somebody to blame… we’ll be the last person there so that they can hang me when something goes wrong.
Uses humor to expose the paradox regulators face: pressure to accelerate innovation while being the ultimate liability, underscoring the tension between speed and safety.
Set the stage for a discussion on balancing rapid AI deployment with accountability, leading to his description of AI‑assisted application review and the call for neutral tools that satisfy both industry and regulators.
Speaker: Richard Rukwata
We’re starting to have more meaningful conversations about what this really means… moving beyond hype or fear to actually how do we navigate this space as a global community.
Signals a turning point from speculative excitement to a call for rigorous, collaborative evaluation, especially in low‑ and middle‑income contexts.
Steered the panel toward concrete topics such as real‑world evidence, cost‑effectiveness, and operational integration, influencing subsequent questions about evidence gaps and the need for rigorous trials.
Speaker: Charlotte Watts
The AI‑based EMR alerted that the patient was allergic to amoxicillin, prompting an immediate medication change to clindamycin.
Provides a concrete, patient‑safety example of AI augmenting clinical decision‑making, illustrating the practical value of the technology.
Grounded the abstract discussion in a real‑world use case, prompting participants to consider safety benefits and prompting later concerns about validation and privacy.
Speaker: Vikalp Sahni (narration)
When you said, ‘is there anyone who has not visited a doctor’, instinctively I was asking, does veterinary doctor count? … In the pet care industry, there is real value and business to be made there.
Introduces a broader perspective on healthcare AI beyond human medicine, expanding the scope of the conversation to include veterinary applications.
Broadened the audience’s view of AI’s market potential and prompted acknowledgment of diverse stakeholder needs, subtly shifting the tone to a more inclusive, entrepreneurial outlook.
Speaker: Sindura Ganapathi
I think that for us, it’s… no compromise on patient data privacy… federated learning… locally private data contributes to model improvement without moving the data.
Raises a cutting‑edge technical solution (federated learning) to the privacy challenge, linking technology to policy and regulatory gaps.
Spurred a brief technical‑policy exchange, leading Charlotte to mention ethical clearance and Vikalp to discuss encryption, deepening the discussion on privacy safeguards.
Speaker: Trevor Mundel
We need to evaluate AI interventions in primary‑care settings, especially for underserved populations, and generate real‑world evidence on cost‑effectiveness and scalability.
Articulates a clear research agenda that ties AI deployment to health system impact, emphasizing evidence generation in low‑resource contexts.
Guided the subsequent Q&A toward operational decision‑support, TB geospatial models, and funding priorities, aligning the panel around measurable outcomes.
Speaker: Charlotte Watts
If you run a single agent with a single prompt, you narrow the worldview. Multi‑agent architecture with a grounding agent ensures safety, especially in maternal health where mental health is involved.
Introduces a nuanced technical design principle (multi‑agent with grounding) to mitigate risks in high‑anxiety health domains.
Provided a concrete answer to the participant’s question on building reassuring AI agents, influencing the conversation toward system design considerations rather than just policy.
Speaker: Vikalp Sahni
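The multi‑agent pattern Vikalp describes can be pictured as a responder agent whose drafts must pass through a separate grounding agent before anything reaches the patient, with escalation to a human clinician when safety rules fire. A toy Python illustration (the agent names, red‑flag list, and messages are hypothetical stand‑ins, not the platform's actual design):

```python
# All names and rules here are hypothetical, for illustration only.
RED_FLAG_TERMS = {"chest pain", "heavy bleeding", "seizure"}

def responder_agent(question: str) -> str:
    # Stand-in for an LLM call that drafts a reassuring answer.
    return f"Here is some general guidance about: {question}"

def grounding_agent(question: str, draft: str):
    """Screen the draft: block answers to red-flag symptoms and escalate."""
    if any(term in question.lower() for term in RED_FLAG_TERMS):
        return False, "escalate_to_clinician"
    return True, draft

def answer(question: str) -> str:
    draft = responder_agent(question)
    approved, result = grounding_agent(question, draft)
    if not approved:
        # Human-in-the-loop fallback: route to a clinician, not the model.
        return "Please contact your care team; a clinician will follow up."
    return result
```

The design choice is the one Vikalp argues for: no single agent with a single prompt decides alone; a narrow, rule-grounded checker sits between the generative agent and the user, and anything it cannot clear goes to a human.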
One fatal accident in self‑driving cars puts the whole enterprise at risk… we may need a slower, reflective approach that ultimately makes us faster.
Uses an analogy from autonomous vehicles to illustrate the high stakes of AI errors in health, advocating for cautious acceleration.
Reinforced the earlier cautionary notes from Charlotte and Richard, shaping the panel’s consensus on balancing speed with safety and influencing the final wishes for future summit focus.
Speaker: Trevor Mundel
I would love to see partners we fund actually present operational learnings next year, moving away from hype to honest conversations about what’s working and what’s not.
Calls for transparency and accountability in funded projects, emphasizing the need for practical, evidence‑based dialogue.
Summarized the panel’s collective desire for concrete outcomes, setting a forward‑looking agenda for the next summit and reinforcing the shift from hype to rigor.
Speaker: Charlotte Watts
Overall Assessment

The discussion pivoted from an enthusiastic product showcase to a nuanced debate about the real‑world integration of AI in health. Key comments—Trevor’s ecosystem reminder, Richard’s regulator paradox, Charlotte’s call for rigorous evidence, and Vikalp’s concrete patient‑safety example—served as turning points that redirected the conversation toward accountability, privacy, and measurable impact. These insights introduced new dimensions (regulatory pressure, multi‑agent design, federated learning, veterinary care) and prompted participants to explore practical challenges and solutions rather than remaining in speculative hype. The cumulative effect was a collective shift toward a balanced vision: rapid, innovative AI deployment tempered by robust human oversight, rigorous evaluation, and cross‑sector collaboration.

Follow-up Questions
Does a veterinary doctor count as a doctor visit in the context of this discussion?
Clarifies the scope of the conversation and explores potential applications of AI in pet care, an emerging market.
Speaker: Sindura Ganapathi
How can regulators reconcile the twin pressures of accelerating innovation while ensuring safety and accountability in the age of AI?
Addresses a core regulatory challenge that impacts the speed of AI adoption in healthcare and the protection of patients.
Speaker: Sindura Ganapathi (to Dr. Richard Rukwata)
How can AI health solutions be built at scale for multiple languages, how can we generate verifiable data for large‑scale models, and who should evaluate these capabilities?
Identifies technical and governance gaps that are essential for widespread, trustworthy deployment of AI in diverse linguistic contexts.
Speaker: Vikalp Sahni
How do we move beyond lip‑service to truly integrate ecosystems and people, and define the role of humans in the loop for AI in health?
Highlights the need for concrete strategies to embed human oversight in AI workflows, crucial for safety and acceptance.
Speaker: Trevor Mundel
What real‑world evidence is needed to assess the health impact, operability, cost‑effectiveness, and system integration of AI interventions, especially in low‑ and middle‑income countries?
Calls for rigorous evaluation frameworks to inform policy, funding, and scaling decisions for AI in health.
Speaker: Charlotte Watts
How can data privacy and privacy‑by‑design be incorporated at the policy level for AI health platforms?
Seeks guidance on regulatory and policy mechanisms to protect sensitive health data, a prerequisite for user trust and compliance.
Speaker: Participant (unidentified)
What emerging technologies (e.g., federated learning, synthetic data) can preserve data privacy while still enabling model improvement?
Explores technical solutions that could allow collaborative AI development without compromising patient confidentiality.
Speaker: Sindura Ganapathi (prompt to panel)
Is operational decision‑support (e.g., geospatial AI for TB case finding and network optimisation) of interest for funding, and how should its evidence be generated?
Seeks clarification on funding priorities for AI tools that target underserved, undetected patient populations.
Speaker: Participant (unidentified)
How can we build AI agents for maternal and infant care that are both intelligent and reassuring in high‑anxiety environments?
Addresses a critical need for safe, trustworthy AI support for vulnerable mothers and newborns, especially in lower‑tier cities.
Speaker: Participant (unidentified)
What would participants like to see at the next AI Summit (e.g., concrete demos, operational insights, collaborations)?
Aims to shape future conference agendas to focus on actionable outcomes rather than hype.
Speaker: Sindura Ganapathi
How should funders balance promoting innovation with upholding safety and minimizing risk in their funding programmes?
Seeks strategies for responsible investment that accelerate beneficial AI while guarding against harm.
Speaker: Sindura Ganapathi (to panel)
What is the optimal pacing for AI deployment in global health to avoid catastrophic failures while meeting urgent needs?
Raises the research need for frameworks that balance speed of innovation with safety and public trust.
Speaker: Trevor Mundel
How can shared standards and coordinated evaluation criteria reduce fragmentation and duplication in AI health funding?
Calls for harmonised guidelines to streamline development, assessment, and implementation of AI solutions across countries.
Speaker: Monika Sharma
How can a multi‑agent architecture with a grounding agent and human‑in‑the‑loop be designed to ensure safety for maternal health AI applications?
Proposes a technical research direction to improve reliability and ethical compliance of AI agents in sensitive health domains.
Speaker: Vikalp Sahni

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.