Transforming Health Systems with AI: From Lab to Last Mile
20 Feb 2026 17:00h - 18:00h
Summary
The session opened with Vikalp Sahni presenting an AI-driven, end-to-end health-care platform that aims to eliminate information fragmentation, simplify history collection, and free doctors from administrative tasks [6-9]. Leveraging in-house AI, the system creates a digital identity (ABHA), aggregates patient-generated records into a personal health record, and uses language-aware prompts to summarize conditions and schedule appointments, as illustrated by the Neeti use-case [14-45]. During a clinic visit, the AI-enhanced EMR provides real-time transcription (EkaScribe), alerts clinicians to drug allergies, and automatically generates multilingual discharge notes that are synced back to the patient’s PHR [46-70]. Sahni acknowledged remaining challenges such as multilingual scaling, data verification, and model evaluation at large scale [73-76].
Sindura Ganapathi then highlighted the personal relevance of the problem, noting the burden of paperwork for caregivers and raising the question of veterinary care relevance [78-88]. A panel of regulators, funders and researchers – including Dr. Richard Rukwata, Prof. Charlotte Watts, Dr. Monika Sharma and Dr. Trevor Mundel – was introduced to discuss AI’s role in health systems [91-118]. Watts emphasized the shift from hype to substantive, global conversations about integrating AI responsibly into low- and middle-income health systems and the need for rigorous real-world evidence [127-132][213-218]. Mundel stressed that technology accounts for only about ten percent of AI success, with the remaining effort focused on ecosystems and defining human-in-the-loop roles [137-141].
Rukwata described the regulator’s dilemma of accelerating innovation while ensuring safety, and outlined collaborations with the Gates Foundation to create neutral, AI-enabled drug-approval applications [159-176]. When asked about data privacy, Sahni explained adherence to HIPAA, India’s DPDP Act, and the use of end-to-end encryption, while Watts added that funded evaluations will enforce strict anonymity and ethical clearances [271-280][287-292]. Mundel further noted emerging privacy-preserving techniques such as federated learning, citing a grant that used local data to improve an ultrasound diagnostic model without central data sharing [294-301]. Sharma highlighted the funders’ commitment to shared standards, reducing fragmentation, and providing a single evaluation framework to ease researchers’ workload [240-255].
Responding to a query on AI agents for high-anxiety maternal care, Sahni advocated a multi-agent architecture with a grounding agent and continuous human oversight to ensure safety and contextual relevance [336-344]. The session concluded with calls for next-year milestones: a transparent, error-free patient-facing agent (Mundel), operational partners demonstrating real-world impact (Watts), tighter regulator-industry collaboration (Rukwata), and preserving the clinician’s final decision (Sharma) [354-366].
Keypoints
Major discussion points
– EkaCare’s end-to-end AI-driven care platform – Vikalp outlined a solution that tackles three core problems: fragmented health information, cumbersome patient history collection, and doctors spending too much time on documentation instead of care. He demonstrated how a digitally-savvy patient (Neeti) creates an ABHA ID, uploads records, interacts with a multilingual AI assistant, schedules an appointment, and how the doctor’s EMR is auto-populated, with AI alerts for drug allergies and automatic translation of notes into the patient’s language [6-9][14-20][30-38][45-51][54-62][66-70].
– Technical and operational challenges of scaling AI in health – The presenters acknowledged hurdles such as multilingual support, data verifiability, model evaluation, and the need for robust governance. Vikalp noted “challenges… how to build these things at scale for multiple languages… who is evaluating these capabilities” [73-77]; later, concerns about hype, regulatory pressure, and the balance between speed and safety were raised [144-149][151-158][159-180].
– Regulators and funders navigating speed vs. safety – Panelists (Richard, Charlotte, Trevor, Monika) discussed the tension between accelerating innovation and ensuring patient safety, the role of regulators as the “last person to be blamed” [159-168]; funding bodies emphasized the need for rigorous real-world evidence, cost-effectiveness, and coordinated standards to avoid fragmented expectations [213-229][240-247].
– Human-in-the-loop, multi-agent architecture and privacy – Trevor stressed that technology is only ~10 % of AI success, with ecosystems and people being crucial [137-141]; Vikalp described adherence to HIPAA/DPDP, encryption, and certification for data privacy [271-280]; later, a multi-agent design with grounding agents and continuous medical oversight was advocated to keep AI safe in high-anxiety contexts like maternal health [336-344][294-301].
– Future aspirations for the AI health community – Participants expressed what they hope to see at the next summit: transparent, explainable patient-facing agents; deeper operational evaluations with funder-partner collaborations; stronger industry-regulator cooperation; and maintaining the human clinician’s final authority [354-359][361-366][363-364].
Overall purpose / goal of the discussion
The session aimed to showcase a concrete AI health-care solution (EkaCare), surface the technical, regulatory, and ethical challenges of deploying AI at scale, and gather perspectives from regulators, funders, and practitioners to shape collaborative, evidence-based pathways for integrating AI into global health systems.
Overall tone and its evolution
– The conversation opened with an informative and demonstrative tone as Vikalp walked through the patient-centric AI workflow.
– It shifted to a collaborative and reflective mood when panelists shared personal anecdotes and acknowledged both hype and genuine concerns.
– A serious, problem-solving tone emerged around regulatory pressures, data privacy, and the need for rigorous evaluation.
– The closing segment turned optimistic and forward-looking, with participants expressing hopes for transparent agents, coordinated funding, and concrete outcomes at the next summit.
Thus, the tone moved from demonstration → reflection → concern → optimism, mirroring the progression from presenting a solution to discussing its broader ecosystem implications.
Speakers
– Charlotte Watts – Areas of expertise: public health, HIV, gender-based violence, epidemiology, mathematics; Role/Title: Executive Director of Solutions, Wellcome Trust; former UK government official and G20 participant [S1]
– Participant – Areas of expertise: not specified; Role/Title: generic audience member (Q&A)
– Monika Sharma – Areas of expertise: biomedical research, science innovation, health-sector funding; Role/Title: Dr. Monika Sharma, Lead, No One Artists India Foundation [S6]
– Vikalp Sahni – Areas of expertise: AI in healthcare, digital health platforms, EMR integration; Role/Title: Founder/Representative, EkaCare (AI for Bharat’s Health) [S7][S8]
– Richard Rukwata – Areas of expertise: pharmaceutical regulation, regulatory harmonization in Africa; Role/Title: Dr. Richard Rukwata, Director General, Medicines Control Authority of Zimbabwe; chief regulator [S9]
– Sindura Ganapathi – Areas of expertise: veterinary medicine, regulatory affairs, conference moderation; Role/Title: Moderator/Host of the session; involved in G20 from the India side [S11]
– Trevor Mundel – Areas of expertise: pharmaceutical development, global health, health-innovation funding; Role/Title: Dr. Trevor Mundel, Rhodes Scholar, former medical doctor and PhD in mathematics, senior leader in global health and innovation funding [S12]
Additional speakers:
(none identified beyond the list above)
1. Introduction & three core challenges – Vikalp Sahni opened by asking the audience if anyone had never visited a doctor, quickly showing that virtually everyone has experience with medical care [1-3]. He then identified three persistent problems in health delivery: (i) fragmented information from appointment-booking to vitals collection, (ii) difficulty for patients to convey a complete medical history, and (iii) excessive clinician time spent on documentation rather than patient interaction [7-9]. Sahni positioned EkaCare’s end-to-end AI-driven platform as a solution that uses in-house artificial intelligence to address all three challenges [10-12].
2. Neeti patient journey – The patient-facing workflow was illustrated through “Neeti”, a 65-year-old digitally-savvy woman with diabetes [13-16]. She first creates an ABHA (Ayushman Bharat Health Account) digital identity [17-18] and uploads photographs of her legacy records into a Personal Health Record (PHR) app, where AI extracts and digitises her history [19-20]. Neeti then asks a multilingual AI assistant to “summarise my health”, receiving a concise overview of her conditions [21-25]. When she reports a fever and a foot wound in her native language, the AI asks targeted follow-up questions (e.g., wound location, swelling, odour) and presents language-appropriate prompts that simplify interaction for a senior user [26-34]. After gathering contact details, the system recognises the case as urgent, suggests available doctors on a specific date, and creates an appointment once Neeti selects a provider [35-45].
3. Doctor’s EMR interaction – At the clinic, the physician views an AI-enhanced electronic medical record that already displays Neeti’s past history and current complaints [46-51]. By activating the audio-based “EkaScribe”, the conversation is transcribed in real time, producing verifiable notes that are automatically copied into the EMR [54-58]. The AI detects Neeti’s allergy to amoxicillin, raises an alert, and the clinician promptly switches the prescription to clindamycin [59-65]. All notes are rendered in the patient’s local language and, with a single click, synchronised back to Neeti’s PHR, creating a new node for future consultations [66-70]. Sahni highlighted that this workflow consolidates fragmented data, provides safety checks, and delivers multilingual documentation [71-72].
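The allergy-alert step in the workflow above amounts to a pre-commit safety check on the prescription. A minimal, hypothetical Python sketch of that kind of check follows; the drug names, cross-reaction table, and function names are invented for illustration and are not EkaCare's actual implementation or clinical guidance.

```python
# Toy cross-reaction map: a documented allergy (key) also flags related drugs.
CROSS_REACTIONS = {
    "amoxicillin": {"amoxicillin", "ampicillin", "penicillin"},
}

def allergy_alerts(prescription, documented_allergies):
    """Return (drug, allergy) pairs that should raise an alert before
    the prescription is committed to the EMR."""
    alerts = []
    for drug in prescription:
        d = drug.strip().lower()
        for allergy in documented_allergies:
            a = allergy.strip().lower()
            if d == a or d in CROSS_REACTIONS.get(a, set()):
                alerts.append((drug, allergy))
    return alerts

# Neeti's case from the demo: amoxicillin is flagged, clindamycin is not.
print(allergy_alerts(["Amoxicillin", "Paracetamol"], ["amoxicillin"]))
# → [('Amoxicillin', 'amoxicillin')]
print(allergy_alerts(["Clindamycin"], ["amoxicillin"]))  # → []
```

A real system would draw on a curated drug-interaction knowledge base rather than a hand-written table, but the control flow (check every drafted prescription against documented allergies before it reaches the record) is the same.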
4. Scalability & evaluation challenges – Sahni acknowledged major hurdles to large-scale deployment: supporting dozens of Indian languages, ensuring model verifiability, and defining who will evaluate performance at scale [73-77].
5. Sindura Ganapathi’s opening remarks – Sindura shifted the tone by first asking whether a veterinary doctor counted as a “doctor” [78-80] and noting the untapped business potential in pet-care [82-84]. She then shared her personal experience as a caregiver for a mother with multiple chronic conditions, confirming that the interfaces described by Vikalp mirrored real-world frustrations with paperwork [85-88]. She further described reading the widely publicised blog post by Anthropic’s CEO, initially feeling “bleak” about the state of AI in health but becoming “energised” by the hustle of the summit, and invited participants to share how the last two-to-three days had made them feel [115-124].
6. Panel introduction – The discussion moved to a panel of regulators, funders and researchers: Dr Richard Rukwata (Zimbabwe Medicines Control Authority), Prof Charlotte Watts (Wellcome Trust), Dr Monika Sharma (No One Artists India Foundation) and Dr Trevor Mundel (pharmaceutical and global-health veteran) [91-118].
7. Panel discussion highlights
Charlotte Watts used the early-day energy of the summit to call for a shift from hype to substantive, global conversations about AI in health, especially in low- and middle-income countries [127-132]. She stressed the need for rigorous real-world evidence (randomised controlled trials, cost-effectiveness analyses, and system-integration assessments) before AI can be scaled [213-229].
Trevor Mundel reiterated that technology alone accounts for only about ten percent of AI success; the remaining effort lies in people, workflows and ecosystem design, and in defining the human-in-the-loop role [137-141]. He later advocated a multi-agent architecture with a grounding agent and continuous medical oversight for high-anxiety maternal and infant care [336-345].
Monika Sharma contributed a personal anecdote about her 6½-year-old child’s view of AI, illustrating how early perceptions shape expectations [140-148]. She also argued for shared standards among funders to reduce fragmentation, avoid duplication, and ensure that AI investments translate into real-world impact [240-247]; she reminded the audience that the clinician’s final decision must remain central [366].
Richard Rukwata described the “dual pressure” of accelerating innovation while remaining the ultimate point of accountability when things go wrong [159-168]. He referred to a podcast where this tension was discussed and cited a collaboration with the Gates Foundation to develop AI-enabled screening tools for marketing authorisations, aiming to create neutral applications that speed review without compromising safety [170-176].
8. Q&A – Data privacy – A participant asked for policy-level guidance on data privacy [265-267]. Vikalp responded that EkaCare follows established frameworks such as HIPAA, India’s DPDP Act and NHA guidelines, pursues relevant certifications, and employs end-to-end encryption [271-280]. Prof Watts added that funded evaluations will enforce strict anonymity, ethical clearances and privacy safeguards [287-292]. Dr Mundel introduced federated learning as a promising technique that keeps raw data local while still improving models, citing a grant-funded ultrasound diagnostic system that used federated contributions without central data sharing [294-301].
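The federated-learning idea Dr Mundel raised can be illustrated with a toy federated-averaging (FedAvg) round: each site improves the shared model using only its own data, and only model weights travel between sites and the coordinator. This is a minimal sketch of the general technique, not the grant-funded ultrasound system itself; all data, names, and parameters here are invented.

```python
# Minimal sketch of federated averaging (FedAvg). Raw records never leave
# a site; only updated weights are sent back and averaged.

def local_update(weights, local_data, lr=0.1):
    """One gradient step on a mean-squared-error objective for a linear
    model, computed entirely from the site's local data."""
    grad = [0.0] * len(weights)
    for x, y in local_data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(local_data)
    return [w - lr * g for w, g in zip(weights, grad)]

def fedavg_round(global_weights, sites):
    """One round: every site trains locally; the coordinator averages
    the returned weights into the next global model."""
    updates = [local_update(global_weights, data) for data in sites]
    return [sum(u[i] for u in updates) / len(updates)
            for i in range(len(global_weights))]

# Two "hospitals" holding toy samples of y = 2*x; the data stays local.
site_a = [([1.0], 2.0), ([2.0], 4.0)]
site_b = [([3.0], 6.0)]
w = [0.0]
for _ in range(50):
    w = fedavg_round(w, [site_a, site_b])
# w[0] converges to 2.0 without either site ever seeing the other's data.
```

Production deployments add secure aggregation, differential privacy, and real model training on top of this loop, but the privacy property discussed in the session comes from the same structure: data stays put, updates move.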
9. Q&A – TB geospatial decision-support – A participant queried the operational use of geospatial AI for active-case-finding and diagnostic-network optimisation for tuberculosis [300-306]. Charlotte Watts responded that geospatial AI can help identify hotspots, optimise resource allocation and must be evaluated for cost-effectiveness and integration with primary-care pathways [307-315]. Trevor Mundel added that funding constraints and the need for robust validation mean such tools should be piloted in partnership with national programmes before wider rollout [316-324].
10. Q&A – Maternal & infant care agents – When asked about AI agents for high-anxiety maternal and infant care, Vikalp reiterated the importance of a grounding agent and a dedicated medical team to keep the system within safe boundaries [336-345]. He noted that a single-prompt agent can narrow the worldview, whereas a collaborative multi-agent design mitigates risk, especially when mental-health considerations are involved [340-344].
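The multi-agent pattern described above (a drafting agent whose output passes through a grounding agent, with escalation to humans) might be sketched as follows. The agents are plain functions standing in for model calls, and every keyword, topic, and phrase is an invented illustration of the design, not EkaCare's system.

```python
# Hypothetical sketch of a grounding-agent architecture: the drafting
# agent proposes a reply, the grounding agent checks it against a small
# allow-list, and anything it cannot ground is escalated to a clinician.

SAFE_TOPICS = {"hydration", "rest", "appointment scheduling"}
ESCALATE_KEYWORDS = {"bleeding", "chest pain", "no fetal movement"}

def drafting_agent(message):
    # Stand-in for a model call that drafts a reassuring reply.
    return f"Thanks for reaching out about: {message}. Here is some guidance."

def grounding_agent(message, draft):
    """Return (reply, needs_human); escalate anything outside safe bounds."""
    text = message.lower()
    if any(k in text for k in ESCALATE_KEYWORDS):
        return ("A clinician will contact you right away.", True)
    if not any(t in text for t in SAFE_TOPICS):
        return ("Let me route this to our medical team.", True)
    return (draft, False)

def respond(message):
    return grounding_agent(message, drafting_agent(message))

reply, needs_human = respond("I feel chest pain since morning")
# needs_human is True: the grounding agent escalates instead of answering.
```

The design point from the discussion is visible even in this toy: no single prompt decides what the patient sees, and the grounding layer defaults to human oversight whenever it cannot vouch for the draft.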
11. Closing wishes for the next AI Summit (Geneva) – Each panelist offered a brief “next-year wish”:
Trevor Mundel: a next-generation, fully transparent patient-facing agent that never makes contraindication errors and inspires complete confidence [354-359].
Charlotte Watts: fund-partner organisations present operational learnings, moving the dialogue from hype to honest assessments of what works and what does not [361-366].
Richard Rukwata: deeper collaboration between industry and regulators to turn the latter from perceived bottlenecks into partners for safe, effective medicines [363-364].
Monika Sharma: unified evaluation standards to reduce fragmentation and ease researchers’ workload [240-247].
12. Core themes & remaining gaps – The session converged on four core themes: (1) AI must be built around a human-in-the-loop, ecosystem-centric design; (2) rigorous, real-world evidence and cost-effectiveness analyses are prerequisites for scaling, especially in low-resource settings; (3) unwavering commitment to data privacy through legal compliance, technical safeguards and emerging techniques such as federated learning; and (4) collaborative frameworks that align regulators, industry and funders to balance speed with safety. Unresolved issues include concrete policy guidance for privacy-by-design, scalable multilingual model verification, prospective evaluation of geospatial AI for tuberculosis case finding, and detailed specifications for reassuring maternal-health agents [73-77][137-141][213-229][271-280][240-247].
All of us here would have visited doctors at some point in time or have been sick. Anyone who has never visited a doctor, please raise your hand. So, practically everyone. So let’s imagine: how was your experience when you visit a doctor? How do you express your symptoms? How does the doctor interact with you, and how does the interaction happen with the medical systems, where EMRs come in? What we are trying to show today, and what we’ve built at EkaCare, is an end-to-end solution that solves three key challenges that we face today. One is the fragmentation of information and care delivery, be it right from taking an appointment to taking vitals. The second is how easily and comfortably you can tell about your history, rather than fumbling through lots of files, and how easily it can be collected and collated.
And last but not least, we would want doctors to spend time with us, not with machines writing prescriptions; rather talking to us, counseling us, connecting with us. So the solution that we have built solves for all these three challenges. Obviously, thanks to the advancement in AI, we have been able to do a lot of this due to the capabilities that we have built in-house. So I’m going to narrate a story. This story is of Neeti. She’s a 65-year-old female, has diabetes, and she wants to now see how she can do the whole end-to-end care delivery. To start off, Neeti is quite digital savvy. She has actually created her ABHA address.
ABHA is the digital identity that the Indian government provides. This digital identity allowed her to collect a lot of her medical records into the app, which is her PHR, or personal health record, app. She has also taken many photographs so that the AI can read through these photographs and collect her medical history in a digital format, so that it can be summarized. Now, Neeti wants to talk to an AI, which is a med assist, or an assistant, for Neeti. She goes ahead, she just picks up a prompt, says summarize my health. What is happening now is all of Neeti’s health is getting summarized. Neeti would know these are the kind of things that have come up from the medical records.
Also, there is a prompt that Neeti would get, which is very, very relevant to the kind of things that Neeti is supposed to talk about. But today Neeti came for a very different purpose. And now, in a local language, she’s talking to the bot. She is expressing that she has a fever and there is a wound in her foot. What the AI would start doing now is try to understand more about this specific condition. Where is the wound? Is it swollen? Is there any kind of smell coming from it? And all of this is happening in the local language that Neeti understands. More importantly, it is not forcing Neeti to only type or talk.
There are these prompts coming in that will ease off the interaction for a 65-year-old female. After collecting more information, such as a mobile number, the AI would identify that this is an important case and it needs a doctor’s intervention. But which doctor’s intervention? At which clinic? On which day? All of this information will now get collected. This will be displayed. So in this case, Neeti is being told that there is availability of these two doctors on the 14th of February. But she can always say, okay, I want to do it on a different day. She picks the doctor. As soon as she picks the doctor, the appointment gets created. Neeti can actually do all of this by typing or by acting on the prompts as well.
So this is how all the information that Neeti wanted to share with the doctor gets collected, gets summarized, and now an appointment is created. The next part of the story is when Neeti visits the doctor’s clinic. And when Neeti visits the doctor’s clinic, this is the doctor’s view, where a doctor is looking at a classical EMR screen. But how this EMR screen is fitted with AI utilities that can help a doctor get a better outcome is what we want to demonstrate. If you see, the current EMR and the current prescription for Neeti are completely empty. There is nothing there. The doctor is looking at the past history of Neeti, as well as the current ailments and current issues that have been listed.
AI also ensured that it not only figures out the important information for the patient; here a doctor is also able to understand and get to know more about Neeti: that there is uncontrolled diabetes. So this is the kind of person that he’s dealing with. But more importantly, it would be very hard for a doctor to start filling in all of this information. During the consultation, the doctor just starts the audio-based EkaScribe, which is now recording the interaction between the doctor and the patient. These interactions get converted into medical notes, and these are verifiable medical notes that doctors would see. Again, this entire thing has come out just from the interaction between the doctor and the patient.
The doctor has to just do copy-to-EMR-pad. As soon as the copy to EMR pad happens, this entire information gets filled in: whatever has been discussed, all the medication that the doctor wanted to prescribe. But here we see that during the consultation, the doctor prescribed amoxicillin, while the patient’s medical history says that she is allergic to amoxicillin. The capable AI-based EMR is now alerting that the patient is allergic to amoxicillin. Without actually going deeper, a doctor can very easily go ahead now and change this medication, to provide for a better outcome as well as to reduce medical errors.
So it’s changed from amoxicillin to clindamycin. As it changed, the prompt also changed. If you look at the information, all filled in: the PDF view for the patient will have the entire medications, everything, created in the local language. There is a translation of all the remarks, advice, everything, in the language that the patient understands. And at the click of a button, this information goes and sits in the patient’s PHR app, creating another node in her medical record that can be used for further consultations and any other ailments. So that is the power of AI and the utilities that we are seeing today: the care process going from fragmented to consolidated, understanding the patient’s entire medical history, and making sure that the doctor’s time is saved while he’s seeing more patients and more medical data is captured.
Today, all of that is possible. But yes, there are challenges. How to build these things at scale for multiple languages; how to generate the data so that your models are verifiable at that large scale; who is evaluating these capabilities that are being built? All of these are challenges that we as developers face. And I’m looking forward to building more and working more in this domain.
I’ll ask you to take a seat. When you said, is there anyone who has not visited a doctor, instinctively I was asking: does a veterinary doctor count? Because I’m a veterinarian by background. And it’s only a half joke, actually. In the pet-care industry, there is real value and business to be made. So just a thought. And on a more serious note: you could change the name of the lady and adjust the age, et cetera, and that could be my mother. I deal with this personally as a caregiver. She has all these conditions; I deal with so many papers. And every interface you mentioned is a leaf out of my personal life. So thank you for thinking about building a solution here.
I will invite my panelists one by one. Please join us on the stage. First, Dr. Richard Rukwata. He is the chief regulator, the director general of the Medicines Control Authority of Zimbabwe. I have very high regard for regulators, because I have been working on our regulatory agency and its streamlining, and I can see how difficult a job that is. And the fact that you have seen this through to ML3 recognition, that’s a wonderful accomplishment. Congratulations. Not an easy job. And also, you are involved in the regulatory harmonization work in Africa, and there are a lot of interesting thoughts you will hopefully be able to share. Next, I would like to invite Professor Charlotte Watts.
The last time we saw each other was at the G20. Hopefully, it brings back memories. Yes. Happy ones. I’d like to keep it that way. She has had an extensive career across health care, HIV, gender-based violence, epidemiology, and mathematics, and deep experience working in the UK government, which was the capacity in which she came to the G20 meetings that I was involved in from the India side. So it’s a pleasure to have you back, Charlotte. And now she’s working at the Wellcome Trust as Executive Director of Solutions. I would love to hear more about how you are thinking about these things. Next, I would like to invite Dr. Monika Sharma. I happened to meet her just now, and she is the lead for the No One Artists India Foundation.
Welcome. Her background is also in both the biomedical field and science innovation, and she has extensive experience putting together funding programs, whether it is the Newton Fund, Germany’s International Research Training Groups (IRTG), or India’s BioPharma Mission program. All of these, I’m sure, will come in very handy in your current role, and we would love to hear your thoughts on the topic today. And last but not least, my dear friend and mentor, Dr. Trevor Mundel. I should say Dr. Dr. Trevor Mundel. He has an unusual background; people who work with him smile when I say unusual. He did a medical degree and then figured he wanted a Ph.D.
in mathematics. He is a Rhodes Scholar and has extensive experience in the pharmaceutical industry, from early research to development, plus a decade-plus of experience in global health. With that, we will get started. I think hopefully you all have mics. For me personally, coming here after having read the blog that went out very famously by the CEO of Anthropic, I came in with a very bleak feeling, to be very honest. It’s kind of depressing: what are we creating? But I have to say, the last two, three days have been energizing. Seeing all the chaos in terms of interactions, people talking to each other, hustle, just hustle, and people excited about the product they are building. It brought back memories of the vegetable market where I grew up, where there is life, right? People are trying to sell something, people are trying to buy something, people are talking. And the reason I mention that as a happy thing is that it’s nice to see so many human beings. That’s what came to my mind, in the backdrop of that blog. So I would just love to hear from you: what was your feeling, as human beings? Is there anything from the last two, three days that you want to particularly share? You have been here.
You saw all of this. What did that make you feel? Because going forward, this feeling of being human beings, I think, will have a currency of its own. Anybody want to volunteer and say something? An open-ended question.
Yes, I’m happy to jump in. So I just got here yesterday, so I actually missed the early start of the week, which I heard was fantastic because you had the youth here, as well as, you know, older people who’ve been in global health, or the global sphere, or the AI world, for longer. That mix, and the drive of that kind of energy, is what I was hearing people tell me about the start of the week. But I’ve just been here since last night and today. And for me, the change is so profound that I was sort of wary, because there’s so much hype, and clearly the risks are being articulated. But what I feel reassured about, having gone to a number of sessions, is that we’re actually starting to have the more meaningful conversations about what this really means. That is getting beyond either the hype-sell or the hyper-fear, to how do we navigate this space, and how do we navigate it as a global community, because this is not one country’s problem to fix. So actually, I’m feeling that this is a really important conference, and we’re starting to get into the nitty-gritty of how on earth we move forward in the best way.
Anybody else want to share? Trevor and then Monika.
Well, Sindura, you know, what I’ve heard frequently in this meeting, and I hear it quite often in the AI application space, is that technology is just 10% of the exercise in applications of AI; the rest is really around people and ecosystems. And as soon as people say that, they then go back to talking about technology. So I am interested in how we do more than just pay lip service to this notion that we really need to think about the ecosystem and the people involved, probably more than the technology itself. And defining the actual role for humans in the loop is going to be, I think, as important as any of the technological advances.
So, Sindura, I don’t have an experience from the summit as such, because I’ve just arrived here. But I want to share a very relevant experience from this morning. While I was coming here, my six-and-a-half-year-old saw AI on my computer and said, where are you going? I said, I have a meeting to attend. He said, AI? And I was like, oh, he’s able to see it. I said, so do you know what this is? He said, yeah, it’s artificial intelligence. And I said, what else do you know about it? He said, yeah, soon there are going to be robots doing everything for us. And I was like, no, but you would still need me. And I thought, oh my god, that’s not a good start to a conversation; everybody is influenced by this. So thank you so much for bringing the human back to this summit. That’s what I thought I’d add: a conversation from my household this morning.
Thank you. Yeah, no. Charlotte, I hope you’re right that there is a lot of hype; now I’m praying for hype, after reading the blog. How many of you have read the blog I referred to, by Anthropic’s CEO? Okay. A few hands. I am not even sure whether I want to urge you to go read it, because it really makes you think. And there were some people in the field who said, I am choosing not to read it because I don’t want to know. So it’s a good thing to hear this, about the human in the loop and the way we responsibly develop, because that’s the theme we want to explore, especially in the context of health.
That, I think, is a good segue. Dr. Richard, I want to start with you. The job of a regulator, I said, is hard. I experienced it firsthand, having worked very closely with our regulatory system, where you have two extreme pressures on a regulator. One: it needs to move fast; everybody wants there to be less regulation, you want to speed up innovation, every day gets counted, and you are held to that metric. That’s one extreme. The other extreme is: boy, if anything goes wrong, who is the first person? Who approved this? Who allowed it to come out? So these are two extreme things, and usually, in a slower cycle, you are able to have some time.
So, how are you thinking about it in the age of both the eye, but in general, in reconciling these two extremes of demands put on
Yes, thank you for that insight. I have to think on my feet here, but you’re quite right. It’s a matter of industry wanting more results from the regulator for their investment, while also wanting the regulator to retain responsibility when things go wrong. I remember watching a very interesting podcast, I think it was called Moonshot, and in one episode they were saying, well, if all the jobs are taken by AI, regulatory jobs will be the last to remain, because people should always have somebody to blame, right? We can’t say, “Oh, nobody is responsible. No, the AI did it.” That would never work. So worst case scenario, I’ll be the last person there, so they can hang me when something goes wrong.
At least I have that job security to think about. But really, with respect to industry’s expectations, we see a lot of potential in AI. We’re currently working, with a grant from the Gates Foundation, on an application for screening applications for marketing authorizations. Those in our industry, the pharma industry, know that this is the biggest source of angst among industrialists: that regulators take too long, and we are seen as an impediment to progress. So we also blame industry; we say, well, you submit incomplete applications and then blame it on us. So we’re hoping that with technology we’ll have, in the near future, applications that can work for both sides of the fence, right?
Neutral applications that don’t necessarily speak to one side, but enable all of us to reach a common position very quickly. This is the beautiful thing about computers, right? They don’t feel any type of way about you. They don’t like you, they don’t dislike you. So we’re hoping that as we work more towards the development of these tools, we’ll see more traction from industry, so that we become a more efficient part of the supply chain from development to market, and not be seen as the barrier to entry in this field.
Thank you.
That’s very helpful. There is a challenge for a regulator when AI speeds up the cycle of innovation and brings new complexities, but AI is also itself a very good tool, whether in summarizing a complex application or in building models that allow a few people to have the same capability as a well-developed pharma company, so everyone is on the same page. Lots of interesting possibilities here, which in India we are also thinking about along those lines. All three of you share one commonality: funding innovation. And as a funder of innovation, you are, in a not too dissimilar way, trying to balance promoting innovation with upholding safety and minimizing risk. So I would like to hear from each of you, because each of you is a different kind of funder, how you are thinking about balancing these two in your funding programs while spurring innovation and speeding it up. You can go in any order; you can thumb wrestle.
Trevor’s pointing to me, but I went first last time, so now I can go. I mean, we fund a range of innovations with the ambition of improving and saving lives.
Increasingly, we are funding AI innovation. On the acceleration front, we look at it this way: every month we don’t have the next-generation malaria vaccine, we see hundreds of thousands of deaths in young children each year. Every year we don’t have enhanced personal coaching in education, we see a generation losing opportunities. So we feel a tremendous pressure from the funder side: how do we speed the availability of, and access to, a tool which looks like it might be a solution to some of those vexing problems? But I think it really behoves us to consider that focusing completely on fast might be slow.
And we have to have this moment of reflection, because what could derail the good application of AI? In the health area, which is so sensitive, even relatively few errors, as on the regulatory front, could occur: an unfortunate outcome for a patient, attributed to a system which was probably misused by the people using it, but which will nevertheless be attributed to AI. And that leads to tremendous deceleration and things not moving ahead. Take the lesson of self-driving vehicles: they may be incredibly good drivers, better than the average human, but one fatal accident puts the whole enterprise at risk.
So I think, from the funder’s perspective, we need a situation where taking a slightly more reflective and slower approach might actually be fast.
Monica?
So I represent the Novo Nordisk Foundation, and we support health for both people and planet. Sitting here with global funders, and with yourself, I think it sends a strong message about how important AI is at the moment with respect to health. As funders, we are trying to address different parts of the ecosystem around health, and having AI bring evidence to this really matters. So I really feel that having a joint approach strengthens the whole ecosystem of AI.
The QR code allows you to look at it, and all the details, I believe, are there. I have not tried it. I would very quickly like to hear from any of you, or all of you: what are you hoping from this? And after this, I would love to make it more interactive. I usually find panels very boring, by the way, whether as a person sitting there or as a person sitting here trying to dispense gyan (wisdom) in two minutes. So get your questions ready; there is still time. Right after this, I hope to see you interacting, sharing your thoughts and questions.
I’ll be coming to you. So, who wants to share their hope for this call?
Yeah, so we’re really excited about this announcement today. You’ll see it’s the big health research and innovation foundations coming together to jointly support what is a major initiative. Essentially, what we want to do here is ask: how do we generate real-world evidence on what it really means, and whether we are really seeing real-world health impacts, once we start to integrate AI into different health systems? We have lots of exciting opportunities showing the efficacy of particular applications, but what this call really wants to support is rigorous evaluations of where AI systems are integrated into clinical decision-making. Our focus is on low- and middle-income countries. We are interested in asking a range of questions.
What does it mean for the health system? Are these new initiatives actually operable? Can they be integrated into what is often quite a big bureaucracy of a health system? What are the costs associated with that? Are these interventions actually cost-effective? In the end, ministries of health have to make decisions based on affordability. So how do we learn more about the costs of this transition? And what are the things we didn’t expect? If we look at the evidence base, we have a lot of exciting evidence of interventions that show promise, but only a relative handful of rigorous randomized controlled trials actually assessing interventions when they are implemented.
So there’s a massive gap there. And we’re also now starting to see, in different contexts, anecdotal evidence of where AI has been integrated but has butted up against the system, where the opportunity isn’t being realized and it’s proving easier said than done. So basically this investment is to try to address that evidence gap. And I just want to call out that Jay Powell is here, along with APHRC, who are key partners in supporting the implementation and, for APHRC, the contextualization of the work that we hope to be supporting in Africa.
Wonderful, thank you. Anything else you want to add, Trevor?
Well, I just want to say thanks to our partners for welcoming the Novo Nordisk Foundation on this initial effort. I hope it’s the start of even more in the future, because the global health world has been plagued by a lack of primary data. We and others have funded a lot of modeling and simulation around global health problems, but you cannot transcend the lack of primary data at the end of the day. And AI is too important for that to be the constraint that impedes implementation.
I thought maybe it would be good to also add how, as we fund this together, we envision it as a commitment to shared standards. While we’ll be working together as part of this call, we are saying that real-world evaluation is not optional; it is the foundation. By aligning together, we are defining what good looks like, so that we reduce the burden on countries and developers who would otherwise face a patchwork of expectations. Secondly, by joining hands we are reducing fragmentation in a rapidly evolving field. Now that we are coordinated, we are getting away from the risk of duplication and protecting the quality we want to see in the applications and products.
And we make sure that the investments we make get into the real world, that they do create an impact, because of the coordination that is part of this whole process. I would also say that when we sit together, it adds seriousness to the ecosystem: what we are doing is not a side experiment. We are creating infrastructure for the long term, something governments have been asking for. And the best part, as a researcher, is that researchers don’t have to navigate three different timelines or three different sets of criteria. We have one agreed, aligned set of criteria.
And no three different deadlines, no three different timelines. It really makes life easy as a researcher, I would say. I hope you get some interesting calls from it.
So, if we have questions, is there a mic going around? I hope there is; if not, I’ll give you mine so I don’t have to answer your questions. Please, there is one hand up there. Okay, please direct your question, including to Vikalp if you have questions for him. Let’s start with the gentleman at the back, and then you’re up next.
Thank you, folks. Very interesting. My question is around data privacy and privacy by design. The lady mentioned three different parameters. Could you elaborate more on how data privacy can be incorporated, at least at a policy level?
Does anyone want to take that question, at least in the context of this call, or in general? How are you handling this?
So I think health data is quite sensitive, and it becomes even more sensitive when it concerns a country, an individual, or institutions such as the police or the military. So it’s a pretty valid question. One of the things we as an organization try to follow is the general guidelines provided by the competent authorities, be it HIPAA for healthcare data or the DPDP Act, India’s data protection law. More importantly, data exchanges such as the NHA in India have also created clear guidelines. Following those guidelines, and getting yourself tested against them, is fundamentally important.
And it has become so sensitive that today a lot of our customers ask whether you hold current, applicable certificates from these privacy authorities and privacy frameworks. So that’s how we address it, and I think it’s a good thing; in health, it is fundamentally critical. And as the technology grows, there are multiple other tools as well, such as end-to-end encryption and so on, which we can use to keep things private.
So there are two aspects to it: one is technological, and the other is policy. There are other sessions entirely focused on people working on the policy side, so I wouldn’t put you on the spot to answer that. But on the technological front, Charlotte or Trevor, could you address some of the approaches, such as model learning without data being exchanged, or synthetic data, the many aspects of this that have been at the forefront? And Charlotte, whatever you want to add.
I just wanted to say that, in terms of the evaluations we want to support through this funding, we are very much expecting anonymity; basically, for those evaluations to adhere to high-quality research standards. That means the kinds of bars, checks and controls you’d expect in any research study on health, and the ethical guidance and clearance procedures you need to adhere to. For us, that’s just an important part of any research we support, and it will be part of this initiative, including issues of privacy and other things.
Do you want to say anything about the technological side, any emerging technology that has been helping preserve data privacy without giving up the innovative learning and improvement of the models?
Yeah, Sindura, you know, for us there’s no compromise on patient data privacy in clinical trials, as Charlotte has mentioned here. But AI does raise a lot of other issues that go almost beyond that. For instance, there are the various models of federated learning that people have introduced, where you can keep data locally private but still contribute to the evolution of a model, which improves because it has access to a very diverse data source. Now, has that actually been regulated? We had an example of one of our grantees who produced a very good system for using ultrasound to diagnose certain chest diseases, and it was based on a federated contribution from different groups that kept their own data local and private but contributed to the model.
And, you know, that hasn’t really been tested, and all of the policies around whether that constitutes a disclosure, and whether it is acceptable now in the age of AI, are still open. So I think it’s something that we encourage, with the right framework.
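The federated-learning pattern Mundel describes, where each site keeps its raw data local and shares only model updates, can be sketched roughly as follows. This is an illustrative toy only, a one-parameter linear model trained with the federated-averaging (FedAvg) rule; the clinic data, learning rate, and round count are invented and bear no relation to the grantee’s actual ultrasound system.

```python
# Minimal FedAvg sketch (hypothetical): sites train locally on private
# data and share only model weights, never the patient records themselves.

def local_update(weights, local_data, lr=0.1):
    """One gradient-descent step on a site's private data.

    Toy linear model y ~ w * x with squared-error loss; the raw
    (x, y) pairs never leave the site.
    """
    grad = 0.0
    for x, y in local_data:
        grad += 2 * (weights * x - y) * x
    return weights - lr * grad / len(local_data)

def federated_round(global_weights, sites):
    """Each site updates locally; the server averages the weights."""
    updates = [local_update(global_weights, data) for data in sites]
    return sum(updates) / len(updates)

# Three hypothetical clinics, each holding private (x, y) measurements
# drawn from roughly the same underlying relationship y ~ 2x.
sites = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.5, 3.0), (2.5, 5.2)],
    [(0.5, 1.1), (3.0, 6.1)],
]

w = 0.0
for _ in range(200):
    w = federated_round(w, sites)
print(round(w, 2))  # converges to a slope near 2.0
```

Only `w` crosses site boundaries here; in real deployments the updates themselves can still leak information, which is why the regulatory question Mundel raises (is this a disclosure?) remains open.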
Thank you. Do you have the mic? Okay. Then if you have another mic, you can take it to the gentleman, madam, and then after you.
My question is for Professor Watts. You mentioned clinical decision support. The context in an Indian healthcare setting, as you’re well aware, is that the majority of our health care is run at the front line, so there’s also an element of operational decision support. There’s a bunch of geospatial AI models that we are working on with Google for geospatial inferencing in the tuberculosis space, mostly active case finding and diagnostic network optimization. My question is, from an evidence perspective: we are obviously doing some retrospective analysis, and we plan to follow it up with a prospective analysis, although it’s a single user. So I’m wondering if you have any thoughts on that.
But would this be of interest, and what is your level of inclination towards operational decision support? I’m a physician myself and a medical informaticist with a PhD. I can tell you one thing for sure: the patients who come into the system are for the most part taken care of, but it’s the silent patients out there, undetected in the community, who are missed. So what is your inclination in this research grant towards such solutions?
It’s a wonderful question, because essentially I come from public health. Our collective interest is in focusing our evaluations and generating evidence where there is the greatest opportunity to improve health and to strengthen systems, and some of that might be exactly outreach and improved care for the underserved. We’re not going to say “this works and fits, and this doesn’t”; ultimately we are interested in how it integrates within the system. In the call we mention the importance of looking at interventions at the primary care level, not only at tertiary care. The things that will resonate with our interest are really: are there areas where the opportunity is big enough that it merits assessment, to say whether this is really translating into tangible health impacts, whether the return is actually affordable, and whether it could be scaled? So that issue of how it connects with the system is an important part of the question we’re interested in.
Now, I do think it’s a very important question, because you’re probably all aware of the funding constraints we now face in the global health space, even for some of the exciting new technologies coming along, whether at the level of the Global Fund or of Gavi, both of which have not quite met the standard we would like in their replenishments. So there is simply less funding available for those critical commodities that could be life-changing. And when we get a TB vaccine, which we hope we might have in, say, three years, how are we going to afford to actually get it to the people who need it?
So it’s exactly the kind of targeting you’re talking about, risk targeting, that can make all the difference: taking the lesser amount we can now afford and putting it where the need is greatest. That matching, which the AI systems and the geospatial targeting you describe can do, is exactly the solution we need to promote and to understand how it works.
So, the person who has the mic, and then you can hand it on after you ask your question.
It has been a great session. How do we go about building AI agents that are not only intelligent but reassuring in very high-anxiety environments like maternal and infant care? I would love to hear your thoughts, because we’re building something in the same space.
When you say high anxiety, just so that I understand…
High anxiety in maternal and infant care. Even myself, as a new mother, I feel there are a lot of open areas where the mother doesn’t know what to do. It’s an open field, and the support system of pediatricians, gynaecologists and mothers is very thin when you go down to tier-two and tier-three cities. How do we go about building that? I would love to get some thoughts.
Take it.
So I think one of the things that we have done while building a lot of these agentic pipelines for doctors and for users is to have a human in the loop while the development is happening; that is extremely important, and it’s what Trevor also mentioned, because today, where this can go and where it can lead is not something you can fully control. So there are systems specifically designed where anonymized, de-identified conversations are practically distilled to see whether the agents are working together in tandem. The second thing, which is more technical, is that the models are quite capable, but when you run them as a single agent with a single goal and a single prompt, that practically, at times, narrows down the whole worldview.
But if you are running multiple agents collaborating together, where there is a grounding agent whose job is to make sure that the other agents are not going beyond the boundaries, I think that is fundamental in healthcare. A single agent with a single prompt is what we should avoid, because it’s quite a deep workflow, especially when we look at maternal health, where mental health comes into play. It’s fundamentally important that we follow good technical principles in creating a multi-agent architecture but, more importantly, have a human in the loop. Because we, as a company, haven’t been able to find a way to get out of that.
That’s why we have a strong, and still growing, 10-member medical team of doctors working with the technology.
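The multi-agent pattern Sahni outlines, a drafting agent bounded by a grounding agent, with escalation to the human medical team, can be sketched like this. Everything below is hypothetical: the topic rules, function names and canned responses are invented stand-ins for real LLM-backed agents and clinical review policies.

```python
# Hypothetical sketch: a drafting agent proposes a reply, a grounding
# agent checks it against explicit boundaries, and anything out of
# scope is escalated to a human clinician.

FORBIDDEN_TOPICS = {"dosage", "prescription", "diagnosis"}  # illustrative rules

def drafting_agent(question: str) -> str:
    # Stand-in for an LLM call that drafts a reassuring answer.
    return f"Here is some general guidance about: {question}"

def grounding_agent(question: str, draft: str) -> bool:
    # Approves the draft only if the exchange stays inside the allowed scope.
    text = (question + " " + draft).lower()
    return not any(topic in text for topic in FORBIDDEN_TOPICS)

def respond(question: str) -> str:
    draft = drafting_agent(question)
    if grounding_agent(question, draft):
        return draft
    # Human in the loop: out-of-scope questions go to the medical team.
    return "ESCALATED: a member of the medical team will reply shortly."

print(respond("how often should I burp my baby?"))
print(respond("what dosage of paracetamol is safe for my infant?"))
```

The point of the sketch is the shape, not the string matching: the grounding agent is a separate component with veto power, and the fallback path is always a person, which mirrors the “no single agent, single prompt” principle described above.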
Thank you. Unfortunately, I have been told we are out of time, but the speakers will be available; please come up to them. One very quick thing before we go: anything you want to share about what you would like to see next year, when we come back to the AI Summit? I just heard that it is being hosted in Geneva, so we are all showing up there. We have all these aspirations. What would it look like when we show up there and say, okay, this year we did something together? Anything that comes to mind?
You know, I’d love to see the next iteration of Vikalp’s patient-facing agent: an agent that could guide you along your health pathway, that would be completely transparent, so that I would actually understand why it made its decisions, and in which I would have 100% confidence that, in that anxiety-provoking situation, it never made an error in guidance or drug contraindications. It was always correct in those things, and I wouldn’t have to be concerned about that. That’s the next iteration I’d love to see next year.
Next year, maybe.
And what I would like to have next year is instead of all of us as funders sitting up here, I would like to see some of the partners that we’re funding who are doing work to really understand what this looks like operationally and to have really honest conversations about what’s working and what’s not working. And so we’re moving away from the hype to really actually starting to get into the nitty -gritty of what this could be and can be.
Okay, so quickly: I would like to see more collaboration between industry and regulators, because ultimately we’re on the same side. We want the same thing: better-quality, safe and effective medicines for all our people. So development in that area would be very exciting.
Final word to you.
I think I would still love to see that, no matter how much evidence we generate from AI, no matter what we do, we still have that last word from the doctor who is sitting there. Never forget the human angle while we navigate the AI space. That’s what I always want. Thank you so much.
Yes, thank you so much. Next time we meet, I hope we all feel as optimistic as we do today, and then some. Thank you so much for attending, and thank you, speakers. We have a souvenir for you from the India side for the session. Thank you so much.
“She first creates an ABHA (Ayushman Bharat Health Account) digital identity”
The knowledge base confirms that ABHA is the digital health identity issued by the Indian government as part of the Ayushman Bharat Digital Mission, with hundreds of millions of IDs created [S7] and described as the government-provided digital identity for health records [S10] and [S104].
“Sahni acknowledged major hurdles to large‑scale deployment: supporting dozens of Indian languages, ensuring model verifiability, and defining who will evaluate performance at scale”
Sahni’s identified challenges match the technical issues highlighted in the knowledge base, which cites the need to build AI systems that work across multiple Indian languages, generate verifiable data, and determine who evaluates AI capabilities in healthcare [S1].
“ABHA is the digital identity that India government provides”
Additional context from the knowledge base explains that ABHA IDs are linked to a federated health record architecture and are a core component of the Ayushman Bharat Digital Mission, enabling health records to move across providers [S42] and [S104].
The panel shows strong convergence on four core themes: (1) human‑in‑the‑loop and ecosystem‑centric design; (2) the necessity of rigorous, real‑world evaluation before scaling; (3) unwavering commitment to data privacy and privacy‑by‑design; (4) the importance of collaborative frameworks linking regulators, industry and funders. Additionally, there is broad agreement on building inclusive, multilingual AI solutions for underserved populations.
High consensus: most speakers echo each other’s positions across technical, regulatory and ethical dimensions, indicating a shared understanding that responsible AI in health requires coordinated governance, robust evidence, privacy safeguards and inclusive design. This consensus paves the way for joint initiatives, shared standards and funding mechanisms to advance AI‑enabled health care while mitigating risks.
The panel displayed moderate disagreement centered on the speed of AI rollout, the extent of human oversight versus autonomous agents, and the optimal strategy for data privacy and evidence generation. While participants shared a common vision of leveraging AI to improve health outcomes, they diverged on implementation pathways—ranging from rapid, AI‑driven regulatory acceleration to cautious, evidence‑based deployment, and from strict human supervision to fully transparent autonomous agents.
The disagreements are substantive but not irreconcilable; they highlight the need for coordinated policy frameworks that balance speed, safety, privacy, and rigorous evaluation. Without addressing these divergent views, scaling AI health solutions may face regulatory push‑back, trust deficits, and fragmented implementation.
The discussion pivoted from an enthusiastic product showcase to a nuanced debate about the real‑world integration of AI in health. Key comments—Trevor’s ecosystem reminder, Richard’s regulator paradox, Charlotte’s call for rigorous evidence, and Vikalp’s concrete patient‑safety example—served as turning points that redirected the conversation toward accountability, privacy, and measurable impact. These insights introduced new dimensions (regulatory pressure, multi‑agent design, federated learning, veterinary care) and prompted participants to explore practical challenges and solutions rather than remaining in speculative hype. The cumulative effect was a collective shift toward a balanced vision: rapid, innovative AI deployment tempered by robust human oversight, rigorous evaluation, and cross‑sector collaboration.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.